Feb 23 01:36:10 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Feb 23 01:36:10 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb 23 01:36:10 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Feb 23 01:36:10 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 23 01:36:10 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 23 01:36:10 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 23 01:36:10 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 23 01:36:10 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 23 01:36:10 localhost kernel: signal: max sigframe size: 1776
Feb 23 01:36:10 localhost kernel: BIOS-provided physical RAM map:
Feb 23 01:36:10 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 23 01:36:10 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 23 01:36:10 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 23 01:36:10 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb 23 01:36:10 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb 23 01:36:10 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 23 01:36:10 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 23 01:36:10 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Feb 23 01:36:10 localhost kernel: NX (Execute Disable) protection: active
Feb 23 01:36:10 localhost kernel: SMBIOS 2.8 present.
Feb 23 01:36:10 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 23 01:36:10 localhost kernel: Hypervisor detected: KVM
Feb 23 01:36:10 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 23 01:36:10 localhost kernel: kvm-clock: using sched offset of 3188538578 cycles
Feb 23 01:36:10 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 23 01:36:10 localhost kernel: tsc: Detected 2799.998 MHz processor
Feb 23 01:36:10 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Feb 23 01:36:10 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 23 01:36:10 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb 23 01:36:10 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb 23 01:36:10 localhost kernel: Using GB pages for direct mapping
Feb 23 01:36:10 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Feb 23 01:36:10 localhost kernel: ACPI: Early table checksum verification disabled
Feb 23 01:36:10 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 23 01:36:10 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 23 01:36:10 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 23 01:36:10 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 23 01:36:10 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb 23 01:36:10 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 23 01:36:10 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 23 01:36:10 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb 23 01:36:10 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb 23 01:36:10 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb 23 01:36:10 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb 23 01:36:10 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb 23 01:36:10 localhost kernel: No NUMA configuration found
Feb 23 01:36:10 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Feb 23 01:36:10 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff]
Feb 23 01:36:10 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Feb 23 01:36:10 localhost kernel: Zone ranges:
Feb 23 01:36:10 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 23 01:36:10 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 23 01:36:10 localhost kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Feb 23 01:36:10 localhost kernel: Device empty
Feb 23 01:36:10 localhost kernel: Movable zone start for each node
Feb 23 01:36:10 localhost kernel: Early memory node ranges
Feb 23 01:36:10 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 23 01:36:10 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb 23 01:36:10 localhost kernel: node 0: [mem 0x0000000100000000-0x000000043fffffff]
Feb 23 01:36:10 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Feb 23 01:36:10 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 23 01:36:10 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 23 01:36:10 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb 23 01:36:10 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Feb 23 01:36:10 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 23 01:36:10 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 23 01:36:10 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 23 01:36:10 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 23 01:36:10 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 23 01:36:10 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 23 01:36:10 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 23 01:36:10 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 23 01:36:10 localhost kernel: TSC deadline timer available
Feb 23 01:36:10 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Feb 23 01:36:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 23 01:36:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 23 01:36:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 23 01:36:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 23 01:36:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb 23 01:36:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb 23 01:36:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb 23 01:36:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb 23 01:36:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 23 01:36:10 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb 23 01:36:10 localhost kernel: Booting paravirtualized kernel on KVM
Feb 23 01:36:10 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 23 01:36:10 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb 23 01:36:10 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Feb 23 01:36:10 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 23 01:36:10 localhost kernel: Fallback order for Node 0: 0
Feb 23 01:36:10 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 4128475
Feb 23 01:36:10 localhost kernel: Policy zone: Normal
Feb 23 01:36:10 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Feb 23 01:36:10 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
Feb 23 01:36:10 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 23 01:36:10 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 23 01:36:10 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 23 01:36:10 localhost kernel: software IO TLB: area num 8.
Feb 23 01:36:10 localhost kernel: Memory: 2873456K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741260K reserved, 0K cma-reserved)
Feb 23 01:36:10 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Feb 23 01:36:10 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb 23 01:36:10 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Feb 23 01:36:10 localhost kernel: ftrace: allocated 176 pages with 3 groups
Feb 23 01:36:10 localhost kernel: Dynamic Preempt: voluntary
Feb 23 01:36:10 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 23 01:36:10 localhost kernel: rcu: #011RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb 23 01:36:10 localhost kernel: #011Trampoline variant of Tasks RCU enabled.
Feb 23 01:36:10 localhost kernel: #011Rude variant of Tasks RCU enabled.
Feb 23 01:36:10 localhost kernel: #011Tracing variant of Tasks RCU enabled.
Feb 23 01:36:10 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 23 01:36:10 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb 23 01:36:10 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb 23 01:36:10 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 23 01:36:10 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb 23 01:36:10 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Feb 23 01:36:10 localhost kernel: Console: colour VGA+ 80x25
Feb 23 01:36:10 localhost kernel: printk: console [tty0] enabled
Feb 23 01:36:10 localhost kernel: printk: console [ttyS0] enabled
Feb 23 01:36:10 localhost kernel: ACPI: Core revision 20211217
Feb 23 01:36:10 localhost kernel: APIC: Switch to symmetric I/O mode setup
Feb 23 01:36:10 localhost kernel: x2apic enabled
Feb 23 01:36:10 localhost kernel: Switched APIC routing to physical x2apic.
Feb 23 01:36:10 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 23 01:36:10 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Feb 23 01:36:10 localhost kernel: pid_max: default: 32768 minimum: 301
Feb 23 01:36:10 localhost kernel: LSM: Security Framework initializing
Feb 23 01:36:10 localhost kernel: Yama: becoming mindful.
Feb 23 01:36:10 localhost kernel: SELinux: Initializing.
Feb 23 01:36:10 localhost kernel: LSM support for eBPF active
Feb 23 01:36:10 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 23 01:36:10 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 23 01:36:10 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 23 01:36:10 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 23 01:36:10 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 23 01:36:10 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 23 01:36:10 localhost kernel: Spectre V2 : Mitigation: Retpolines
Feb 23 01:36:10 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 23 01:36:10 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 23 01:36:10 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 23 01:36:10 localhost kernel: RETBleed: Mitigation: untrained return thunk
Feb 23 01:36:10 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 23 01:36:10 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 23 01:36:10 localhost kernel: Freeing SMP alternatives memory: 36K
Feb 23 01:36:10 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 23 01:36:10 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Feb 23 01:36:10 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Feb 23 01:36:10 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Feb 23 01:36:10 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Feb 23 01:36:10 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 23 01:36:10 localhost kernel: ... version: 0
Feb 23 01:36:10 localhost kernel: ... bit width: 48
Feb 23 01:36:10 localhost kernel: ... generic registers: 6
Feb 23 01:36:10 localhost kernel: ... value mask: 0000ffffffffffff
Feb 23 01:36:10 localhost kernel: ... max period: 00007fffffffffff
Feb 23 01:36:10 localhost kernel: ... fixed-purpose events: 0
Feb 23 01:36:10 localhost kernel: ... event mask: 000000000000003f
Feb 23 01:36:10 localhost kernel: rcu: Hierarchical SRCU implementation.
Feb 23 01:36:10 localhost kernel: rcu: #011Max phase no-delay instances is 400.
Feb 23 01:36:10 localhost kernel: smp: Bringing up secondary CPUs ...
Feb 23 01:36:10 localhost kernel: x86: Booting SMP configuration:
Feb 23 01:36:10 localhost kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7
Feb 23 01:36:10 localhost kernel: smp: Brought up 1 node, 8 CPUs
Feb 23 01:36:10 localhost kernel: smpboot: Max logical packages: 8
Feb 23 01:36:10 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Feb 23 01:36:10 localhost kernel: node 0 deferred pages initialised in 25ms
Feb 23 01:36:10 localhost kernel: devtmpfs: initialized
Feb 23 01:36:10 localhost kernel: x86/mm: Memory block size: 128MB
Feb 23 01:36:10 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 23 01:36:10 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Feb 23 01:36:10 localhost kernel: pinctrl core: initialized pinctrl subsystem
Feb 23 01:36:10 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 23 01:36:10 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Feb 23 01:36:10 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 23 01:36:10 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 23 01:36:10 localhost kernel: audit: initializing netlink subsys (disabled)
Feb 23 01:36:10 localhost kernel: audit: type=2000 audit(1771828569.188:1): state=initialized audit_enabled=0 res=1
Feb 23 01:36:10 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb 23 01:36:10 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 23 01:36:10 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 23 01:36:10 localhost kernel: cpuidle: using governor menu
Feb 23 01:36:10 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Feb 23 01:36:10 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 23 01:36:10 localhost kernel: PCI: Using configuration type 1 for base access
Feb 23 01:36:10 localhost kernel: PCI: Using configuration type 1 for extended access
Feb 23 01:36:10 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 23 01:36:10 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Feb 23 01:36:10 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 23 01:36:10 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 23 01:36:10 localhost kernel: cryptd: max_cpu_qlen set to 1000
Feb 23 01:36:10 localhost kernel: ACPI: Added _OSI(Module Device)
Feb 23 01:36:10 localhost kernel: ACPI: Added _OSI(Processor Device)
Feb 23 01:36:10 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 23 01:36:10 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 23 01:36:10 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 23 01:36:10 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 23 01:36:10 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 23 01:36:10 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 23 01:36:10 localhost kernel: ACPI: Interpreter enabled
Feb 23 01:36:10 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb 23 01:36:10 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Feb 23 01:36:10 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 23 01:36:10 localhost kernel: PCI: Using E820 reservations for host bridge windows
Feb 23 01:36:10 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 23 01:36:10 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 23 01:36:10 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [3] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [4] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [5] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [6] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [7] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [8] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [9] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [10] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [11] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [12] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [13] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [14] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [15] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [16] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [17] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [18] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [19] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [20] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [21] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [22] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [23] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [24] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [25] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [26] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [27] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [28] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [29] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [30] registered
Feb 23 01:36:10 localhost kernel: acpiphp: Slot [31] registered
Feb 23 01:36:10 localhost kernel: PCI host bridge to bus 0000:00
Feb 23 01:36:10 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 23 01:36:10 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 23 01:36:10 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 23 01:36:10 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 23 01:36:10 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Feb 23 01:36:10 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 23 01:36:10 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 23 01:36:10 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 23 01:36:10 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 23 01:36:10 localhost kernel: pci 0000:00:01.1: reg 0x20: [io 0xc140-0xc14f]
Feb 23 01:36:10 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 23 01:36:10 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 23 01:36:10 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 23 01:36:10 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 23 01:36:10 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 23 01:36:10 localhost kernel: pci 0000:00:01.2: reg 0x20: [io 0xc100-0xc11f]
Feb 23 01:36:10 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 23 01:36:10 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 23 01:36:10 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 23 01:36:10 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 23 01:36:10 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 23 01:36:10 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 23 01:36:10 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Feb 23 01:36:10 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Feb 23 01:36:10 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 23 01:36:10 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 23 01:36:10 localhost kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Feb 23 01:36:10 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Feb 23 01:36:10 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 23 01:36:10 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Feb 23 01:36:10 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 23 01:36:10 localhost kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 23 01:36:10 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Feb 23 01:36:10 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 23 01:36:10 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Feb 23 01:36:10 localhost kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Feb 23 01:36:10 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 23 01:36:10 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Feb 23 01:36:10 localhost kernel: pci 0000:00:06.0: reg 0x10: [io 0xc120-0xc13f]
Feb 23 01:36:10 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 23 01:36:10 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 23 01:36:10 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 23 01:36:10 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 23 01:36:10 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 23 01:36:10 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 23 01:36:10 localhost kernel: iommu: Default domain type: Translated
Feb 23 01:36:10 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 23 01:36:10 localhost kernel: SCSI subsystem initialized
Feb 23 01:36:10 localhost kernel: ACPI: bus type USB registered
Feb 23 01:36:10 localhost kernel: usbcore: registered new interface driver usbfs
Feb 23 01:36:10 localhost kernel: usbcore: registered new interface driver hub
Feb 23 01:36:10 localhost kernel: usbcore: registered new device driver usb
Feb 23 01:36:10 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 23 01:36:10 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 23 01:36:10 localhost kernel: PTP clock support registered
Feb 23 01:36:10 localhost kernel: EDAC MC: Ver: 3.0.0
Feb 23 01:36:10 localhost kernel: NetLabel: Initializing
Feb 23 01:36:10 localhost kernel: NetLabel: domain hash size = 128
Feb 23 01:36:10 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Feb 23 01:36:10 localhost kernel: NetLabel: unlabeled traffic allowed by default
Feb 23 01:36:10 localhost kernel: PCI: Using ACPI for IRQ routing
Feb 23 01:36:10 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 23 01:36:10 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 23 01:36:10 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 23 01:36:10 localhost kernel: vgaarb: loaded
Feb 23 01:36:10 localhost kernel: clocksource: Switched to clocksource kvm-clock
Feb 23 01:36:10 localhost kernel: VFS: Disk quotas dquot_6.6.0
Feb 23 01:36:10 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 23 01:36:10 localhost kernel: pnp: PnP ACPI init
Feb 23 01:36:10 localhost kernel: pnp: PnP ACPI: found 5 devices
Feb 23 01:36:10 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 23 01:36:10 localhost kernel: NET: Registered PF_INET protocol family
Feb 23 01:36:10 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 23 01:36:10 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Feb 23 01:36:10 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 23 01:36:10 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 23 01:36:10 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 23 01:36:10 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Feb 23 01:36:10 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Feb 23 01:36:10 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Feb 23 01:36:10 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Feb 23 01:36:10 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 23 01:36:10 localhost kernel: NET: Registered PF_XDP protocol family
Feb 23 01:36:10 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 23 01:36:10 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 23 01:36:10 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 23 01:36:10 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 23 01:36:10 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Feb 23 01:36:10 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 23 01:36:10 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 23 01:36:10 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 23 01:36:10 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 29836 usecs
Feb 23 01:36:10 localhost kernel: PCI: CLS 0 bytes, default 64
Feb 23 01:36:10 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 23 01:36:10 localhost kernel: Trying to unpack rootfs image as initramfs...
Feb 23 01:36:10 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Feb 23 01:36:10 localhost kernel: ACPI: bus type thunderbolt registered
Feb 23 01:36:10 localhost kernel: Initialise system trusted keyrings
Feb 23 01:36:10 localhost kernel: Key type blacklist registered
Feb 23 01:36:10 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Feb 23 01:36:10 localhost kernel: zbud: loaded
Feb 23 01:36:10 localhost kernel: integrity: Platform Keyring initialized
Feb 23 01:36:10 localhost kernel: NET: Registered PF_ALG protocol family
Feb 23 01:36:10 localhost kernel: xor: automatically using best checksumming function avx
Feb 23 01:36:10 localhost kernel: Key type asymmetric registered
Feb 23 01:36:10 localhost kernel: Asymmetric key parser 'x509' registered
Feb 23 01:36:10 localhost kernel: Running certificate verification selftests
Feb 23 01:36:10 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 23 01:36:10 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb 23 01:36:10 localhost kernel: io scheduler mq-deadline registered
Feb 23 01:36:10 localhost kernel: io scheduler kyber registered
Feb 23 01:36:10 localhost kernel: io scheduler bfq registered
Feb 23 01:36:10 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 23 01:36:10 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 23 01:36:10 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 23 01:36:10 localhost kernel: ACPI: button: Power Button [PWRF]
Feb 23 01:36:10 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 23 01:36:10 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 23 01:36:10 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 23 01:36:10 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 23 01:36:10 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 23 01:36:10 localhost kernel: Non-volatile memory driver v1.3
Feb 23 01:36:10 localhost kernel: rdac: device handler registered
Feb 23 01:36:10 localhost kernel: hp_sw: device handler registered
Feb 23 01:36:10 localhost kernel: emc: device handler registered
Feb 23 01:36:10 localhost kernel: alua: device handler registered
Feb 23 01:36:10 localhost kernel: libphy: Fixed MDIO Bus: probed
Feb 23 01:36:10 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Feb 23 01:36:10 localhost kernel: ehci-pci: EHCI PCI platform driver
Feb 23 01:36:10 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Feb 23 01:36:10 localhost kernel: ohci-pci: OHCI PCI platform driver
Feb 23 01:36:10 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Feb 23 01:36:10 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 23 01:36:10 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 23 01:36:10 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 23 01:36:10 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb 23 01:36:10 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb 23 01:36:10 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb 23 01:36:10 localhost kernel: usb usb1: Product: UHCI Host Controller
Feb 23 01:36:10 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Feb 23 01:36:10 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb 23 01:36:10 localhost kernel: hub 1-0:1.0: USB hub found
Feb 23 01:36:10 localhost kernel: hub 1-0:1.0: 2 ports detected
Feb 23 01:36:10 localhost kernel: usbcore: registered new interface driver usbserial_generic
Feb 23 01:36:10 localhost kernel: usbserial: USB Serial support registered for generic
Feb 23 01:36:10 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 23 01:36:10 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 23 01:36:10 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 23 01:36:10 localhost kernel: mousedev: PS/2 mouse device common for all mice
Feb 23 01:36:10 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 23 01:36:10 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 23 01:36:10 localhost kernel: rtc_cmos 00:04: registered as rtc0
Feb 23 01:36:10 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-02-23T06:36:09 UTC (1771828569)
Feb 23 01:36:10 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 23 01:36:10 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb 23 01:36:10 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb 23 01:36:10 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 23 01:36:10 localhost kernel: usbcore: registered new interface driver usbhid
Feb 23 01:36:10 localhost kernel: usbhid: USB HID core driver
Feb 23 01:36:10 localhost kernel: drop_monitor: Initializing network drop monitor service
Feb 23 01:36:10 localhost kernel: Initializing XFRM netlink socket
Feb 23 01:36:10 localhost kernel: NET: Registered PF_INET6 protocol family
Feb 23 01:36:10 localhost kernel: Segment Routing with IPv6
Feb 23 01:36:10 localhost kernel: NET: Registered PF_PACKET protocol family
Feb 23 01:36:10 localhost kernel: mpls_gso: MPLS GSO support
Feb 23 01:36:10 localhost kernel: IPI shorthand broadcast: enabled
Feb 23 01:36:10 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Feb 23 01:36:10 localhost kernel: AES CTR mode by8 optimization enabled
Feb 23 01:36:10 localhost kernel: sched_clock: Marking stable (757792826, 182615556)->(1080345406, -139937024)
Feb 23 01:36:10 localhost kernel: registered taskstats version 1
Feb 23 01:36:10 localhost kernel: Loading compiled-in X.509 certificates
Feb 23 01:36:10 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Feb 23 01:36:10 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 23 01:36:10 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 23 01:36:10 localhost kernel: zswap: loaded using pool lzo/zbud
Feb 23 01:36:10 localhost kernel: page_owner is disabled
Feb 23 01:36:10 localhost kernel: Key type big_key registered
Feb 23 01:36:10 localhost kernel: Freeing initrd memory: 74232K
Feb 23 01:36:10 localhost kernel: Key type encrypted registered
Feb 23 01:36:10 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 23 01:36:10 localhost kernel: Loading compiled-in module X.509 certificates
Feb 23 01:36:10 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Feb 23 01:36:10 localhost kernel: ima: Allocated hash algorithm: sha256
Feb 23 01:36:10 localhost kernel: ima: No architecture policies found
Feb 23 01:36:10 localhost kernel: evm: Initialising EVM extended attributes:
Feb 23 01:36:10 localhost kernel: evm: security.selinux
Feb 23 01:36:10 localhost kernel: evm: security.SMACK64 (disabled)
Feb 23 01:36:10 localhost kernel: evm: security.SMACK64EXEC (disabled)
Feb 23 01:36:10 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb 23 01:36:10 localhost kernel: evm: security.SMACK64MMAP (disabled)
Feb 23 01:36:10 localhost kernel: evm: security.apparmor (disabled)
Feb 23 01:36:10 localhost kernel: evm: security.ima
Feb 23 01:36:10 localhost kernel: evm: security.capability
Feb 23 01:36:10 localhost kernel: evm: HMAC attrs: 0x1
Feb 23 01:36:10 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb 23 01:36:10 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb 23 01:36:10 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb 23 01:36:10 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Feb 23 01:36:10 localhost kernel: usb 1-1: Manufacturer: QEMU
Feb 23 01:36:10 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb 23 01:36:10 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb 23 01:36:10 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb 23 01:36:10 localhost kernel: Freeing unused decrypted memory: 2036K
Feb 23 01:36:10 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Feb 23 01:36:10 localhost kernel: Write protecting the kernel read-only data: 26624k
Feb 23 01:36:10 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 23 01:36:10 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Feb 23 01:36:10 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb 23 01:36:10 localhost kernel: Run /init as init process
Feb 23 01:36:10 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 23 01:36:10 localhost systemd[1]: Detected virtualization kvm.
Feb 23 01:36:10 localhost systemd[1]: Detected architecture x86-64.
Feb 23 01:36:10 localhost systemd[1]: Running in initrd.
Feb 23 01:36:10 localhost systemd[1]: No hostname configured, using default hostname.
Feb 23 01:36:10 localhost systemd[1]: Hostname set to .
Feb 23 01:36:10 localhost systemd[1]: Initializing machine ID from VM UUID.
Feb 23 01:36:10 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Feb 23 01:36:10 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 23 01:36:10 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 23 01:36:10 localhost systemd[1]: Reached target Initrd /usr File System.
Feb 23 01:36:10 localhost systemd[1]: Reached target Local File Systems.
Feb 23 01:36:10 localhost systemd[1]: Reached target Path Units.
Feb 23 01:36:10 localhost systemd[1]: Reached target Slice Units.
Feb 23 01:36:10 localhost systemd[1]: Reached target Swaps.
Feb 23 01:36:10 localhost systemd[1]: Reached target Timer Units.
Feb 23 01:36:10 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 23 01:36:10 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Feb 23 01:36:10 localhost systemd[1]: Listening on Journal Socket.
Feb 23 01:36:10 localhost systemd[1]: Listening on udev Control Socket.
Feb 23 01:36:10 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 23 01:36:10 localhost systemd[1]: Reached target Socket Units.
Feb 23 01:36:10 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 23 01:36:10 localhost systemd[1]: Starting Journal Service...
Feb 23 01:36:10 localhost systemd[1]: Starting Load Kernel Modules...
Feb 23 01:36:10 localhost systemd[1]: Starting Create System Users...
Feb 23 01:36:10 localhost systemd[1]: Starting Setup Virtual Console...
Feb 23 01:36:10 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 23 01:36:10 localhost systemd[1]: Finished Load Kernel Modules.
Feb 23 01:36:10 localhost systemd-journald[284]: Journal started
Feb 23 01:36:10 localhost systemd-journald[284]: Runtime Journal (/run/log/journal/8bb105a948924676ace9e931084902e3) is 8.0M, max 314.7M, 306.7M free.
Feb 23 01:36:10 localhost systemd-modules-load[285]: Module 'msr' is built in
Feb 23 01:36:10 localhost systemd[1]: Started Journal Service.
Feb 23 01:36:10 localhost systemd[1]: Finished Setup Virtual Console.
Feb 23 01:36:10 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb 23 01:36:10 localhost systemd[1]: Starting dracut cmdline hook...
Feb 23 01:36:10 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 23 01:36:10 localhost systemd-sysusers[286]: Creating group 'sgx' with GID 997.
Feb 23 01:36:10 localhost systemd-sysusers[286]: Creating group 'users' with GID 100.
Feb 23 01:36:10 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 23 01:36:10 localhost systemd-sysusers[286]: Creating group 'dbus' with GID 81.
Feb 23 01:36:10 localhost systemd-sysusers[286]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb 23 01:36:10 localhost systemd[1]: Finished Create System Users.
Feb 23 01:36:10 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 23 01:36:10 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 23 01:36:10 localhost dracut-cmdline[290]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Feb 23 01:36:10 localhost dracut-cmdline[290]: Using kernel command line parameters: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Feb 23 01:36:10 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 23 01:36:10 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 23 01:36:10 localhost systemd[1]: Finished dracut cmdline hook.
Feb 23 01:36:10 localhost systemd[1]: Starting dracut pre-udev hook...
Feb 23 01:36:10 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 23 01:36:10 localhost kernel: device-mapper: uevent: version 1.0.3
Feb 23 01:36:10 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Feb 23 01:36:10 localhost kernel: RPC: Registered named UNIX socket transport module.
Feb 23 01:36:10 localhost kernel: RPC: Registered udp transport module.
Feb 23 01:36:10 localhost kernel: RPC: Registered tcp transport module.
Feb 23 01:36:10 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 23 01:36:10 localhost rpc.statd[408]: Version 2.5.4 starting
Feb 23 01:36:10 localhost rpc.statd[408]: Initializing NSM state
Feb 23 01:36:10 localhost rpc.idmapd[413]: Setting log level to 0
Feb 23 01:36:10 localhost systemd[1]: Finished dracut pre-udev hook.
Feb 23 01:36:10 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 23 01:36:10 localhost systemd-udevd[426]: Using default interface naming scheme 'rhel-9.0'.
Feb 23 01:36:10 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 23 01:36:10 localhost systemd[1]: Starting dracut pre-trigger hook...
Feb 23 01:36:10 localhost systemd[1]: Finished dracut pre-trigger hook.
Feb 23 01:36:10 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 23 01:36:10 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 23 01:36:10 localhost systemd[1]: Reached target System Initialization.
Feb 23 01:36:10 localhost systemd[1]: Reached target Basic System.
Feb 23 01:36:10 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 23 01:36:10 localhost systemd[1]: Reached target Network.
Feb 23 01:36:10 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 23 01:36:10 localhost systemd[1]: Starting dracut initqueue hook...
Feb 23 01:36:10 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Feb 23 01:36:10 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 23 01:36:10 localhost kernel: scsi host0: ata_piix
Feb 23 01:36:10 localhost kernel: GPT:20971519 != 838860799
Feb 23 01:36:10 localhost kernel: scsi host1: ata_piix
Feb 23 01:36:10 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 23 01:36:10 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Feb 23 01:36:10 localhost kernel: GPT:20971519 != 838860799
Feb 23 01:36:10 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 23 01:36:10 localhost kernel: vda: vda1 vda2 vda3 vda4
Feb 23 01:36:10 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Feb 23 01:36:10 localhost systemd-udevd[444]: Network interface NamePolicy= disabled on kernel command line.
Feb 23 01:36:10 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Feb 23 01:36:10 localhost systemd[1]: Reached target Initrd Root Device.
Feb 23 01:36:11 localhost kernel: ata1: found unknown device (class 0)
Feb 23 01:36:11 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 23 01:36:11 localhost kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 23 01:36:11 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb 23 01:36:11 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 23 01:36:11 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 23 01:36:11 localhost systemd[1]: Finished dracut initqueue hook.
Feb 23 01:36:11 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Feb 23 01:36:11 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Feb 23 01:36:11 localhost systemd[1]: Reached target Remote File Systems.
Feb 23 01:36:11 localhost systemd[1]: Starting dracut pre-mount hook...
Feb 23 01:36:11 localhost systemd[1]: Finished dracut pre-mount hook.
Feb 23 01:36:11 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Feb 23 01:36:11 localhost systemd-fsck[512]: /usr/sbin/fsck.xfs: XFS file system.
Feb 23 01:36:11 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Feb 23 01:36:11 localhost systemd[1]: Mounting /sysroot...
Feb 23 01:36:11 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb 23 01:36:11 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Feb 23 01:36:11 localhost kernel: XFS (vda4): Ending clean mount
Feb 23 01:36:11 localhost systemd[1]: Mounted /sysroot.
Feb 23 01:36:11 localhost systemd[1]: Reached target Initrd Root File System.
Feb 23 01:36:11 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb 23 01:36:11 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb 23 01:36:11 localhost systemd[1]: Reached target Initrd File Systems.
Feb 23 01:36:11 localhost systemd[1]: Reached target Initrd Default Target.
Feb 23 01:36:11 localhost systemd[1]: Starting dracut mount hook...
Feb 23 01:36:11 localhost systemd[1]: Finished dracut mount hook.
Feb 23 01:36:11 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 23 01:36:11 localhost rpc.idmapd[413]: exiting on signal 15
Feb 23 01:36:11 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb 23 01:36:11 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 23 01:36:11 localhost systemd[1]: Stopped target Network.
Feb 23 01:36:11 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 23 01:36:11 localhost systemd[1]: Stopped target Timer Units.
Feb 23 01:36:11 localhost systemd[1]: dbus.socket: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 23 01:36:11 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 23 01:36:11 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 23 01:36:11 localhost systemd[1]: Stopped target Basic System.
Feb 23 01:36:11 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 23 01:36:11 localhost systemd[1]: Stopped target Initrd /usr File System.
Feb 23 01:36:11 localhost systemd[1]: Stopped target Path Units.
Feb 23 01:36:11 localhost systemd[1]: Stopped target Remote File Systems.
Feb 23 01:36:11 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Feb 23 01:36:11 localhost systemd[1]: Stopped target Slice Units.
Feb 23 01:36:11 localhost systemd[1]: Stopped target Socket Units.
Feb 23 01:36:11 localhost systemd[1]: Stopped target System Initialization.
Feb 23 01:36:11 localhost systemd[1]: Stopped target Local File Systems.
Feb 23 01:36:11 localhost systemd[1]: Stopped target Swaps.
Feb 23 01:36:11 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped dracut mount hook.
Feb 23 01:36:11 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 23 01:36:11 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 23 01:36:11 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 23 01:36:11 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 23 01:36:11 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 23 01:36:11 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped Load Kernel Modules.
Feb 23 01:36:11 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 23 01:36:11 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped Coldplug All udev Devices.
Feb 23 01:36:11 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 23 01:36:11 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 23 01:36:11 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped Setup Virtual Console.
Feb 23 01:36:11 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb 23 01:36:11 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 23 01:36:11 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Closed udev Control Socket.
Feb 23 01:36:11 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Closed udev Kernel Socket.
Feb 23 01:36:11 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 23 01:36:11 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 23 01:36:11 localhost systemd[1]: Starting Cleanup udev Database...
Feb 23 01:36:11 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 23 01:36:11 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Feb 23 01:36:11 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Stopped Create System Users.
Feb 23 01:36:11 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 23 01:36:11 localhost systemd[1]: Finished Cleanup udev Database.
Feb 23 01:36:11 localhost systemd[1]: Reached target Switch Root.
Feb 23 01:36:11 localhost systemd[1]: Starting Switch Root...
Feb 23 01:36:11 localhost systemd[1]: Switching root.
Feb 23 01:36:11 localhost systemd-journald[284]: Journal stopped
Feb 23 01:36:12 localhost systemd-journald[284]: Received SIGTERM from PID 1 (systemd).
Feb 23 01:36:12 localhost kernel: audit: type=1404 audit(1771828572.114:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb 23 01:36:12 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 23 01:36:12 localhost kernel: SELinux: policy capability open_perms=1
Feb 23 01:36:12 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 23 01:36:12 localhost kernel: SELinux: policy capability always_check_network=0
Feb 23 01:36:12 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 23 01:36:12 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 23 01:36:12 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Feb 23 01:36:12 localhost kernel: audit: type=1403 audit(1771828572.228:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 23 01:36:12 localhost systemd[1]: Successfully loaded SELinux policy in 119.757ms.
Feb 23 01:36:12 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.763ms.
Feb 23 01:36:12 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 23 01:36:12 localhost systemd[1]: Detected virtualization kvm.
Feb 23 01:36:12 localhost systemd[1]: Detected architecture x86-64.
Feb 23 01:36:12 localhost systemd-rc-local-generator[582]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 01:36:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 01:36:12 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 23 01:36:12 localhost systemd[1]: Stopped Switch Root.
Feb 23 01:36:12 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 23 01:36:12 localhost systemd[1]: Created slice Slice /system/getty.
Feb 23 01:36:12 localhost systemd[1]: Created slice Slice /system/modprobe.
Feb 23 01:36:12 localhost systemd[1]: Created slice Slice /system/serial-getty.
Feb 23 01:36:12 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Feb 23 01:36:12 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Feb 23 01:36:12 localhost systemd[1]: Created slice User and Session Slice.
Feb 23 01:36:12 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 23 01:36:12 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Feb 23 01:36:12 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb 23 01:36:12 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 23 01:36:12 localhost systemd[1]: Stopped target Switch Root.
Feb 23 01:36:12 localhost systemd[1]: Stopped target Initrd File Systems.
Feb 23 01:36:12 localhost systemd[1]: Stopped target Initrd Root File System.
Feb 23 01:36:12 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Feb 23 01:36:12 localhost systemd[1]: Reached target Path Units.
Feb 23 01:36:12 localhost systemd[1]: Reached target rpc_pipefs.target.
Feb 23 01:36:12 localhost systemd[1]: Reached target Slice Units.
Feb 23 01:36:12 localhost systemd[1]: Reached target Swaps.
Feb 23 01:36:12 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Feb 23 01:36:12 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Feb 23 01:36:12 localhost systemd[1]: Reached target RPC Port Mapper.
Feb 23 01:36:12 localhost systemd[1]: Listening on Process Core Dump Socket.
Feb 23 01:36:12 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Feb 23 01:36:12 localhost systemd[1]: Listening on udev Control Socket.
Feb 23 01:36:12 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 23 01:36:12 localhost systemd[1]: Mounting Huge Pages File System...
Feb 23 01:36:12 localhost systemd[1]: Mounting POSIX Message Queue File System...
Feb 23 01:36:12 localhost systemd[1]: Mounting Kernel Debug File System...
Feb 23 01:36:12 localhost systemd[1]: Mounting Kernel Trace File System...
Feb 23 01:36:12 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 23 01:36:12 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 23 01:36:12 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 23 01:36:12 localhost systemd[1]: Starting Load Kernel Module drm...
Feb 23 01:36:12 localhost systemd[1]: Starting Load Kernel Module fuse...
Feb 23 01:36:12 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Feb 23 01:36:12 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 23 01:36:12 localhost systemd[1]: Stopped File System Check on Root Device.
Feb 23 01:36:12 localhost systemd[1]: Stopped Journal Service.
Feb 23 01:36:12 localhost systemd[1]: Starting Journal Service...
Feb 23 01:36:12 localhost systemd[1]: Starting Load Kernel Modules...
Feb 23 01:36:12 localhost systemd[1]: Starting Generate network units from Kernel command line...
Feb 23 01:36:12 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Feb 23 01:36:12 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 23 01:36:12 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 23 01:36:12 localhost kernel: fuse: init (API version 7.36)
Feb 23 01:36:12 localhost systemd[1]: Mounted Huge Pages File System.
Feb 23 01:36:12 localhost systemd[1]: Mounted POSIX Message Queue File System.
Feb 23 01:36:12 localhost systemd-journald[618]: Journal started
Feb 23 01:36:12 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/c0212a8b024a111cfc61293864f36c87) is 8.0M, max 314.7M, 306.7M free.
Feb 23 01:36:12 localhost systemd[1]: Queued start job for default target Multi-User System.
Feb 23 01:36:12 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 23 01:36:12 localhost systemd-modules-load[619]: Module 'msr' is built in
Feb 23 01:36:12 localhost systemd[1]: Started Journal Service.
Feb 23 01:36:12 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Feb 23 01:36:12 localhost systemd[1]: Mounted Kernel Debug File System.
Feb 23 01:36:12 localhost systemd[1]: Mounted Kernel Trace File System.
Feb 23 01:36:12 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 23 01:36:12 localhost kernel: ACPI: bus type drm_connector registered
Feb 23 01:36:12 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 23 01:36:12 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 23 01:36:12 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 23 01:36:12 localhost systemd[1]: Finished Load Kernel Module drm.
Feb 23 01:36:12 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 23 01:36:12 localhost systemd[1]: Finished Load Kernel Module fuse.
Feb 23 01:36:12 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Feb 23 01:36:12 localhost systemd[1]: Finished Load Kernel Modules.
Feb 23 01:36:12 localhost systemd[1]: Finished Generate network units from Kernel command line.
Feb 23 01:36:12 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Feb 23 01:36:12 localhost systemd[1]: Mounting FUSE Control File System...
Feb 23 01:36:12 localhost systemd[1]: Mounting Kernel Configuration File System...
Feb 23 01:36:12 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 23 01:36:12 localhost systemd[1]: Starting Rebuild Hardware Database...
Feb 23 01:36:12 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Feb 23 01:36:12 localhost systemd[1]: Starting Load/Save Random Seed...
Feb 23 01:36:12 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 23 01:36:12 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/c0212a8b024a111cfc61293864f36c87) is 8.0M, max 314.7M, 306.7M free.
Feb 23 01:36:12 localhost systemd-journald[618]: Received client request to flush runtime journal.
Feb 23 01:36:12 localhost systemd[1]: Starting Create System Users...
Feb 23 01:36:12 localhost systemd[1]: Mounted FUSE Control File System.
Feb 23 01:36:12 localhost systemd[1]: Mounted Kernel Configuration File System.
Feb 23 01:36:12 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 23 01:36:12 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Feb 23 01:36:12 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 23 01:36:12 localhost systemd[1]: Finished Load/Save Random Seed.
Feb 23 01:36:12 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 23 01:36:12 localhost systemd-sysusers[631]: Creating group 'sgx' with GID 989.
Feb 23 01:36:12 localhost systemd-sysusers[631]: Creating group 'systemd-oom' with GID 988.
Feb 23 01:36:12 localhost systemd-sysusers[631]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988.
Feb 23 01:36:13 localhost systemd[1]: Finished Create System Users.
Feb 23 01:36:13 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 23 01:36:13 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 23 01:36:13 localhost systemd[1]: Reached target Preparation for Local File Systems.
Feb 23 01:36:13 localhost systemd[1]: Set up automount EFI System Partition Automount.
Feb 23 01:36:13 localhost systemd[1]: Finished Rebuild Hardware Database.
Feb 23 01:36:13 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 23 01:36:13 localhost systemd-udevd[635]: Using default interface naming scheme 'rhel-9.0'.
Feb 23 01:36:13 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 23 01:36:13 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 23 01:36:13 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 23 01:36:13 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 23 01:36:13 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Feb 23 01:36:13 localhost systemd-udevd[642]: Network interface NamePolicy= disabled on kernel command line.
Feb 23 01:36:13 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped.
Feb 23 01:36:13 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7...
Feb 23 01:36:13 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped.
Feb 23 01:36:13 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Feb 23 01:36:13 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 23 01:36:13 localhost systemd-fsck[678]: fsck.fat 4.2 (2021-01-31)
Feb 23 01:36:13 localhost systemd-fsck[678]: /dev/vda2: 12 files, 1782/51145 clusters
Feb 23 01:36:13 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7.
Feb 23 01:36:13 localhost kernel: SVM: TSC scaling supported
Feb 23 01:36:13 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb 23 01:36:13 localhost kernel: kvm: Nested Virtualization enabled
Feb 23 01:36:13 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb 23 01:36:13 localhost kernel: SVM: kvm: Nested Paging enabled
Feb 23 01:36:13 localhost kernel: SVM: LBR virtualization supported
Feb 23 01:36:13 localhost kernel: Console: switching to colour dummy device 80x25
Feb 23 01:36:13 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 23 01:36:13 localhost kernel: [drm] features: -context_init
Feb 23 01:36:13 localhost kernel: [drm] number of scanouts: 1
Feb 23 01:36:13 localhost kernel: [drm] number of cap sets: 0
Feb 23 01:36:13 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
Feb 23 01:36:13 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
Feb 23 01:36:13 localhost kernel: Console: switching to colour frame buffer device 128x48
Feb 23 01:36:13 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 23 01:36:13 localhost systemd[1]: Mounting /boot...
Feb 23 01:36:13 localhost kernel: XFS (vda3): Mounting V5 Filesystem
Feb 23 01:36:13 localhost kernel: XFS (vda3): Ending clean mount
Feb 23 01:36:13 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
Feb 23 01:36:13 localhost systemd[1]: Mounted /boot.
Feb 23 01:36:13 localhost systemd[1]: Mounting /boot/efi...
Feb 23 01:36:13 localhost systemd[1]: Mounted /boot/efi.
Feb 23 01:36:13 localhost systemd[1]: Reached target Local File Systems.
Feb 23 01:36:13 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb 23 01:36:13 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Feb 23 01:36:13 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 23 01:36:13 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 23 01:36:13 localhost systemd[1]: Starting Automatic Boot Loader Update...
Feb 23 01:36:13 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Feb 23 01:36:13 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 23 01:36:13 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 706 (bootctl)
Feb 23 01:36:13 localhost systemd[1]: Starting File System Check on /dev/vda2...
Feb 23 01:36:14 localhost systemd[1]: Finished File System Check on /dev/vda2.
Feb 23 01:36:14 localhost systemd[1]: Mounting EFI System Partition Automount...
Feb 23 01:36:14 localhost systemd[1]: Mounted EFI System Partition Automount.
Feb 23 01:36:14 localhost systemd[1]: Finished Automatic Boot Loader Update.
Feb 23 01:36:14 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 23 01:36:14 localhost systemd[1]: Starting Security Auditing Service...
Feb 23 01:36:14 localhost systemd[1]: Starting RPC Bind...
Feb 23 01:36:14 localhost systemd[1]: Starting Rebuild Journal Catalog...
Feb 23 01:36:14 localhost systemd[1]: Finished Rebuild Journal Catalog.
Feb 23 01:36:14 localhost auditd[725]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
Feb 23 01:36:14 localhost auditd[725]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Feb 23 01:36:14 localhost systemd[1]: Started RPC Bind.
Feb 23 01:36:14 localhost augenrules[730]: /sbin/augenrules: No change
Feb 23 01:36:14 localhost augenrules[740]: No rules
Feb 23 01:36:14 localhost augenrules[740]: enabled 1
Feb 23 01:36:14 localhost augenrules[740]: failure 1
Feb 23 01:36:14 localhost augenrules[740]: pid 725
Feb 23 01:36:14 localhost augenrules[740]: rate_limit 0
Feb 23 01:36:14 localhost augenrules[740]: backlog_limit 8192
Feb 23 01:36:14 localhost augenrules[740]: lost 0
Feb 23 01:36:14 localhost augenrules[740]: backlog 4
Feb 23 01:36:14 localhost augenrules[740]: backlog_wait_time 60000
Feb 23 01:36:14 localhost augenrules[740]: backlog_wait_time_actual 0
Feb 23 01:36:14 localhost augenrules[740]: enabled 1
Feb 23 01:36:14 localhost augenrules[740]: failure 1
Feb 23 01:36:14 localhost augenrules[740]: pid 725
Feb 23 01:36:14 localhost augenrules[740]: rate_limit 0
Feb 23 01:36:14 localhost augenrules[740]: backlog_limit 8192
Feb 23 01:36:14 localhost augenrules[740]: lost 0
Feb 23 01:36:14 localhost augenrules[740]: backlog 4
Feb 23 01:36:14 localhost augenrules[740]: backlog_wait_time 60000
Feb 23 01:36:14 localhost augenrules[740]: backlog_wait_time_actual 0
Feb 23 01:36:14 localhost augenrules[740]: enabled 1
Feb 23 01:36:14 localhost augenrules[740]: failure 1
Feb 23 01:36:14 localhost augenrules[740]: pid 725
Feb 23 01:36:14 localhost augenrules[740]: rate_limit 0
Feb 23 01:36:14 localhost augenrules[740]: backlog_limit 8192
Feb 23 01:36:14 localhost augenrules[740]: lost 0
Feb 23 01:36:14 localhost augenrules[740]: backlog 0
Feb 23 01:36:14 localhost augenrules[740]: backlog_wait_time 60000
Feb 23 01:36:14 localhost augenrules[740]: backlog_wait_time_actual 0
Feb 23 01:36:14 localhost systemd[1]: Started Security Auditing Service.
Feb 23 01:36:14 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Feb 23 01:36:14 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Feb 23 01:36:14 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Feb 23 01:36:14 localhost systemd[1]: Starting Update is Completed...
Feb 23 01:36:14 localhost systemd[1]: Finished Update is Completed.
Feb 23 01:36:14 localhost systemd[1]: Reached target System Initialization.
Feb 23 01:36:14 localhost systemd[1]: Started dnf makecache --timer.
Feb 23 01:36:14 localhost systemd[1]: Started Daily rotation of log files.
Feb 23 01:36:14 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb 23 01:36:14 localhost systemd[1]: Reached target Timer Units.
Feb 23 01:36:14 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 23 01:36:14 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Feb 23 01:36:14 localhost systemd[1]: Reached target Socket Units.
Feb 23 01:36:14 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)...
Feb 23 01:36:14 localhost systemd[1]: Starting D-Bus System Message Bus...
Feb 23 01:36:14 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 23 01:36:14 localhost systemd[1]: Started D-Bus System Message Bus.
Feb 23 01:36:14 localhost systemd[1]: Reached target Basic System.
Feb 23 01:36:14 localhost journal[750]: Ready
Feb 23 01:36:14 localhost systemd[1]: Starting NTP client/server...
Feb 23 01:36:14 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb 23 01:36:14 localhost systemd[1]: Started irqbalance daemon.
Feb 23 01:36:14 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Feb 23 01:36:14 localhost systemd[1]: Starting System Logging Service...
Feb 23 01:36:14 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 23 01:36:14 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 23 01:36:14 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 23 01:36:14 localhost systemd[1]: Reached target sshd-keygen.target.
Feb 23 01:36:14 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Feb 23 01:36:14 localhost systemd[1]: Reached target User and Group Name Lookups.
Feb 23 01:36:14 localhost systemd[1]: Starting User Login Management...
Feb 23 01:36:14 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Feb 23 01:36:14 localhost systemd[1]: Started System Logging Service.
Feb 23 01:36:14 localhost rsyslogd[758]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="758" x-info="https://www.rsyslog.com"] start
Feb 23 01:36:14 localhost rsyslogd[758]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ]
Feb 23 01:36:14 localhost chronyd[765]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Feb 23 01:36:14 localhost chronyd[765]: Using right/UTC timezone to obtain leap second data
Feb 23 01:36:14 localhost chronyd[765]: Loaded seccomp filter (level 2)
Feb 23 01:36:14 localhost systemd[1]: Started NTP client/server.
Feb 23 01:36:14 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 23 01:36:14 localhost systemd-logind[759]: New seat seat0.
Feb 23 01:36:14 localhost systemd-logind[759]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 23 01:36:14 localhost systemd-logind[759]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 23 01:36:14 localhost systemd[1]: Started User Login Management.
Feb 23 01:36:15 localhost cloud-init[769]: Cloud-init v. 22.1-9.el9 running 'init-local' at Mon, 23 Feb 2026 06:36:15 +0000. Up 6.36 seconds.
Feb 23 01:36:15 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpw7udcb1f.mount: Deactivated successfully.
Feb 23 01:36:15 localhost systemd[1]: Starting Hostname Service...
Feb 23 01:36:15 localhost systemd[1]: Started Hostname Service.
Feb 23 01:36:15 localhost systemd-hostnamed[783]: Hostname set to (static)
Feb 23 01:36:15 localhost systemd[1]: Finished Initial cloud-init job (pre-networking).
Feb 23 01:36:15 localhost systemd[1]: Reached target Preparation for Network.
Feb 23 01:36:15 localhost systemd[1]: Starting Network Manager...
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.8749] NetworkManager (version 1.42.2-1.el9) is starting... (boot:6fe52b5e-8cb4-41cb-a63c-ff39348686f8)
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.8754] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.8789] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 23 01:36:15 localhost systemd[1]: Started Network Manager.
Feb 23 01:36:15 localhost systemd[1]: Reached target Network.
Feb 23 01:36:15 localhost systemd[1]: Starting Network Manager Wait Online...
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.8877] manager[0x558364fa7020]: monitoring kernel firmware directory '/lib/firmware'.
Feb 23 01:36:15 localhost systemd[1]: Starting GSSAPI Proxy Daemon...
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.8904] hostname: hostname: using hostnamed
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.8904] hostname: static hostname changed from (none) to "np0005626465.novalocal"
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.8915] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 23 01:36:15 localhost systemd[1]: Starting Enable periodic update of entitlement certificates....
Feb 23 01:36:15 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 23 01:36:15 localhost systemd[1]: Started Enable periodic update of entitlement certificates..
Feb 23 01:36:15 localhost systemd[1]: Started GSSAPI Proxy Daemon.
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9079] manager[0x558364fa7020]: rfkill: Wi-Fi hardware radio set enabled
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9081] manager[0x558364fa7020]: rfkill: WWAN hardware radio set enabled
Feb 23 01:36:15 localhost systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9165] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9166] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9180] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9180] manager: Networking is enabled by state file
Feb 23 01:36:15 localhost systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 23 01:36:15 localhost systemd[1]: Reached target NFS client services.
Feb 23 01:36:15 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9234] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9234] settings: Loaded settings plugin: keyfile (internal)
Feb 23 01:36:15 localhost systemd[1]: Reached target Remote File Systems.
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9264] dhcp: init: Using DHCP client 'internal'
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9267] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9281] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9286] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Feb 23 01:36:15 localhost systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9294] device (lo): Activation: starting connection 'lo' (b73b4342-6ab2-4a2b-bc6d-8fd4e416493b)
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9302] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9305] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Feb 23 01:36:15 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9358] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9361] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9363] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9366] device (eth0): carrier: link connected
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9368] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9372] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9389] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9408] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9409] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9412] manager: NetworkManager state is now CONNECTING
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9414] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9425] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9429] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 23 01:36:15 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9472] dhcp4 (eth0): state changed new lease, address=38.102.83.142
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9476] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9505] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9512] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9515] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9522] device (lo): Activation: successful, device activated.
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9546] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9548] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9552] manager: NetworkManager state is now CONNECTED_SITE
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9555] device (eth0): Activation: successful, device activated.
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9560] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 23 01:36:15 localhost NetworkManager[788]: [1771828575.9565] manager: startup complete
Feb 23 01:36:15 localhost systemd[1]: Finished Network Manager Wait Online.
Feb 23 01:36:15 localhost systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Feb 23 01:36:16 localhost cloud-init[889]: Cloud-init v. 22.1-9.el9 running 'init' at Mon, 23 Feb 2026 06:36:16 +0000. Up 7.39 seconds.
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address |
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: | eth0 | True | 38.102.83.142 | 255.255.255.0 | global | fa:16:3e:d6:2e:8d |
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: | eth0 | True | fe80::f816:3eff:fed6:2e8d/64 | . | link | fa:16:3e:d6:2e:8d |
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . |
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: | lo | True | ::1/128 | . | host | . |
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: | 0 | 0.0.0.0 | 38.102.83.1 | 0.0.0.0 | eth0 | UG |
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: | 1 | 38.102.83.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U |
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: | 2 | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 | eth0 | UGH |
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: | 1 | fe80::/64 | :: | eth0 | U |
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: | 3 | multicast | :: | eth0 | U |
Feb 23 01:36:16 localhost cloud-init[889]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 23 01:36:16 localhost systemd[1]: Starting Authorization Manager...
Feb 23 01:36:16 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Feb 23 01:36:16 localhost polkitd[1043]: Started polkitd version 0.117
Feb 23 01:36:16 localhost systemd[1]: Started Authorization Manager.
Feb 23 01:36:19 localhost cloud-init[889]: Generating public/private rsa key pair.
Feb 23 01:36:19 localhost cloud-init[889]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Feb 23 01:36:19 localhost cloud-init[889]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Feb 23 01:36:19 localhost cloud-init[889]: The key fingerprint is:
Feb 23 01:36:19 localhost cloud-init[889]: SHA256:MEzOX72Dmre+5sG31cVlN5853459haWV4YCHUM0UT8E root@np0005626465.novalocal
Feb 23 01:36:19 localhost cloud-init[889]: The key's randomart image is:
Feb 23 01:36:19 localhost cloud-init[889]: +---[RSA 3072]----+
Feb 23 01:36:19 localhost cloud-init[889]: | . .o.*ooo.|
Feb 23 01:36:19 localhost cloud-init[889]: | = + =oE |
Feb 23 01:36:19 localhost cloud-init[889]: | * . o oo*|
Feb 23 01:36:19 localhost cloud-init[889]: | + . . . =X|
Feb 23 01:36:19 localhost cloud-init[889]: | S . o O+|
Feb 23 01:36:19 localhost cloud-init[889]: | + .o.*|
Feb 23 01:36:19 localhost cloud-init[889]: | o + . . =|
Feb 23 01:36:19 localhost cloud-init[889]: | ..+ o +.|
Feb 23 01:36:19 localhost cloud-init[889]: | +=.. . +|
Feb 23 01:36:19 localhost cloud-init[889]: +----[SHA256]-----+
Feb 23 01:36:19 localhost cloud-init[889]: Generating public/private ecdsa key pair.
Feb 23 01:36:19 localhost cloud-init[889]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Feb 23 01:36:19 localhost cloud-init[889]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Feb 23 01:36:19 localhost cloud-init[889]: The key fingerprint is:
Feb 23 01:36:19 localhost cloud-init[889]: SHA256:uJY9wK9HxY3T23ruxlQ4MYCaYA9kPHGCW4EscyNqBrM root@np0005626465.novalocal
Feb 23 01:36:19 localhost cloud-init[889]: The key's randomart image is:
Feb 23 01:36:19 localhost cloud-init[889]: +---[ECDSA 256]---+
Feb 23 01:36:19 localhost cloud-init[889]: | . =*o. ... |
Feb 23 01:36:19 localhost cloud-init[889]: |o + *.Bo . o |
Feb 23 01:36:19 localhost cloud-init[889]: |.+ = = = + + + |
Feb 23 01:36:19 localhost cloud-init[889]: |Eo .. .+ = o o .|
Feb 23 01:36:19 localhost cloud-init[889]: |o + S. . o o |
Feb 23 01:36:19 localhost cloud-init[889]: | *. . o |
Feb 23 01:36:19 localhost cloud-init[889]: | +.+ + |
Feb 23 01:36:19 localhost cloud-init[889]: | . ... . + |
Feb 23 01:36:19 localhost cloud-init[889]: | .. =o |
Feb 23 01:36:19 localhost cloud-init[889]: +----[SHA256]-----+
Feb 23 01:36:19 localhost cloud-init[889]: Generating public/private ed25519 key pair.
Feb 23 01:36:19 localhost cloud-init[889]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Feb 23 01:36:19 localhost cloud-init[889]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Feb 23 01:36:19 localhost cloud-init[889]: The key fingerprint is:
Feb 23 01:36:19 localhost cloud-init[889]: SHA256:nt92WKfAh+tl0nICLgaKG4uXGaqpJtr5P3+hFmifP+U root@np0005626465.novalocal
Feb 23 01:36:19 localhost cloud-init[889]: The key's randomart image is:
Feb 23 01:36:19 localhost cloud-init[889]: +--[ED25519 256]--+
Feb 23 01:36:19 localhost cloud-init[889]: | |
Feb 23 01:36:19 localhost cloud-init[889]: | |
Feb 23 01:36:19 localhost cloud-init[889]: | |
Feb 23 01:36:19 localhost cloud-init[889]: | |
Feb 23 01:36:19 localhost cloud-init[889]: | ..S .. . |
Feb 23 01:36:19 localhost cloud-init[889]: | .. .oo.o..=.o .|
Feb 23 01:36:19 localhost cloud-init[889]: | .o+.. .=+.++*=o |
Feb 23 01:36:19 localhost cloud-init[889]: |o=+= ..=oo.EBo |
Feb 23 01:36:19 localhost cloud-init[889]: |%.=....+.oo+o. |
Feb 23 01:36:19 localhost cloud-init[889]: +----[SHA256]-----+
Feb 23 01:36:19 localhost systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Feb 23 01:36:19 localhost systemd[1]: Reached target Cloud-config availability.
Feb 23 01:36:19 localhost systemd[1]: Reached target Network is Online.
Feb 23 01:36:19 localhost sshd[1132]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:36:19 localhost systemd[1]: Starting Apply the settings specified in cloud-config...
Feb 23 01:36:19 localhost systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot).
Feb 23 01:36:19 localhost systemd[1]: Starting Crash recovery kernel arming...
Feb 23 01:36:19 localhost systemd[1]: Starting Notify NFS peers of a restart...
Feb 23 01:36:19 localhost systemd[1]: Starting OpenSSH server daemon...
Feb 23 01:36:19 localhost systemd[1]: Starting Permit User Sessions...
Feb 23 01:36:19 localhost systemd[1]: Finished Permit User Sessions.
Feb 23 01:36:19 localhost sm-notify[1131]: Version 2.5.4 starting
Feb 23 01:36:19 localhost systemd[1]: Started OpenSSH server daemon.
Feb 23 01:36:19 localhost systemd[1]: Started Notify NFS peers of a restart.
Feb 23 01:36:19 localhost systemd[1]: Started Command Scheduler.
Feb 23 01:36:19 localhost systemd[1]: Started Getty on tty1.
Feb 23 01:36:19 localhost systemd[1]: Started Serial Getty on ttyS0.
Feb 23 01:36:19 localhost systemd[1]: Reached target Login Prompts.
Feb 23 01:36:19 localhost systemd[1]: Reached target Multi-User System.
Feb 23 01:36:19 localhost systemd[1]: Starting Record Runlevel Change in UTMP...
Feb 23 01:36:19 localhost sshd[1146]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:36:19 localhost systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 23 01:36:19 localhost systemd[1]: Finished Record Runlevel Change in UTMP.
Feb 23 01:36:19 localhost sshd[1164]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:36:19 localhost sshd[1171]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:36:19 localhost sshd[1186]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:36:19 localhost kdumpctl[1135]: kdump: No kdump initial ramdisk found.
Feb 23 01:36:19 localhost kdumpctl[1135]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
Feb 23 01:36:19 localhost sshd[1193]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:36:19 localhost sshd[1199]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:36:19 localhost sshd[1225]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:36:19 localhost cloud-init[1260]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Mon, 23 Feb 2026 06:36:19 +0000. Up 11.12 seconds.
Feb 23 01:36:19 localhost sshd[1258]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:36:19 localhost sshd[1274]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:36:20 localhost systemd[1]: Finished Apply the settings specified in cloud-config.
Feb 23 01:36:20 localhost systemd[1]: Starting Execute cloud user/final scripts...
Feb 23 01:36:20 localhost dracut[1437]: dracut-057-21.git20230214.el9
Feb 23 01:36:20 localhost cloud-init[1440]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Mon, 23 Feb 2026 06:36:20 +0000. Up 11.49 seconds.
Feb 23 01:36:20 localhost cloud-init[1455]: #############################################################
Feb 23 01:36:20 localhost cloud-init[1457]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Feb 23 01:36:20 localhost cloud-init[1461]: 256 SHA256:uJY9wK9HxY3T23ruxlQ4MYCaYA9kPHGCW4EscyNqBrM root@np0005626465.novalocal (ECDSA)
Feb 23 01:36:20 localhost cloud-init[1466]: 256 SHA256:nt92WKfAh+tl0nICLgaKG4uXGaqpJtr5P3+hFmifP+U root@np0005626465.novalocal (ED25519)
Feb 23 01:36:20 localhost cloud-init[1472]: 3072 SHA256:MEzOX72Dmre+5sG31cVlN5853459haWV4YCHUM0UT8E root@np0005626465.novalocal (RSA)
Feb 23 01:36:20 localhost dracut[1439]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64
Feb 23 01:36:20 localhost cloud-init[1474]: -----END SSH HOST KEY FINGERPRINTS-----
Feb 23 01:36:20 localhost cloud-init[1478]: #############################################################
Feb 23 01:36:20 localhost cloud-init[1440]: Cloud-init v. 22.1-9.el9 finished at Mon, 23 Feb 2026 06:36:20 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0]. Up 11.72 seconds
Feb 23 01:36:20 localhost systemd[1]: Reloading Network Manager...
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Feb 23 01:36:20 localhost NetworkManager[788]: [1771828580.6273] audit: op="reload" arg="0" pid=1574 uid=0 result="success"
Feb 23 01:36:20 localhost NetworkManager[788]: [1771828580.6285] config: signal: SIGHUP (no changes from disk)
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 23 01:36:20 localhost systemd[1]: Reloaded Network Manager.
Feb 23 01:36:20 localhost systemd[1]: Finished Execute cloud user/final scripts.
Feb 23 01:36:20 localhost systemd[1]: Reached target Cloud-init target.
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 23 01:36:20 localhost dracut[1439]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 23 01:36:20 localhost chronyd[765]: Selected source 167.160.187.179 (2.rhel.pool.ntp.org)
Feb 23 01:36:21 localhost chronyd[765]: System clock TAI offset set to 37 seconds
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: memstrack is not available
Feb 23 01:36:21 localhost dracut[1439]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 23 01:36:21 localhost dracut[1439]: memstrack is not available
Feb 23 01:36:21 localhost dracut[1439]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 23 01:36:21 localhost dracut[1439]: *** Including module: systemd ***
Feb 23 01:36:22 localhost dracut[1439]: *** Including module: systemd-initrd ***
Feb 23 01:36:22 localhost dracut[1439]: *** Including module: i18n ***
Feb 23 01:36:22 localhost dracut[1439]: No KEYMAP configured.
Feb 23 01:36:22 localhost dracut[1439]: *** Including module: drm ***
Feb 23 01:36:22 localhost dracut[1439]: *** Including module: prefixdevname ***
Feb 23 01:36:22 localhost dracut[1439]: *** Including module: kernel-modules ***
Feb 23 01:36:23 localhost dracut[1439]: *** Including module: kernel-modules-extra ***
Feb 23 01:36:23 localhost dracut[1439]: *** Including module: qemu ***
Feb 23 01:36:23 localhost dracut[1439]: *** Including module: fstab-sys ***
Feb 23 01:36:23 localhost dracut[1439]: *** Including module: rootfs-block ***
Feb 23 01:36:23 localhost dracut[1439]: *** Including module: terminfo ***
Feb 23 01:36:23 localhost dracut[1439]: *** Including module: udev-rules ***
Feb 23 01:36:23 localhost dracut[1439]: Skipping udev rule: 91-permissions.rules
Feb 23 01:36:23 localhost dracut[1439]: Skipping udev rule: 80-drivers-modprobe.rules
Feb 23 01:36:23 localhost dracut[1439]: *** Including module: virtiofs ***
Feb 23 01:36:23 localhost dracut[1439]: *** Including module: dracut-systemd ***
Feb 23 01:36:23 localhost dracut[1439]: *** Including module: usrmount ***
Feb 23 01:36:23 localhost dracut[1439]: *** Including module: base ***
Feb 23 01:36:24 localhost dracut[1439]: *** Including module: fs-lib ***
Feb 23 01:36:24 localhost dracut[1439]: *** Including module: kdumpbase ***
Feb 23 01:36:24 localhost dracut[1439]: *** Including module: microcode_ctl-fw_dir_override ***
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl module: mangling fw_dir
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: configuration "intel" is ignored
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: configuration "intel-06-2d-07" is ignored
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: configuration "intel-06-4e-03" is ignored
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: configuration "intel-06-4f-01" is ignored
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: configuration "intel-06-55-04" is ignored
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: configuration "intel-06-5e-03" is ignored
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: configuration "intel-06-8c-01" is ignored
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Feb 23 01:36:24 localhost dracut[1439]: microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware"
Feb 23 01:36:24 localhost dracut[1439]: *** Including module: shutdown ***
Feb 23 01:36:24 localhost dracut[1439]: *** Including module: squash ***
Feb 23 01:36:24 localhost dracut[1439]: *** Including modules done ***
Feb 23 01:36:24 localhost dracut[1439]: *** Installing kernel module dependencies ***
Feb 23 01:36:25 localhost dracut[1439]: *** Installing kernel module dependencies done ***
Feb 23 01:36:25 localhost dracut[1439]: *** Resolving executable dependencies ***
Feb 23 01:36:26 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 23 01:36:26 localhost dracut[1439]: *** Resolving executable dependencies done ***
Feb 23 01:36:26 localhost dracut[1439]: *** Hardlinking files ***
Feb 23 01:36:26 localhost dracut[1439]: Mode: real
Feb 23 01:36:26 localhost dracut[1439]: Files: 1099
Feb 23 01:36:26 localhost dracut[1439]: Linked: 3 files
Feb 23 01:36:26 localhost dracut[1439]: Compared: 0 xattrs
Feb 23 01:36:26 localhost dracut[1439]: Compared: 373 files
Feb 23 01:36:26 localhost dracut[1439]: Saved: 61.04 KiB
Feb 23 01:36:26 localhost dracut[1439]: Duration: 0.051989 seconds
Feb 23 01:36:26 localhost dracut[1439]: *** Hardlinking files done ***
Feb 23 01:36:26 localhost dracut[1439]: Could not find 'strip'. Not stripping the initramfs.
Feb 23 01:36:26 localhost dracut[1439]: *** Generating early-microcode cpio image ***
Feb 23 01:36:26 localhost dracut[1439]: *** Constructing AuthenticAMD.bin ***
Feb 23 01:36:26 localhost dracut[1439]: *** Store current command line parameters ***
Feb 23 01:36:26 localhost dracut[1439]: Stored kernel commandline:
Feb 23 01:36:26 localhost dracut[1439]: No dracut internal kernel commandline stored in the initramfs
Feb 23 01:36:27 localhost dracut[1439]: *** Install squash loader ***
Feb 23 01:36:27 localhost dracut[1439]: *** Squashing the files inside the initramfs ***
Feb 23 01:36:28 localhost dracut[1439]: *** Squashing the files inside the initramfs done ***
Feb 23 01:36:28 localhost dracut[1439]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' ***
Feb 23 01:36:28 localhost dracut[1439]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done ***
Feb 23 01:36:29 localhost kdumpctl[1135]: kdump: kexec: loaded kdump kernel
Feb 23 01:36:29 localhost kdumpctl[1135]: kdump: Starting kdump: [OK]
Feb 23 01:36:29 localhost systemd[1]: Finished Crash recovery kernel arming.
Feb 23 01:36:29 localhost systemd[1]: Startup finished in 1.239s (kernel) + 2.083s (initrd) + 17.253s (userspace) = 20.575s.
Feb 23 01:36:32 localhost sshd[4173]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:36:32 localhost systemd[1]: Created slice User Slice of UID 1000.
Feb 23 01:36:32 localhost systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb 23 01:36:32 localhost systemd-logind[759]: New session 1 of user zuul.
Feb 23 01:36:32 localhost systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb 23 01:36:32 localhost systemd[1]: Starting User Manager for UID 1000...
Feb 23 01:36:32 localhost systemd[4177]: Queued start job for default target Main User Target.
Feb 23 01:36:32 localhost systemd[4177]: Created slice User Application Slice.
Feb 23 01:36:32 localhost systemd[4177]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 23 01:36:32 localhost systemd[4177]: Started Daily Cleanup of User's Temporary Directories.
Feb 23 01:36:32 localhost systemd[4177]: Reached target Paths.
Feb 23 01:36:32 localhost systemd[4177]: Reached target Timers.
Feb 23 01:36:32 localhost systemd[4177]: Starting D-Bus User Message Bus Socket...
Feb 23 01:36:32 localhost systemd[4177]: Starting Create User's Volatile Files and Directories...
Feb 23 01:36:32 localhost systemd[4177]: Finished Create User's Volatile Files and Directories.
Feb 23 01:36:32 localhost systemd[4177]: Listening on D-Bus User Message Bus Socket.
Feb 23 01:36:32 localhost systemd[4177]: Reached target Sockets.
Feb 23 01:36:32 localhost systemd[4177]: Reached target Basic System.
Feb 23 01:36:32 localhost systemd[4177]: Reached target Main User Target.
Feb 23 01:36:32 localhost systemd[4177]: Startup finished in 121ms.
Feb 23 01:36:32 localhost systemd[1]: Started User Manager for UID 1000.
Feb 23 01:36:32 localhost systemd[1]: Started Session 1 of User zuul.
Feb 23 01:36:33 localhost python3[4229]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 01:36:43 localhost python3[4247]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 01:36:45 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 23 01:36:50 localhost python3[4302]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 01:36:51 localhost python3[4332]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Feb 23 01:36:54 localhost python3[4349]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCaFiAv9bTisS17GN1FZ7h/VJaLu2YTZdiVc9K45WqX/JhZ8Pwx1BXqBJEGlK+qmwEqEak4GGxIE3mEhiQwiY15E3nfJntSoOu8LhTWltcLr5Uy2mx8nXUZwB3QoNiY4y5uXCriTu6HLCXbdwhnWWT8PgX7V31GosHI0JQEpo4ixoShAJC2sFXL7wd3dTaZBo73qVrUhmekv/2GJo179k6wGblVjOwgB9nkfcuo0acLUaot1Uhc7ZrZM3Nfa7bjrW7OigrLvtNra7bBsjfeTgu6vOxxy1DcTD1xBKab631zTIIugiPViMTGrxgsBpyOc4tJpl1grZCJxm8mBDN25oK2jvP/NwUcC3C9ASWEr9U4QiAOTYN03OAyLGbhq48W3SYHwIi8/awDfJapvA+5rlCq/Xb+Fi/KrAaPxVoEehqWLBzCv0u/ZLZarmOih5rcgNTLMQ3l1/nVNtx+VP3eLtqA5a71JqntFfS80adH/3Px+Wen0lIixRqguGNzrD8ZGcU= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:36:55 localhost python3[4363]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:36:56 localhost python3[4422]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 01:36:56 localhost python3[4463]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771828616.299343-390-178797928599977/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=75c3b09aedfa4a0eb967a11aba86ff70_id_rsa follow=False checksum=3856428e4c0cdf708f3b02cf6f4769559d121f25 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:36:58 localhost python3[4536]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 01:36:58 localhost python3[4577]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771828618.3040812-486-114318123979869/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=75c3b09aedfa4a0eb967a11aba86ff70_id_rsa.pub follow=False checksum=24c5085c987d798738c880bb8143c9f9cd19ae33 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:37:27 localhost chronyd[765]: Selected source 23.159.16.194 (2.rhel.pool.ntp.org)
Feb 23 01:37:37 localhost python3[4606]: ansible-ping Invoked with data=pong
Feb 23 01:37:39 localhost python3[4620]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 01:37:43 localhost python3[4673]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Feb 23 01:37:45 localhost python3[4695]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:37:45 localhost python3[4709]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:37:46 localhost python3[4723]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:37:47 localhost python3[4737]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:37:47 localhost python3[4751]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:37:47 localhost python3[4765]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:37:47 localhost sshd[4766]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:37:50 localhost python3[4782]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:37:51 localhost python3[4830]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 01:37:51 localhost python3[4873]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1771828671.3039117-100-125280034858884/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:37:59 localhost python3[4901]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:37:59 localhost python3[4915]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:37:59 localhost python3[4929]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:37:59 localhost python3[4943]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:00 localhost python3[4957]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:00 localhost python3[4971]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:00 localhost python3[4985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:01 localhost python3[4999]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:01 localhost python3[5013]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:01 localhost python3[5027]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:01 localhost python3[5041]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:02 localhost python3[5055]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:02 localhost python3[5069]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICWBreHW95Wz2Toz5YwCGQwFcUG8oFYkienDh9tntmDc ralfieri@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:02 localhost python3[5083]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:02 localhost python3[5097]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:03 localhost python3[5111]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:03 localhost python3[5125]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:03 localhost python3[5139]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:04 localhost python3[5153]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:04 localhost python3[5167]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:04 localhost python3[5181]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:04 localhost python3[5195]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:05 localhost python3[5209]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:05 localhost python3[5223]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:05 localhost python3[5237]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:05 localhost python3[5251]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:38:07 localhost python3[5267]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 23 01:38:07 localhost systemd[1]: Starting Time & Date Service...
Feb 23 01:38:07 localhost systemd[1]: Started Time & Date Service. Feb 23 01:38:07 localhost systemd-timedated[5269]: Changed time zone to 'UTC' (UTC). Feb 23 01:38:09 localhost python3[5288]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 01:38:10 localhost python3[5334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 01:38:10 localhost python3[5375]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1771828690.0150166-495-145124653493273/source _original_basename=tmpej9d8k6i follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 01:38:11 localhost python3[5435]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 01:38:12 localhost python3[5476]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1771828691.5069933-587-110383498845100/source _original_basename=tmpr2j5ks0k follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 01:38:14 localhost python3[5538]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 01:38:14 localhost python3[5581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1771828693.7440407-731-274925334980279/source _original_basename=tmp1sr7poge follow=False checksum=eb31a54ab353993df0881d335bb57aa163860e42 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 01:38:15 localhost python3[5610]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 01:38:15 localhost python3[5626]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 01:38:16 localhost python3[5676]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 01:38:17 localhost python3[5719]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1771828696.5775614-856-174371738004178/source _original_basename=tmpzj1ihc35 follow=False 
checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 01:38:28 localhost python3[5750]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-16c2-0802-000000000023-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 01:38:37 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Feb 23 01:38:39 localhost python3[5770]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-16c2-0802-000000000024-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None Feb 23 01:38:41 localhost python3[5788]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 01:38:55 localhost sshd[5790]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:39:00 localhost python3[5807]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Feb 23 01:39:05 localhost systemd[4177]: Starting Mark boot as successful... Feb 23 01:39:05 localhost systemd[4177]: Finished Mark boot as successful. Feb 23 01:39:43 localhost sshd[5810]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:40:00 localhost systemd-logind[759]: Session 1 logged out. Waiting for processes to exit. Feb 23 01:40:10 localhost sshd[5812]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:40:15 localhost systemd[1]: Unmounting EFI System Partition Automount... Feb 23 01:40:15 localhost systemd[1]: efi.mount: Deactivated successfully. Feb 23 01:40:15 localhost systemd[1]: Unmounted EFI System Partition Automount. Feb 23 01:40:21 localhost sshd[5815]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:40:28 localhost sshd[5817]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:41:12 localhost sshd[5819]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:41:39 localhost sshd[5821]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:41:44 localhost sshd[5823]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:41:53 localhost sshd[5825]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:42:05 localhost systemd[4177]: Created slice User Background Tasks Slice. Feb 23 01:42:05 localhost systemd[4177]: Starting Cleanup of User's Temporary Files and Directories... Feb 23 01:42:05 localhost systemd[4177]: Finished Cleanup of User's Temporary Files and Directories. 
Feb 23 01:42:13 localhost sshd[5828]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:42:33 localhost sshd[5830]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:42:35 localhost kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 Feb 23 01:42:35 localhost kernel: pci 0000:00:07.0: reg 0x10: [io 0x0000-0x003f] Feb 23 01:42:35 localhost kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff] Feb 23 01:42:35 localhost kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref] Feb 23 01:42:35 localhost kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] Feb 23 01:42:35 localhost kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref] Feb 23 01:42:35 localhost kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref] Feb 23 01:42:35 localhost kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff] Feb 23 01:42:35 localhost kernel: pci 0000:00:07.0: BAR 0: assigned [io 0x1000-0x103f] Feb 23 01:42:35 localhost kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003) Feb 23 01:42:35 localhost NetworkManager[788]: [1771828955.3838] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Feb 23 01:42:35 localhost systemd-udevd[5832]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 01:42:35 localhost NetworkManager[788]: [1771828955.3996] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Feb 23 01:42:35 localhost NetworkManager[788]: [1771828955.4027] settings: (eth1): created default wired connection 'Wired connection 1' Feb 23 01:42:35 localhost NetworkManager[788]: [1771828955.4031] device (eth1): carrier: link connected Feb 23 01:42:35 localhost NetworkManager[788]: [1771828955.4034] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed') Feb 23 01:42:35 localhost NetworkManager[788]: [1771828955.4038] policy: auto-activating connection 'Wired connection 1' (c0a88c1a-8893-3750-9f03-0783193a5a7e) Feb 23 01:42:35 localhost NetworkManager[788]: [1771828955.4042] device (eth1): Activation: starting connection 'Wired connection 1' (c0a88c1a-8893-3750-9f03-0783193a5a7e) Feb 23 01:42:35 localhost NetworkManager[788]: [1771828955.4043] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Feb 23 01:42:35 localhost NetworkManager[788]: [1771828955.4047] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Feb 23 01:42:35 localhost NetworkManager[788]: [1771828955.4052] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Feb 23 01:42:35 localhost NetworkManager[788]: [1771828955.4055] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Feb 23 01:42:35 localhost sshd[5835]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:42:36 localhost systemd-logind[759]: New session 3 of user zuul. Feb 23 01:42:36 localhost systemd[1]: Started Session 3 of User zuul. 
Feb 23 01:42:36 localhost python3[5852]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-116e-582b-000000000408-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 01:42:36 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready Feb 23 01:42:49 localhost python3[5902]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 01:42:49 localhost python3[5945]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771828969.1224332-486-4539071778531/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=546e3fd7fcc929e33521f0720ab39edfe18325ef backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 01:42:50 localhost python3[5975]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 01:42:50 localhost systemd[1]: NetworkManager-wait-online.service: Deactivated successfully. Feb 23 01:42:50 localhost systemd[1]: Stopped Network Manager Wait Online. Feb 23 01:42:50 localhost systemd[1]: Stopping Network Manager Wait Online... Feb 23 01:42:50 localhost systemd[1]: Stopping Network Manager... Feb 23 01:42:50 localhost NetworkManager[788]: [1771828970.4266] caught SIGTERM, shutting down normally. 
Feb 23 01:42:50 localhost NetworkManager[788]: [1771828970.4389] dhcp4 (eth0): canceled DHCP transaction Feb 23 01:42:50 localhost NetworkManager[788]: [1771828970.4390] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Feb 23 01:42:50 localhost NetworkManager[788]: [1771828970.4390] dhcp4 (eth0): state changed no lease Feb 23 01:42:50 localhost NetworkManager[788]: [1771828970.4394] manager: NetworkManager state is now CONNECTING Feb 23 01:42:50 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Feb 23 01:42:50 localhost NetworkManager[788]: [1771828970.4510] dhcp4 (eth1): canceled DHCP transaction Feb 23 01:42:50 localhost NetworkManager[788]: [1771828970.4510] dhcp4 (eth1): state changed no lease Feb 23 01:42:50 localhost NetworkManager[788]: [1771828970.4579] exiting (success) Feb 23 01:42:50 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Feb 23 01:42:50 localhost systemd[1]: NetworkManager.service: Deactivated successfully. Feb 23 01:42:50 localhost systemd[1]: Stopped Network Manager. Feb 23 01:42:50 localhost systemd[1]: NetworkManager.service: Consumed 2.227s CPU time. Feb 23 01:42:50 localhost systemd[1]: Starting Network Manager... Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.5124] NetworkManager (version 1.42.2-1.el9) is starting... (after a restart, boot:6fe52b5e-8cb4-41cb-a63c-ff39348686f8) Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.5127] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf) Feb 23 01:42:50 localhost systemd[1]: Started Network Manager. Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.5154] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Feb 23 01:42:50 localhost systemd[1]: Starting Network Manager Wait Online... Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.5217] manager[0x55c15535f090]: monitoring kernel firmware directory '/lib/firmware'. 
Feb 23 01:42:50 localhost systemd[1]: Starting Hostname Service... Feb 23 01:42:50 localhost systemd[1]: Started Hostname Service. Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6053] hostname: hostname: using hostnamed Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6053] hostname: static hostname changed from (none) to "np0005626465.novalocal" Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6061] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto) Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6067] manager[0x55c15535f090]: rfkill: Wi-Fi hardware radio set enabled Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6068] manager[0x55c15535f090]: rfkill: WWAN hardware radio set enabled Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6110] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so) Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6111] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6112] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6113] manager: Networking is enabled by state file Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6122] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so") Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6122] settings: Loaded settings plugin: keyfile (internal) Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6168] dhcp: init: Using DHCP client 'internal' Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6173] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1) Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6182] device (lo): state change: unmanaged -> 
unavailable (reason 'connection-assumed', sys-iface-state: 'external') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6190] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6203] device (lo): Activation: starting connection 'lo' (b73b4342-6ab2-4a2b-bc6d-8fd4e416493b) Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6213] device (eth0): carrier: link connected Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6219] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6227] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated) Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6228] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6240] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6251] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6260] device (eth1): carrier: link connected Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6265] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6274] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (c0a88c1a-8893-3750-9f03-0783193a5a7e) (indicated) Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6275] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', 
sys-iface-state: 'assume') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6282] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6294] device (eth1): Activation: starting connection 'Wired connection 1' (c0a88c1a-8893-3750-9f03-0783193a5a7e) Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6322] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6327] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6331] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6335] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6340] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6343] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6347] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6352] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6360] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6364] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Feb 23 01:42:50 localhost NetworkManager[5987]: 
[1771828970.6393] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6406] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6491] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6496] dhcp4 (eth0): state changed new lease, address=38.102.83.142 Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6508] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6520] device (lo): Activation: successful, device activated. Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6532] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6637] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6704] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6707] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6713] manager: NetworkManager state is now CONNECTED_SITE Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6716] device (eth0): Activation: successful, device activated. 
Feb 23 01:42:50 localhost NetworkManager[5987]: [1771828970.6723] manager: NetworkManager state is now CONNECTED_GLOBAL Feb 23 01:42:50 localhost python3[6040]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-116e-582b-00000000012b-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 01:43:00 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Feb 23 01:43:12 localhost sshd[6060]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:43:20 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 23 01:43:35 localhost NetworkManager[5987]: [1771829015.7819] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Feb 23 01:43:35 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Feb 23 01:43:35 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Feb 23 01:43:35 localhost NetworkManager[5987]: [1771829015.8081] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Feb 23 01:43:35 localhost NetworkManager[5987]: [1771829015.8087] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Feb 23 01:43:35 localhost NetworkManager[5987]: [1771829015.8095] device (eth1): Activation: successful, device activated. Feb 23 01:43:35 localhost NetworkManager[5987]: [1771829015.8102] manager: startup complete Feb 23 01:43:35 localhost systemd[1]: Finished Network Manager Wait Online. Feb 23 01:43:45 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Feb 23 01:43:50 localhost systemd[1]: session-3.scope: Deactivated successfully. Feb 23 01:43:50 localhost systemd[1]: session-3.scope: Consumed 1.515s CPU time. 
Feb 23 01:43:51 localhost systemd-logind[759]: Session 3 logged out. Waiting for processes to exit. Feb 23 01:43:51 localhost systemd-logind[759]: Removed session 3. Feb 23 01:43:54 localhost sshd[6079]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:44:33 localhost sshd[6081]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:44:33 localhost sshd[6083]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:44:33 localhost systemd-logind[759]: New session 4 of user zuul. Feb 23 01:44:33 localhost systemd[1]: Started Session 4 of User zuul. Feb 23 01:44:33 localhost python3[6134]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 01:44:34 localhost python3[6177]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771829073.3213544-628-43162907411029/source _original_basename=tmpd2063sxf follow=False checksum=393f60ce964bed22379b4d5935087c828e1455a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 01:44:37 localhost systemd[1]: session-4.scope: Deactivated successfully. Feb 23 01:44:37 localhost systemd-logind[759]: Session 4 logged out. Waiting for processes to exit. Feb 23 01:44:37 localhost systemd-logind[759]: Removed session 4. 
Feb 23 01:45:13 localhost sshd[6192]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:45:53 localhost sshd[6194]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:46:32 localhost sshd[6196]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:47:09 localhost sshd[6198]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:47:48 localhost sshd[6201]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:48:26 localhost sshd[6203]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:48:50 localhost sshd[6205]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:49:03 localhost sshd[6208]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:49:41 localhost sshd[6210]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:49:52 localhost sshd[6212]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:50:19 localhost sshd[6215]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:50:42 localhost sshd[6217]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:50:54 localhost sshd[6219]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:50:58 localhost sshd[6221]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:51:13 localhost sshd[6225]: main: sshd: ssh-rsa algorithm is disabled Feb 23 01:51:13 localhost systemd[1]: Starting Cleanup of Temporary Directories... Feb 23 01:51:13 localhost systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Feb 23 01:51:13 localhost systemd[1]: Finished Cleanup of Temporary Directories. Feb 23 01:51:13 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Feb 23 01:51:13 localhost systemd-logind[759]: New session 5 of user zuul. Feb 23 01:51:13 localhost systemd[1]: Started Session 5 of User zuul. 
Feb 23 01:51:13 localhost python3[6247]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-8ad4-7d7f-00000000219f-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 01:51:15 localhost python3[6265]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 01:51:15 localhost python3[6281]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 01:51:15 localhost python3[6297]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 01:51:16 localhost python3[6313]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 01:51:16 localhost python3[6329]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 01:51:18 localhost python3[6377]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 01:51:18 localhost python3[6420]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771829477.7526534-665-133758270720509/source _original_basename=tmp6zae4vci follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 01:51:19 localhost python3[6450]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 23 01:51:19 localhost systemd[1]: Reloading. Feb 23 01:51:20 localhost systemd-rc-local-generator[6469]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 01:51:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 01:51:21 localhost python3[6496]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Feb 23 01:51:22 localhost python3[6512]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 01:51:23 localhost python3[6530]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 01:51:23 localhost python3[6548]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 01:51:23 localhost python3[6566]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 01:51:34 localhost python3[6584]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init"; cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system"; cat /sys/fs/cgroup/system.slice/io.max; echo "user"; cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-8ad4-7d7f-0000000021a6-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 01:51:35 localhost python3[6603]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 01:51:37 localhost sshd[6607]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:51:38 localhost systemd[1]: session-5.scope: Deactivated successfully.
Feb 23 01:51:38 localhost systemd[1]: session-5.scope: Consumed 4.032s CPU time.
Feb 23 01:51:38 localhost systemd-logind[759]: Session 5 logged out. Waiting for processes to exit.
Feb 23 01:51:38 localhost systemd-logind[759]: Removed session 5.
Feb 23 01:51:57 localhost sshd[6613]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:52:17 localhost sshd[6615]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:52:30 localhost sshd[6619]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:52:30 localhost systemd-logind[759]: New session 6 of user zuul.
Feb 23 01:52:30 localhost systemd[1]: Started Session 6 of User zuul.
Feb 23 01:52:31 localhost systemd[1]: Starting RHSM dbus service...
Feb 23 01:52:31 localhost systemd[1]: Started RHSM dbus service.
Feb 23 01:52:31 localhost rhsm-service[6643]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 23 01:52:31 localhost rhsm-service[6643]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 23 01:52:31 localhost rhsm-service[6643]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 23 01:52:31 localhost rhsm-service[6643]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 23 01:52:33 localhost rhsm-service[6643]: INFO [subscription_manager.managerlib:90] Consumer created: np0005626465.novalocal (35b0bb60-e384-493b-9b9e-d9a7ed966780)
Feb 23 01:52:33 localhost subscription-manager[6643]: Registered system with identity: 35b0bb60-e384-493b-9b9e-d9a7ed966780
Feb 23 01:52:34 localhost rhsm-service[6643]: INFO [subscription_manager.entcertlib:131] certs updated:
Feb 23 01:52:34 localhost rhsm-service[6643]: Total updates: 1
Feb 23 01:52:34 localhost rhsm-service[6643]: Found (local) serial# []
Feb 23 01:52:34 localhost rhsm-service[6643]: Expected (UEP) serial# [6357395491514640052]
Feb 23 01:52:34 localhost rhsm-service[6643]: Added (new)
Feb 23 01:52:34 localhost rhsm-service[6643]: [sn:6357395491514640052 ( Content Access,) @ /etc/pki/entitlement/6357395491514640052.pem]
Feb 23 01:52:34 localhost rhsm-service[6643]: Deleted (rogue):
Feb 23 01:52:34 localhost rhsm-service[6643]:
Feb 23 01:52:34 localhost subscription-manager[6643]: Added subscription for 'Content Access' contract 'None'
Feb 23 01:52:34 localhost subscription-manager[6643]: Added subscription for product ' Content Access'
Feb 23 01:52:35 localhost rhsm-service[6643]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 23 01:52:35 localhost rhsm-service[6643]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 23 01:52:35 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 01:52:35 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 01:52:35 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 01:52:35 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 01:52:35 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 01:52:43 localhost python3[6735]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-32b8-b7f7-00000000000d-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 01:52:45 localhost python3[6754]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 23 01:52:57 localhost sshd[6765]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:52:58 localhost sshd[6767]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:53:15 localhost setsebool[6833]: The virt_use_nfs policy boolean was changed to 1 by root
Feb 23 01:53:15 localhost setsebool[6833]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Feb 23 01:53:25 localhost kernel: SELinux: Converting 406 SID table entries...
Feb 23 01:53:25 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 23 01:53:25 localhost kernel: SELinux: policy capability open_perms=1
Feb 23 01:53:25 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 23 01:53:25 localhost kernel: SELinux: policy capability always_check_network=0
Feb 23 01:53:25 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 23 01:53:25 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 23 01:53:25 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Feb 23 01:53:36 localhost sshd[7578]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:53:38 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=3 res=1
Feb 23 01:53:38 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 23 01:53:38 localhost systemd[1]: Starting man-db-cache-update.service...
Feb 23 01:53:38 localhost systemd[1]: Reloading.
Feb 23 01:53:38 localhost systemd-rc-local-generator[7677]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 01:53:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 01:53:38 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Feb 23 01:53:39 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 01:53:39 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 01:53:47 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 23 01:53:47 localhost systemd[1]: Finished man-db-cache-update.service.
Feb 23 01:53:47 localhost systemd[1]: man-db-cache-update.service: Consumed 10.614s CPU time.
Feb 23 01:53:47 localhost systemd[1]: run-r0a60f0c8df494367a634dd14ef724172.service: Deactivated successfully.
Feb 23 01:53:57 localhost sshd[18409]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:54:17 localhost sshd[18411]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:54:30 localhost podman[18429]: 2026-02-23 06:54:30.102326353 +0000 UTC m=+0.102546377 system refresh
Feb 23 01:54:30 localhost systemd[4177]: Starting D-Bus User Message Bus...
Feb 23 01:54:30 localhost dbus-broker-launch[18488]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Feb 23 01:54:30 localhost dbus-broker-launch[18488]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Feb 23 01:54:30 localhost systemd[4177]: Started D-Bus User Message Bus.
Feb 23 01:54:30 localhost journal[18488]: Ready
Feb 23 01:54:30 localhost systemd[4177]: selinux: avc: op=load_policy lsm=selinux seqno=3 res=1
Feb 23 01:54:30 localhost systemd[4177]: Created slice Slice /user.
Feb 23 01:54:30 localhost systemd[4177]: podman-18470.scope: unit configures an IP firewall, but not running as root.
Feb 23 01:54:30 localhost systemd[4177]: (This warning is only shown for the first unit using IP firewalling.)
Feb 23 01:54:30 localhost systemd[4177]: Started podman-18470.scope.
Feb 23 01:54:31 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 23 01:54:31 localhost systemd[4177]: Started podman-pause-25f837ac.scope.
Feb 23 01:54:33 localhost systemd[1]: session-6.scope: Deactivated successfully.
Feb 23 01:54:33 localhost systemd[1]: session-6.scope: Consumed 52.096s CPU time.
Feb 23 01:54:33 localhost systemd-logind[759]: Session 6 logged out. Waiting for processes to exit.
Feb 23 01:54:33 localhost systemd-logind[759]: Removed session 6.
Feb 23 01:54:48 localhost sshd[18495]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:54:48 localhost sshd[18492]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:54:48 localhost sshd[18493]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:54:48 localhost sshd[18491]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:54:48 localhost sshd[18494]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:54:52 localhost sshd[18501]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:54:52 localhost sshd[18503]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:54:52 localhost systemd-logind[759]: New session 7 of user zuul.
Feb 23 01:54:52 localhost systemd[1]: Started Session 7 of User zuul.
Feb 23 01:54:53 localhost python3[18520]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD0suk+oGhrLCF0TQEPuL+1TMMXZ4ZyjwmaIk09J9Zppa5UYl2p4E22RKwDBWJVKjp5+lVBFxSdpKjyFnuMgKyY= zuul@np0005626456.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:54:54 localhost python3[18536]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD0suk+oGhrLCF0TQEPuL+1TMMXZ4ZyjwmaIk09J9Zppa5UYl2p4E22RKwDBWJVKjp5+lVBFxSdpKjyFnuMgKyY= zuul@np0005626456.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:54:54 localhost sshd[18537]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:54:55 localhost systemd[1]: session-7.scope: Deactivated successfully.
Feb 23 01:54:55 localhost systemd-logind[759]: Session 7 logged out. Waiting for processes to exit.
Feb 23 01:54:55 localhost systemd-logind[759]: Removed session 7.
Feb 23 01:55:32 localhost sshd[18539]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:55:47 localhost sshd[18541]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:56:11 localhost sshd[18544]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:56:13 localhost sshd[18546]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:56:13 localhost systemd-logind[759]: New session 8 of user zuul.
Feb 23 01:56:13 localhost systemd[1]: Started Session 8 of User zuul.
Feb 23 01:56:13 localhost python3[18565]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCaFiAv9bTisS17GN1FZ7h/VJaLu2YTZdiVc9K45WqX/JhZ8Pwx1BXqBJEGlK+qmwEqEak4GGxIE3mEhiQwiY15E3nfJntSoOu8LhTWltcLr5Uy2mx8nXUZwB3QoNiY4y5uXCriTu6HLCXbdwhnWWT8PgX7V31GosHI0JQEpo4ixoShAJC2sFXL7wd3dTaZBo73qVrUhmekv/2GJo179k6wGblVjOwgB9nkfcuo0acLUaot1Uhc7ZrZM3Nfa7bjrW7OigrLvtNra7bBsjfeTgu6vOxxy1DcTD1xBKab631zTIIugiPViMTGrxgsBpyOc4tJpl1grZCJxm8mBDN25oK2jvP/NwUcC3C9ASWEr9U4QiAOTYN03OAyLGbhq48W3SYHwIi8/awDfJapvA+5rlCq/Xb+Fi/KrAaPxVoEehqWLBzCv0u/ZLZarmOih5rcgNTLMQ3l1/nVNtx+VP3eLtqA5a71JqntFfS80adH/3Px+Wen0lIixRqguGNzrD8ZGcU= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 23 01:56:14 localhost python3[18581]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005626465.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb 23 01:56:16 localhost python3[18631]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 01:56:16 localhost python3[18674]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771829776.010648-137-274403536543988/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=75c3b09aedfa4a0eb967a11aba86ff70_id_rsa follow=False checksum=3856428e4c0cdf708f3b02cf6f4769559d121f25 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:56:17 localhost python3[18736]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 01:56:18 localhost python3[18779]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771829777.6534204-227-263341184598237/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=75c3b09aedfa4a0eb967a11aba86ff70_id_rsa.pub follow=False checksum=24c5085c987d798738c880bb8143c9f9cd19ae33 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:56:20 localhost python3[18809]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:56:21 localhost python3[18855]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 01:56:21 localhost python3[18871]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmpr2spkmaw recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:56:22 localhost python3[18931]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 01:56:22 localhost python3[18947]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmps5utzxko recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:56:24 localhost python3[19007]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 01:56:24 localhost python3[19023]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmpqvds0vsm recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 01:56:25 localhost systemd[1]: session-8.scope: Deactivated successfully.
Feb 23 01:56:25 localhost systemd[1]: session-8.scope: Consumed 3.588s CPU time.
Feb 23 01:56:25 localhost systemd-logind[759]: Session 8 logged out. Waiting for processes to exit.
Feb 23 01:56:25 localhost systemd-logind[759]: Removed session 8.
Feb 23 01:56:43 localhost sshd[19038]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:56:49 localhost sshd[19040]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:57:28 localhost sshd[19043]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:57:40 localhost sshd[19045]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:58:07 localhost sshd[19048]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:58:26 localhost sshd[19050]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:58:27 localhost systemd-logind[759]: New session 9 of user zuul.
Feb 23 01:58:27 localhost systemd[1]: Started Session 9 of User zuul.
Feb 23 01:58:27 localhost python3[19096]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 01:58:36 localhost sshd[19098]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:58:47 localhost sshd[19100]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:59:12 localhost sshd[19102]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:59:25 localhost sshd[19104]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 01:59:32 localhost sshd[19106]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:00:03 localhost sshd[19108]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:00:27 localhost sshd[19110]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:00:41 localhost sshd[19112]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:01:06 localhost sshd[19129]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:01:18 localhost sshd[19131]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:01:22 localhost sshd[19133]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:01:54 localhost sshd[19135]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:02:16 localhost sshd[19138]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:02:30 localhost sshd[19141]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:02:47 localhost sshd[19143]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:03:08 localhost sshd[19145]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:03:13 localhost sshd[19147]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:03:26 localhost systemd[1]: session-9.scope: Deactivated successfully.
Feb 23 02:03:26 localhost systemd-logind[759]: Session 9 logged out. Waiting for processes to exit.
Feb 23 02:03:26 localhost systemd-logind[759]: Removed session 9.
Feb 23 02:03:48 localhost sshd[19150]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:04:12 localhost sshd[19152]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:04:29 localhost sshd[19154]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:04:47 localhost systemd[1]: Starting dnf makecache...
Feb 23 02:04:47 localhost dnf[19156]: Updating Subscription Management repositories.
Feb 23 02:04:49 localhost dnf[19156]: Failed determining last makecache time.
Feb 23 02:04:50 localhost dnf[19156]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 8.7 kB/s | 4.1 kB 00:00
Feb 23 02:04:50 localhost dnf[19156]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 30 kB/s | 4.5 kB 00:00
Feb 23 02:04:50 localhost dnf[19156]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 33 kB/s | 4.5 kB 00:00
Feb 23 02:04:50 localhost dnf[19156]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 26 kB/s | 4.1 kB 00:00
Feb 23 02:04:51 localhost dnf[19156]: Metadata cache created.
Feb 23 02:04:51 localhost systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 23 02:04:51 localhost systemd[1]: Finished dnf makecache.
Feb 23 02:04:51 localhost systemd[1]: dnf-makecache.service: Consumed 2.773s CPU time.
Feb 23 02:05:10 localhost sshd[19161]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:05:12 localhost sshd[19163]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:05:49 localhost sshd[19165]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:06:11 localhost sshd[19167]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:06:26 localhost sshd[19169]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:07:04 localhost sshd[19171]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:07:06 localhost sshd[19173]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:07:40 localhost sshd[19175]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:07:59 localhost sshd[19178]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:08:17 localhost sshd[19180]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:08:52 localhost sshd[19182]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:08:53 localhost sshd[19184]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:09:30 localhost sshd[19187]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:09:47 localhost sshd[19189]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:09:57 localhost sshd[19192]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:09:57 localhost systemd-logind[759]: New session 10 of user zuul.
Feb 23 02:09:57 localhost systemd[1]: Started Session 10 of User zuul.
Feb 23 02:09:57 localhost python3[19209]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-669c-02d2-00000000000c-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:09:59 localhost python3[19229]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163ef9-e89a-669c-02d2-00000000000d-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:10:04 localhost python3[19249]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False
Feb 23 02:10:08 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 02:10:08 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 02:10:09 localhost sshd[19318]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:10:41 localhost sshd[19393]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:10:48 localhost sshd[19395]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:11:04 localhost python3[19414]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False
Feb 23 02:11:07 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 02:11:15 localhost python3[19613]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False
Feb 23 02:11:16 localhost sshd[19616]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:11:17 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 02:11:17 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 02:11:22 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 02:11:22 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 02:11:29 localhost sshd[19935]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:11:37 localhost sshd[19937]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:11:44 localhost python3[19954]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Feb 23 02:11:46 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 02:11:46 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 02:11:51 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 02:11:52 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 02:12:06 localhost sshd[20275]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:12:13 localhost python3[20292]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Feb 23 02:12:15 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 02:12:15 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 02:12:20 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 02:12:31 localhost sshd[20614]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:12:42 localhost python3[20632]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-669c-02d2-000000000013-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:12:44 localhost sshd[20635]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:12:47 localhost python3[20653]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 23 02:13:18 localhost kernel: SELinux: Converting 490 SID table entries...
Feb 23 02:13:18 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 23 02:13:18 localhost kernel: SELinux: policy capability open_perms=1
Feb 23 02:13:18 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 23 02:13:18 localhost kernel: SELinux: policy capability always_check_network=0
Feb 23 02:13:18 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 23 02:13:18 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 23 02:13:18 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Feb 23 02:13:19 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=4 res=1
Feb 23 02:13:19 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb 23 02:13:22 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 23 02:13:22 localhost systemd[1]: Starting man-db-cache-update.service...
Feb 23 02:13:22 localhost systemd[1]: Reloading.
Feb 23 02:13:22 localhost systemd-sysv-generator[21319]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 02:13:22 localhost systemd-rc-local-generator[21314]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 02:13:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 02:13:22 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Feb 23 02:13:23 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 23 02:13:23 localhost systemd[1]: Finished man-db-cache-update.service.
Feb 23 02:13:23 localhost systemd[1]: run-r3a472d8539b641088c93b6bcabc8f87d.service: Deactivated successfully.
Feb 23 02:13:23 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 02:13:23 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 23 02:13:24 localhost sshd[21971]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:13:26 localhost sshd[21973]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:13:49 localhost python3[21991]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-669c-02d2-000000000015-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:14:01 localhost sshd[21995]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:14:20 localhost sshd[21998]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:14:21 localhost python3[22015]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:14:22 localhost python3[22063]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:14:23 localhost python3[22106]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771830862.3497863-294-121689140200404/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=9333f42ac4b9baf349a5c32f7bcba3335b5912e0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:14:24 localhost python3[22136]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Feb 23 02:14:24 localhost systemd-journald[618]: Field hash table of /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal has a fill level at 89.2 (297 of 333 items), suggesting rotation.
Feb 23 02:14:24 localhost systemd-journald[618]: /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 23 02:14:24 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 23 02:14:24 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 23 02:14:24 localhost python3[22157]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Feb 23 02:14:25 localhost python3[22177]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Feb 23 02:14:25 localhost python3[22197]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Feb 23 02:14:25 localhost python3[22217]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Feb 23 02:14:28 localhost python3[22237]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 23 02:14:28 localhost systemd[1]: Starting LSB: Bring up/down networking...
Feb 23 02:14:28 localhost network[22240]: WARN      : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 23 02:14:28 localhost network[22251]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 23 02:14:28 localhost network[22240]: WARN      : [network] 'network-scripts' will be removed from distribution in near future.
Feb 23 02:14:28 localhost network[22252]: 'network-scripts' will be removed from distribution in near future.
Feb 23 02:14:28 localhost network[22240]: WARN      : [network] It is advised to switch to 'NetworkManager' instead for network management.
Feb 23 02:14:28 localhost network[22253]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 23 02:14:28 localhost NetworkManager[5987]: [1771830868.3020] audit: op="connections-reload" pid=22281 uid=0 result="success"
Feb 23 02:14:28 localhost network[22240]: Bringing up loopback interface:  [  OK  ]
Feb 23 02:14:28 localhost NetworkManager[5987]: [1771830868.4805] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=22370 uid=0 result="success"
Feb 23 02:14:28 localhost network[22240]: Bringing up interface eth0:  [  OK  ]
Feb 23 02:14:28 localhost systemd[1]: Started LSB: Bring up/down networking.
Feb 23 02:14:28 localhost python3[22411]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 23 02:14:29 localhost systemd[1]: Starting Open vSwitch Database Unit...
Feb 23 02:14:29 localhost chown[22415]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb 23 02:14:29 localhost ovs-ctl[22420]: /etc/openvswitch/conf.db does not exist ... (warning).
Feb 23 02:14:29 localhost ovs-ctl[22420]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Feb 23 02:14:29 localhost ovs-ctl[22420]: Starting ovsdb-server [  OK  ]
Feb 23 02:14:29 localhost ovs-vsctl[22469]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Feb 23 02:14:29 localhost ovs-vsctl[22489]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.6-141.el9fdp "external-ids:system-id=\"a05de4d1-e729-4c33-bedf-496279b1b686\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\""
Feb 23 02:14:29 localhost ovs-ctl[22420]: Configuring Open vSwitch system IDs [  OK  ]
Feb 23 02:14:29 localhost ovs-ctl[22420]: Enabling remote OVSDB managers [  OK  ]
Feb 23 02:14:29 localhost systemd[1]: Started Open vSwitch Database Unit.
Feb 23 02:14:29 localhost systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb 23 02:14:29 localhost ovs-vsctl[22495]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005626465.novalocal
Feb 23 02:14:29 localhost systemd[1]: Finished Open vSwitch Delete Transient Ports.
Feb 23 02:14:29 localhost systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb 23 02:14:29 localhost kernel: openvswitch: Open vSwitch switching datapath
Feb 23 02:14:29 localhost ovs-ctl[22539]: Inserting openvswitch module [  OK  ]
Feb 23 02:14:29 localhost ovs-ctl[22508]: Starting ovs-vswitchd [  OK  ]
Feb 23 02:14:29 localhost ovs-vsctl[22558]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005626465.novalocal
Feb 23 02:14:29 localhost ovs-ctl[22508]: Enabling remote OVSDB managers [  OK  ]
Feb 23 02:14:29 localhost systemd[1]: Started Open vSwitch Forwarding Unit.
Feb 23 02:14:29 localhost systemd[1]: Starting Open vSwitch...
Feb 23 02:14:29 localhost systemd[1]: Finished Open vSwitch.
Feb 23 02:14:33 localhost python3[22577]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-669c-02d2-00000000001a-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:14:34 localhost NetworkManager[5987]: [1771830874.4990] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22773 uid=0 result="success"
Feb 23 02:14:34 localhost ifup[22774]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 23 02:14:34 localhost ifup[22775]: 'network-scripts' will be removed from distribution in near future.
Feb 23 02:14:34 localhost ifup[22776]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 23 02:14:34 localhost NetworkManager[5987]: [1771830874.5166] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22782 uid=0 result="success"
Feb 23 02:14:34 localhost ovs-vsctl[22784]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:11:f4:23 -- set bridge br-ex fail_mode=standalone -- del-controller br-ex
Feb 23 02:14:34 localhost kernel: device ovs-system entered promiscuous mode
Feb 23 02:14:34 localhost NetworkManager[5987]: [1771830874.5342] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4)
Feb 23 02:14:34 localhost systemd-udevd[22786]: Network interface NamePolicy= disabled on kernel command line.
Feb 23 02:14:34 localhost kernel: Timeout policy base is empty
Feb 23 02:14:34 localhost kernel: Failed to associated timeout policy `ovs_test_tp'
Feb 23 02:14:34 localhost kernel: device br-ex entered promiscuous mode
Feb 23 02:14:34 localhost NetworkManager[5987]: [1771830874.5769] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5)
Feb 23 02:14:34 localhost NetworkManager[5987]: [1771830874.6000] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22811 uid=0 result="success"
Feb 23 02:14:34 localhost NetworkManager[5987]: [1771830874.6180] device (br-ex): carrier: link connected
Feb 23 02:14:37 localhost NetworkManager[5987]: [1771830877.6660] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22840 uid=0 result="success"
Feb 23 02:14:37 localhost NetworkManager[5987]: [1771830877.7087] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22855 uid=0 result="success"
Feb 23 02:14:37 localhost NET[22880]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf
Feb 23 02:14:37 localhost NetworkManager[5987]: [1771830877.7912] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed')
Feb 23 02:14:37 localhost NetworkManager[5987]: [1771830877.7999] dhcp4 (eth1): canceled DHCP transaction
Feb 23 02:14:37 localhost NetworkManager[5987]: [1771830877.7999] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 23 02:14:37 localhost NetworkManager[5987]: [1771830877.7999] dhcp4 (eth1): state changed no lease
Feb 23 02:14:37 localhost NetworkManager[5987]: [1771830877.8035] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22889 uid=0 result="success"
Feb 23 02:14:37 localhost ifup[22890]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 23 02:14:37 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 23 02:14:37 localhost ifup[22892]: 'network-scripts' will be removed from distribution in near future.
Feb 23 02:14:37 localhost ifup[22893]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 23 02:14:37 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 23 02:14:37 localhost NetworkManager[5987]: [1771830877.8392] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22907 uid=0 result="success"
Feb 23 02:14:37 localhost NetworkManager[5987]: [1771830877.8826] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22917 uid=0 result="success"
Feb 23 02:14:37 localhost NetworkManager[5987]: [1771830877.8889] device (eth1): carrier: link connected
Feb 23 02:14:37 localhost NetworkManager[5987]: [1771830877.9140] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22926 uid=0 result="success"
Feb 23 02:14:37 localhost ipv6_wait_tentative[22938]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Feb 23 02:14:38 localhost sshd[22940]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:14:38 localhost ipv6_wait_tentative[22945]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Feb 23 02:14:39 localhost NetworkManager[5987]: [1771830879.9877] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22955 uid=0 result="success"
Feb 23 02:14:40 localhost ovs-vsctl[22970]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1
Feb 23 02:14:40 localhost kernel: device eth1 entered promiscuous mode
Feb 23 02:14:40 localhost NetworkManager[5987]: [1771830880.0584] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22977 uid=0 result="success"
Feb 23 02:14:40 localhost ifup[22978]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 23 02:14:40 localhost ifup[22979]: 'network-scripts' will be removed from distribution in near future.
Feb 23 02:14:40 localhost ifup[22980]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 23 02:14:40 localhost NetworkManager[5987]: [1771830880.0903] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22986 uid=0 result="success"
Feb 23 02:14:40 localhost NetworkManager[5987]: [1771830880.1304] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22996 uid=0 result="success"
Feb 23 02:14:40 localhost ifup[22997]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 23 02:14:40 localhost ifup[22998]: 'network-scripts' will be removed from distribution in near future.
Feb 23 02:14:40 localhost ifup[22999]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 23 02:14:40 localhost NetworkManager[5987]: [1771830880.1614] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23005 uid=0 result="success"
Feb 23 02:14:40 localhost ovs-vsctl[23008]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Feb 23 02:14:40 localhost kernel: device vlan23 entered promiscuous mode
Feb 23 02:14:40 localhost NetworkManager[5987]: [1771830880.2166] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/6)
Feb 23 02:14:40 localhost systemd-udevd[23010]: Network interface NamePolicy= disabled on kernel command line.
Feb 23 02:14:40 localhost NetworkManager[5987]: [1771830880.2424] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23019 uid=0 result="success"
Feb 23 02:14:40 localhost NetworkManager[5987]: [1771830880.2626] device (vlan23): carrier: link connected
Feb 23 02:14:43 localhost NetworkManager[5987]: [1771830883.3152] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23048 uid=0 result="success"
Feb 23 02:14:43 localhost NetworkManager[5987]: [1771830883.3634] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23063 uid=0 result="success"
Feb 23 02:14:43 localhost NetworkManager[5987]: [1771830883.4257] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23084 uid=0 result="success"
Feb 23 02:14:43 localhost ifup[23085]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 23 02:14:43 localhost ifup[23086]: 'network-scripts' will be removed from distribution in near future.
Feb 23 02:14:43 localhost ifup[23087]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 23 02:14:43 localhost NetworkManager[5987]: [1771830883.4604] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23093 uid=0 result="success"
Feb 23 02:14:43 localhost ovs-vsctl[23096]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Feb 23 02:14:43 localhost kernel: device vlan22 entered promiscuous mode
Feb 23 02:14:43 localhost NetworkManager[5987]: [1771830883.5045] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/7)
Feb 23 02:14:43 localhost systemd-udevd[23099]: Network interface NamePolicy= disabled on kernel command line.
Feb 23 02:14:43 localhost NetworkManager[5987]: [1771830883.5305] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23108 uid=0 result="success"
Feb 23 02:14:43 localhost NetworkManager[5987]: [1771830883.5528] device (vlan22): carrier: link connected
Feb 23 02:14:46 localhost NetworkManager[5987]: [1771830886.6132] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23138 uid=0 result="success"
Feb 23 02:14:46 localhost NetworkManager[5987]: [1771830886.6605] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23153 uid=0 result="success"
Feb 23 02:14:46 localhost NetworkManager[5987]: [1771830886.7258] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23174 uid=0 result="success"
Feb 23 02:14:46 localhost ifup[23175]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 23 02:14:46 localhost ifup[23176]: 'network-scripts' will be removed from distribution in near future.
Feb 23 02:14:46 localhost ifup[23177]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 23 02:14:46 localhost NetworkManager[5987]: [1771830886.7564] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23183 uid=0 result="success"
Feb 23 02:14:46 localhost ovs-vsctl[23186]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Feb 23 02:14:46 localhost kernel: device vlan44 entered promiscuous mode
Feb 23 02:14:46 localhost systemd-udevd[23188]: Network interface NamePolicy= disabled on kernel command line.
Feb 23 02:14:46 localhost NetworkManager[5987]: [1771830886.7955] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/8)
Feb 23 02:14:46 localhost NetworkManager[5987]: [1771830886.8169] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23198 uid=0 result="success"
Feb 23 02:14:46 localhost NetworkManager[5987]: [1771830886.8356] device (vlan44): carrier: link connected
Feb 23 02:14:47 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 23 02:14:49 localhost NetworkManager[5987]: [1771830889.8978] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23228 uid=0 result="success"
Feb 23 02:14:49 localhost NetworkManager[5987]: [1771830889.9450] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23243 uid=0 result="success"
Feb 23 02:14:50 localhost NetworkManager[5987]: [1771830890.0084] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23264 uid=0 result="success"
Feb 23 02:14:50 localhost ifup[23265]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 23 02:14:50 localhost ifup[23266]: 'network-scripts' will be removed from distribution in near future.
Feb 23 02:14:50 localhost ifup[23267]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 23 02:14:50 localhost NetworkManager[5987]: [1771830890.0412] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23273 uid=0 result="success"
Feb 23 02:14:50 localhost ovs-vsctl[23276]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Feb 23 02:14:50 localhost systemd-udevd[23278]: Network interface NamePolicy= disabled on kernel command line.
Feb 23 02:14:50 localhost kernel: device vlan21 entered promiscuous mode
Feb 23 02:14:50 localhost NetworkManager[5987]: [1771830890.0864] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/9)
Feb 23 02:14:50 localhost NetworkManager[5987]: [1771830890.1118] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23288 uid=0 result="success"
Feb 23 02:14:50 localhost NetworkManager[5987]: [1771830890.1322] device (vlan21): carrier: link connected
Feb 23 02:14:53 localhost NetworkManager[5987]: [1771830893.1848] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23318 uid=0 result="success"
Feb 23 02:14:53 localhost NetworkManager[5987]: [1771830893.2316] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23333 uid=0 result="success"
Feb 23 02:14:53 localhost NetworkManager[5987]: [1771830893.2878] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23354 uid=0 result="success"
Feb 23 02:14:53 localhost ifup[23355]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 23 02:14:53 localhost ifup[23356]: 'network-scripts' will be removed from distribution in near future.
Feb 23 02:14:53 localhost ifup[23357]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 23 02:14:53 localhost NetworkManager[5987]: [1771830893.3192] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23363 uid=0 result="success"
Feb 23 02:14:53 localhost ovs-vsctl[23366]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Feb 23 02:14:53 localhost kernel: device vlan20 entered promiscuous mode
Feb 23 02:14:53 localhost systemd-udevd[23368]: Network interface NamePolicy= disabled on kernel command line.
Feb 23 02:14:53 localhost NetworkManager[5987]: [1771830893.3605] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/10)
Feb 23 02:14:53 localhost NetworkManager[5987]: [1771830893.3867] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23378 uid=0 result="success"
Feb 23 02:14:53 localhost NetworkManager[5987]: [1771830893.4084] device (vlan20): carrier: link connected
Feb 23 02:14:56 localhost NetworkManager[5987]: [1771830896.4622] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23408 uid=0 result="success"
Feb 23 02:14:56 localhost NetworkManager[5987]: [1771830896.5041] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23423 uid=0 result="success"
Feb 23 02:14:56 localhost NetworkManager[5987]: [1771830896.5527] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23444 uid=0 result="success"
Feb 23 02:14:56 localhost ifup[23445]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 23 02:14:56 localhost ifup[23446]: 'network-scripts' will be removed from distribution in near future.
Feb 23 02:14:56 localhost ifup[23447]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 23 02:14:56 localhost NetworkManager[5987]: [1771830896.5846] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23453 uid=0 result="success"
Feb 23 02:14:56 localhost ovs-vsctl[23456]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Feb 23 02:14:56 localhost NetworkManager[5987]: [1771830896.6434] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23463 uid=0 result="success"
Feb 23 02:14:57 localhost NetworkManager[5987]: [1771830897.6950] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23490 uid=0 result="success"
Feb 23 02:14:57 localhost NetworkManager[5987]: [1771830897.7310] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23505 uid=0 result="success"
Feb 23 02:14:57 localhost NetworkManager[5987]: [1771830897.7839] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23526 uid=0 result="success"
Feb 23 02:14:57 localhost ifup[23527]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 23 02:14:57 localhost ifup[23528]: 'network-scripts' will be removed from distribution in near future.
Feb 23 02:14:57 localhost ifup[23529]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 23 02:14:57 localhost NetworkManager[5987]: [1771830897.8062] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23535 uid=0 result="success"
Feb 23 02:14:57 localhost ovs-vsctl[23538]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Feb 23 02:14:57 localhost NetworkManager[5987]: [1771830897.8556] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23545 uid=0 result="success"
Feb 23 02:14:58 localhost NetworkManager[5987]: [1771830898.9170] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23573 uid=0 result="success"
Feb 23 02:14:58 localhost NetworkManager[5987]: [1771830898.9644] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23588 uid=0 result="success"
Feb 23 02:14:59 localhost NetworkManager[5987]: [1771830899.0162] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23609 uid=0 result="success"
Feb 23 02:14:59 localhost ifup[23610]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 23 02:14:59 localhost ifup[23611]: 'network-scripts' will be removed from distribution in near future.
Feb 23 02:14:59 localhost ifup[23612]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 23 02:14:59 localhost NetworkManager[5987]: [1771830899.0473] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23618 uid=0 result="success" Feb 23 02:14:59 localhost ovs-vsctl[23621]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal Feb 23 02:14:59 localhost NetworkManager[5987]: [1771830899.1027] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23628 uid=0 result="success" Feb 23 02:15:00 localhost NetworkManager[5987]: [1771830900.1612] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23656 uid=0 result="success" Feb 23 02:15:00 localhost NetworkManager[5987]: [1771830900.2099] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23671 uid=0 result="success" Feb 23 02:15:00 localhost NetworkManager[5987]: [1771830900.2697] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23692 uid=0 result="success" Feb 23 02:15:00 localhost ifup[23693]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Feb 23 02:15:00 localhost ifup[23694]: 'network-scripts' will be removed from distribution in near future. Feb 23 02:15:00 localhost ifup[23695]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Feb 23 02:15:00 localhost NetworkManager[5987]: [1771830900.3021] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23701 uid=0 result="success" Feb 23 02:15:00 localhost ovs-vsctl[23704]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal Feb 23 02:15:00 localhost NetworkManager[5987]: [1771830900.3618] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23711 uid=0 result="success" Feb 23 02:15:01 localhost NetworkManager[5987]: [1771830901.4208] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23739 uid=0 result="success" Feb 23 02:15:01 localhost NetworkManager[5987]: [1771830901.4671] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23754 uid=0 result="success" Feb 23 02:15:01 localhost NetworkManager[5987]: [1771830901.5235] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23775 uid=0 result="success" Feb 23 02:15:01 localhost ifup[23776]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Feb 23 02:15:01 localhost ifup[23777]: 'network-scripts' will be removed from distribution in near future. Feb 23 02:15:01 localhost ifup[23778]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Feb 23 02:15:01 localhost NetworkManager[5987]: [1771830901.5568] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23784 uid=0 result="success" Feb 23 02:15:01 localhost ovs-vsctl[23787]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal Feb 23 02:15:01 localhost NetworkManager[5987]: [1771830901.6141] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23794 uid=0 result="success" Feb 23 02:15:02 localhost NetworkManager[5987]: [1771830902.6705] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23822 uid=0 result="success" Feb 23 02:15:02 localhost NetworkManager[5987]: [1771830902.7161] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23837 uid=0 result="success" Feb 23 02:15:16 localhost sshd[23855]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:15:19 localhost sshd[23857]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:15:55 localhost python3[23873]: ansible-ansible.legacy.command Invoked with _raw_params=ip a#012ping -c 2 -W 2 192.168.122.10#012ping -c 2 -W 2 192.168.122.11#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-669c-02d2-00000000001b-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:15:59 localhost sshd[23879]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:16:01 localhost python3[23894]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCaFiAv9bTisS17GN1FZ7h/VJaLu2YTZdiVc9K45WqX/JhZ8Pwx1BXqBJEGlK+qmwEqEak4GGxIE3mEhiQwiY15E3nfJntSoOu8LhTWltcLr5Uy2mx8nXUZwB3QoNiY4y5uXCriTu6HLCXbdwhnWWT8PgX7V31GosHI0JQEpo4ixoShAJC2sFXL7wd3dTaZBo73qVrUhmekv/2GJo179k6wGblVjOwgB9nkfcuo0acLUaot1Uhc7ZrZM3Nfa7bjrW7OigrLvtNra7bBsjfeTgu6vOxxy1DcTD1xBKab631zTIIugiPViMTGrxgsBpyOc4tJpl1grZCJxm8mBDN25oK2jvP/NwUcC3C9ASWEr9U4QiAOTYN03OAyLGbhq48W3SYHwIi8/awDfJapvA+5rlCq/Xb+Fi/KrAaPxVoEehqWLBzCv0u/ZLZarmOih5rcgNTLMQ3l1/nVNtx+VP3eLtqA5a71JqntFfS80adH/3Px+Wen0lIixRqguGNzrD8ZGcU= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 23 02:16:01 localhost python3[23910]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCaFiAv9bTisS17GN1FZ7h/VJaLu2YTZdiVc9K45WqX/JhZ8Pwx1BXqBJEGlK+qmwEqEak4GGxIE3mEhiQwiY15E3nfJntSoOu8LhTWltcLr5Uy2mx8nXUZwB3QoNiY4y5uXCriTu6HLCXbdwhnWWT8PgX7V31GosHI0JQEpo4ixoShAJC2sFXL7wd3dTaZBo73qVrUhmekv/2GJo179k6wGblVjOwgB9nkfcuo0acLUaot1Uhc7ZrZM3Nfa7bjrW7OigrLvtNra7bBsjfeTgu6vOxxy1DcTD1xBKab631zTIIugiPViMTGrxgsBpyOc4tJpl1grZCJxm8mBDN25oK2jvP/NwUcC3C9ASWEr9U4QiAOTYN03OAyLGbhq48W3SYHwIi8/awDfJapvA+5rlCq/Xb+Fi/KrAaPxVoEehqWLBzCv0u/ZLZarmOih5rcgNTLMQ3l1/nVNtx+VP3eLtqA5a71JqntFfS80adH/3Px+Wen0lIixRqguGNzrD8ZGcU= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 23 02:16:03 localhost python3[23924]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCaFiAv9bTisS17GN1FZ7h/VJaLu2YTZdiVc9K45WqX/JhZ8Pwx1BXqBJEGlK+qmwEqEak4GGxIE3mEhiQwiY15E3nfJntSoOu8LhTWltcLr5Uy2mx8nXUZwB3QoNiY4y5uXCriTu6HLCXbdwhnWWT8PgX7V31GosHI0JQEpo4ixoShAJC2sFXL7wd3dTaZBo73qVrUhmekv/2GJo179k6wGblVjOwgB9nkfcuo0acLUaot1Uhc7ZrZM3Nfa7bjrW7OigrLvtNra7bBsjfeTgu6vOxxy1DcTD1xBKab631zTIIugiPViMTGrxgsBpyOc4tJpl1grZCJxm8mBDN25oK2jvP/NwUcC3C9ASWEr9U4QiAOTYN03OAyLGbhq48W3SYHwIi8/awDfJapvA+5rlCq/Xb+Fi/KrAaPxVoEehqWLBzCv0u/ZLZarmOih5rcgNTLMQ3l1/nVNtx+VP3eLtqA5a71JqntFfS80adH/3Px+Wen0lIixRqguGNzrD8ZGcU= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 23 02:16:03 localhost python3[23940]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCaFiAv9bTisS17GN1FZ7h/VJaLu2YTZdiVc9K45WqX/JhZ8Pwx1BXqBJEGlK+qmwEqEak4GGxIE3mEhiQwiY15E3nfJntSoOu8LhTWltcLr5Uy2mx8nXUZwB3QoNiY4y5uXCriTu6HLCXbdwhnWWT8PgX7V31GosHI0JQEpo4ixoShAJC2sFXL7wd3dTaZBo73qVrUhmekv/2GJo179k6wGblVjOwgB9nkfcuo0acLUaot1Uhc7ZrZM3Nfa7bjrW7OigrLvtNra7bBsjfeTgu6vOxxy1DcTD1xBKab631zTIIugiPViMTGrxgsBpyOc4tJpl1grZCJxm8mBDN25oK2jvP/NwUcC3C9ASWEr9U4QiAOTYN03OAyLGbhq48W3SYHwIi8/awDfJapvA+5rlCq/Xb+Fi/KrAaPxVoEehqWLBzCv0u/ZLZarmOih5rcgNTLMQ3l1/nVNtx+VP3eLtqA5a71JqntFfS80adH/3Px+Wen0lIixRqguGNzrD8ZGcU= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 23 02:16:04 localhost python3[23954]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname Feb 23 02:16:05 localhost python3[23969]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005626465.novalocal"#012hostname_str_array=(${hostname//./ })#012echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-669c-02d2-000000000022-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True 
strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:16:06 localhost python3[23989]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)#012hostnamectl hostname "$hostname.localdomain"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-669c-02d2-000000000023-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:16:06 localhost systemd[1]: Starting Hostname Service... Feb 23 02:16:06 localhost systemd[1]: Started Hostname Service. Feb 23 02:16:06 localhost systemd-hostnamed[23993]: Hostname set to (static) Feb 23 02:16:06 localhost NetworkManager[5987]: [1771830966.8091] hostname: static hostname changed from "np0005626465.novalocal" to "np0005626465.localdomain" Feb 23 02:16:06 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Feb 23 02:16:06 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Feb 23 02:16:08 localhost systemd-logind[759]: Session 10 logged out. Waiting for processes to exit. Feb 23 02:16:08 localhost systemd[1]: session-10.scope: Deactivated successfully. Feb 23 02:16:08 localhost systemd[1]: session-10.scope: Consumed 1min 45.004s CPU time. Feb 23 02:16:08 localhost systemd-logind[759]: Removed session 10. Feb 23 02:16:11 localhost sshd[24004]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:16:11 localhost systemd-logind[759]: New session 11 of user zuul. Feb 23 02:16:11 localhost systemd[1]: Started Session 11 of User zuul. Feb 23 02:16:11 localhost python3[24021]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Feb 23 02:16:12 localhost sshd[24022]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:16:13 localhost systemd[1]: session-11.scope: Deactivated successfully. 
Feb 23 02:16:13 localhost systemd-logind[759]: Session 11 logged out. Waiting for processes to exit. Feb 23 02:16:13 localhost systemd-logind[759]: Removed session 11. Feb 23 02:16:16 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Feb 23 02:16:36 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 23 02:16:39 localhost sshd[24026]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:17:05 localhost sshd[24028]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:17:05 localhost systemd-logind[759]: New session 12 of user zuul. Feb 23 02:17:05 localhost systemd[1]: Started Session 12 of User zuul. Feb 23 02:17:05 localhost python3[24047]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 23 02:17:09 localhost systemd[1]: Reloading. Feb 23 02:17:09 localhost sshd[24066]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:17:09 localhost systemd-rc-local-generator[24089]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:17:09 localhost systemd-sysv-generator[24095]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:17:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 23 02:17:09 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs. Feb 23 02:17:10 localhost systemd[1]: Reloading. Feb 23 02:17:10 localhost systemd-rc-local-generator[24132]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:17:10 localhost systemd-sysv-generator[24137]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:17:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:17:10 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... Feb 23 02:17:10 localhost systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling. Feb 23 02:17:10 localhost systemd[1]: Reloading. Feb 23 02:17:10 localhost systemd-rc-local-generator[24171]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:17:10 localhost systemd-sysv-generator[24176]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:17:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:17:10 localhost systemd[1]: Listening on LVM2 poll daemon socket. Feb 23 02:17:10 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 23 02:17:10 localhost systemd[1]: Starting man-db-cache-update.service... Feb 23 02:17:10 localhost systemd[1]: Reloading. 
Feb 23 02:17:10 localhost systemd-rc-local-generator[24228]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:17:10 localhost systemd-sysv-generator[24232]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:17:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:17:11 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 23 02:17:11 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 23 02:17:11 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 23 02:17:11 localhost systemd[1]: Finished man-db-cache-update.service. Feb 23 02:17:11 localhost systemd[1]: run-r1f18c742aa4e4962a5a59ff182db4513.service: Deactivated successfully. Feb 23 02:17:11 localhost systemd[1]: run-r6621b371f9254521b7deb7849cd35bae.service: Deactivated successfully. Feb 23 02:17:21 localhost sshd[24823]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:18:02 localhost sshd[24825]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:18:06 localhost sshd[24827]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:18:11 localhost systemd[1]: session-12.scope: Deactivated successfully. Feb 23 02:18:11 localhost systemd[1]: session-12.scope: Consumed 4.717s CPU time. Feb 23 02:18:11 localhost systemd-logind[759]: Session 12 logged out. Waiting for processes to exit. Feb 23 02:18:11 localhost systemd-logind[759]: Removed session 12. 
Feb 23 02:18:41 localhost sshd[24829]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:19:00 localhost sshd[24831]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:19:19 localhost sshd[24833]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:19:53 localhost sshd[24835]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:19:57 localhost sshd[24838]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:20:35 localhost sshd[24840]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:20:45 localhost sshd[24842]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:21:09 localhost sshd[24844]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:21:13 localhost sshd[24846]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:21:38 localhost sshd[24848]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:21:50 localhost sshd[24850]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:22:27 localhost sshd[24852]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:22:32 localhost sshd[24854]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:31:02 localhost sshd[24860]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:31:22 localhost sshd[24863]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:34:09 localhost sshd[24864]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:34:09 localhost systemd-logind[759]: New session 13 of user zuul. Feb 23 02:34:09 localhost systemd[1]: Started Session 13 of User zuul. 
Feb 23 02:34:09 localhost python3[24912]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 02:34:11 localhost python3[24999]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 23 02:34:14 localhost python3[25016]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 02:34:15 localhost python3[25032]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:34:15 localhost kernel: loop: module loaded Feb 23 02:34:15 localhost kernel: loop3: detected capacity change from 0 to 14680064 Feb 23 02:34:15 localhost python3[25058]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:34:16 localhost lvm[25061]: PV /dev/loop3 not used. Feb 23 02:34:16 localhost lvm[25063]: PV /dev/loop3 online, VG ceph_vg0 is complete. 
Feb 23 02:34:16 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0. Feb 23 02:34:16 localhost lvm[25068]: 1 logical volume(s) in volume group "ceph_vg0" now active Feb 23 02:34:16 localhost lvm[25073]: PV /dev/loop3 online, VG ceph_vg0 is complete. Feb 23 02:34:16 localhost lvm[25073]: VG ceph_vg0 finished Feb 23 02:34:16 localhost systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully. Feb 23 02:34:16 localhost python3[25121]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:34:17 localhost python3[25164]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771832056.4921207-55333-78803377132854/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:34:18 localhost python3[25194]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 02:34:18 localhost systemd[1]: Reloading. Feb 23 02:34:18 localhost systemd-rc-local-generator[25222]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:34:18 localhost systemd-sysv-generator[25226]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 02:34:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:34:18 localhost systemd[1]: Starting Ceph OSD losetup... Feb 23 02:34:18 localhost bash[25235]: /dev/loop3: [64516]:8400144 (/var/lib/ceph-osd-0.img) Feb 23 02:34:18 localhost systemd[1]: Finished Ceph OSD losetup. Feb 23 02:34:18 localhost lvm[25236]: PV /dev/loop3 online, VG ceph_vg0 is complete. Feb 23 02:34:18 localhost lvm[25236]: VG ceph_vg0 finished Feb 23 02:34:20 localhost python3[25253]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 23 02:34:22 localhost python3[25270]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 02:34:23 localhost python3[25286]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:34:23 localhost kernel: loop4: detected capacity change from 0 to 14680064 Feb 23 02:34:24 localhost python3[25308]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n 
ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:34:24 localhost lvm[25311]: PV /dev/loop4 not used. Feb 23 02:34:24 localhost lvm[25313]: PV /dev/loop4 online, VG ceph_vg1 is complete. Feb 23 02:34:24 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1. Feb 23 02:34:24 localhost lvm[25323]: 1 logical volume(s) in volume group "ceph_vg1" now active Feb 23 02:34:24 localhost systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully. Feb 23 02:34:24 localhost python3[25371]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:34:25 localhost python3[25414]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771832064.5570326-55437-204757308708386/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:34:25 localhost python3[25444]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 02:34:25 localhost systemd[1]: Reloading. Feb 23 02:34:25 localhost systemd-sysv-generator[25477]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:34:25 localhost systemd-rc-local-generator[25473]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:34:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:34:26 localhost systemd[1]: Starting Ceph OSD losetup... Feb 23 02:34:26 localhost bash[25485]: /dev/loop4: [64516]:8399529 (/var/lib/ceph-osd-1.img) Feb 23 02:34:26 localhost systemd[1]: Finished Ceph OSD losetup. Feb 23 02:34:26 localhost lvm[25486]: PV /dev/loop4 online, VG ceph_vg1 is complete. Feb 23 02:34:26 localhost lvm[25486]: VG ceph_vg1 finished Feb 23 02:34:35 localhost python3[25531]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d Feb 23 02:34:37 localhost python3[25551]: ansible-hostname Invoked with name=np0005626465.localdomain use=None Feb 23 02:34:37 localhost systemd[1]: Starting Hostname Service... Feb 23 02:34:37 localhost systemd[1]: Started Hostname Service. Feb 23 02:34:39 localhost python3[25574]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None Feb 23 02:34:40 localhost python3[25622]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.e98_o189tmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:34:40 localhost python3[25652]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.e98_o189tmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! 
marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:34:41 localhost python3[25668]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.e98_o189tmphosts insertbefore=BOF block=192.168.122.106 np0005626463.localdomain np0005626463#012192.168.122.106 np0005626463.ctlplane.localdomain np0005626463.ctlplane#012192.168.122.107 np0005626465.localdomain np0005626465#012192.168.122.107 np0005626465.ctlplane.localdomain np0005626465.ctlplane#012192.168.122.108 np0005626466.localdomain np0005626466#012192.168.122.108 np0005626466.ctlplane.localdomain np0005626466.ctlplane#012192.168.122.103 np0005626459.localdomain np0005626459#012192.168.122.103 np0005626459.ctlplane.localdomain np0005626459.ctlplane#012192.168.122.104 np0005626460.localdomain np0005626460#012192.168.122.104 np0005626460.ctlplane.localdomain np0005626460.ctlplane#012192.168.122.105 np0005626461.localdomain np0005626461#012192.168.122.105 np0005626461.ctlplane.localdomain np0005626461.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:34:41 localhost python3[25684]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.e98_o189tmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:34:42 localhost python3[25701]: ansible-file Invoked with path=/tmp/ansible.e98_o189tmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:34:44 localhost python3[25717]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:34:45 localhost python3[25735]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 23 02:34:49 localhost python3[25784]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:34:49 localhost python3[25829]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771832088.7939324-56352-129787799545118/source dest=/etc/chrony.conf owner=root group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:34:51 localhost python3[25859]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started 
daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 02:34:52 localhost python3[25877]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 02:34:52 localhost chronyd[765]: chronyd exiting Feb 23 02:34:52 localhost systemd[1]: Stopping NTP client/server... Feb 23 02:34:52 localhost systemd[1]: chronyd.service: Deactivated successfully. Feb 23 02:34:52 localhost systemd[1]: Stopped NTP client/server. Feb 23 02:34:52 localhost systemd[1]: chronyd.service: Consumed 93ms CPU time, read 1.9M from disk, written 0B to disk. Feb 23 02:34:52 localhost systemd[1]: Starting NTP client/server... Feb 23 02:34:52 localhost chronyd[25884]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Feb 23 02:34:52 localhost chronyd[25884]: Frequency -30.091 +/- 0.066 ppm read from /var/lib/chrony/drift Feb 23 02:34:52 localhost chronyd[25884]: Loaded seccomp filter (level 2) Feb 23 02:34:52 localhost systemd[1]: Started NTP client/server. 
Feb 23 02:34:54 localhost python3[25933]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:34:54 localhost python3[25976]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771832094.2558992-56515-147418021858555/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:34:55 localhost python3[26006]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 02:34:55 localhost systemd[1]: Reloading.
Feb 23 02:34:55 localhost systemd-sysv-generator[26032]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 02:34:55 localhost systemd-rc-local-generator[26029]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 02:34:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 02:34:55 localhost systemd[1]: Reloading.
Feb 23 02:34:56 localhost systemd-sysv-generator[26075]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 02:34:56 localhost systemd-rc-local-generator[26069]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 02:34:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 02:34:56 localhost systemd[1]: Starting chronyd online sources service...
Feb 23 02:34:56 localhost chronyc[26082]: 200 OK
Feb 23 02:34:56 localhost systemd[1]: chrony-online.service: Deactivated successfully.
Feb 23 02:34:56 localhost systemd[1]: Finished chronyd online sources service.
Feb 23 02:34:57 localhost chronyd[25884]: Selected source 216.128.178.20 (pool.ntp.org)
Feb 23 02:34:57 localhost python3[26098]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:34:57 localhost chronyd[25884]: System clock was stepped by 0.001392 seconds
Feb 23 02:34:57 localhost python3[26115]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:35:07 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 23 02:35:08 localhost python3[26135]: ansible-timezone Invoked with name=UTC hwclock=None
Feb 23 02:35:08 localhost systemd[1]: Starting Time & Date Service...
Feb 23 02:35:08 localhost systemd[1]: Started Time & Date Service.
Feb 23 02:35:08 localhost python3[26155]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 23 02:35:09 localhost chronyd[25884]: chronyd exiting
Feb 23 02:35:09 localhost systemd[1]: Stopping NTP client/server...
Feb 23 02:35:09 localhost systemd[1]: chronyd.service: Deactivated successfully.
Feb 23 02:35:09 localhost systemd[1]: Stopped NTP client/server.
Feb 23 02:35:09 localhost systemd[1]: Starting NTP client/server...
Feb 23 02:35:09 localhost chronyd[26162]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Feb 23 02:35:09 localhost chronyd[26162]: Frequency -30.091 +/- 0.091 ppm read from /var/lib/chrony/drift
Feb 23 02:35:09 localhost chronyd[26162]: Loaded seccomp filter (level 2)
Feb 23 02:35:09 localhost systemd[1]: Started NTP client/server.
Feb 23 02:35:13 localhost chronyd[26162]: Selected source 167.160.187.179 (pool.ntp.org)
Feb 23 02:35:38 localhost systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 23 02:35:51 localhost sshd[26359]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:36:30 localhost sshd[26361]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:36:47 localhost sshd[26362]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:37:08 localhost sshd[26364]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:37:08 localhost systemd-logind[759]: New session 14 of user ceph-admin.
Feb 23 02:37:08 localhost systemd[1]: Created slice User Slice of UID 1002.
Feb 23 02:37:08 localhost systemd[1]: Starting User Runtime Directory /run/user/1002...
Feb 23 02:37:08 localhost systemd[1]: Finished User Runtime Directory /run/user/1002.
Feb 23 02:37:08 localhost systemd[1]: Starting User Manager for UID 1002...
Feb 23 02:37:09 localhost sshd[26381]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:37:09 localhost systemd[26368]: Queued start job for default target Main User Target.
Feb 23 02:37:09 localhost systemd[26368]: Created slice User Application Slice.
Feb 23 02:37:09 localhost systemd[26368]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 23 02:37:09 localhost systemd[26368]: Started Daily Cleanup of User's Temporary Directories.
Feb 23 02:37:09 localhost systemd[26368]: Reached target Paths.
Feb 23 02:37:09 localhost systemd[26368]: Reached target Timers.
Feb 23 02:37:09 localhost systemd[26368]: Starting D-Bus User Message Bus Socket...
Feb 23 02:37:09 localhost systemd[26368]: Starting Create User's Volatile Files and Directories...
Feb 23 02:37:09 localhost systemd[26368]: Finished Create User's Volatile Files and Directories.
Feb 23 02:37:09 localhost systemd[26368]: Listening on D-Bus User Message Bus Socket.
Feb 23 02:37:09 localhost systemd[26368]: Reached target Sockets.
Feb 23 02:37:09 localhost systemd[26368]: Reached target Basic System.
Feb 23 02:37:09 localhost systemd[26368]: Reached target Main User Target.
Feb 23 02:37:09 localhost systemd[26368]: Startup finished in 141ms.
Feb 23 02:37:09 localhost systemd[1]: Started User Manager for UID 1002.
Feb 23 02:37:09 localhost systemd[1]: Started Session 14 of User ceph-admin.
Feb 23 02:37:09 localhost systemd-logind[759]: New session 16 of user ceph-admin.
Feb 23 02:37:09 localhost systemd[1]: Started Session 16 of User ceph-admin.
Feb 23 02:37:09 localhost sshd[26403]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:37:09 localhost systemd-logind[759]: New session 17 of user ceph-admin.
Feb 23 02:37:09 localhost systemd[1]: Started Session 17 of User ceph-admin.
Feb 23 02:37:09 localhost sshd[26422]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:37:10 localhost systemd-logind[759]: New session 18 of user ceph-admin.
Feb 23 02:37:10 localhost systemd[1]: Started Session 18 of User ceph-admin.
Feb 23 02:37:10 localhost sshd[26441]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:37:10 localhost systemd-logind[759]: New session 19 of user ceph-admin.
Feb 23 02:37:10 localhost systemd[1]: Started Session 19 of User ceph-admin.
Feb 23 02:37:10 localhost sshd[26460]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:37:11 localhost systemd-logind[759]: New session 20 of user ceph-admin.
Feb 23 02:37:11 localhost systemd[1]: Started Session 20 of User ceph-admin.
Feb 23 02:37:11 localhost sshd[26479]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:37:11 localhost systemd-logind[759]: New session 21 of user ceph-admin.
Feb 23 02:37:11 localhost systemd[1]: Started Session 21 of User ceph-admin.
Feb 23 02:37:11 localhost sshd[26498]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:37:11 localhost systemd-logind[759]: New session 22 of user ceph-admin.
Feb 23 02:37:11 localhost systemd[1]: Started Session 22 of User ceph-admin.
Feb 23 02:37:12 localhost sshd[26517]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:37:12 localhost systemd-logind[759]: New session 23 of user ceph-admin.
Feb 23 02:37:12 localhost systemd[1]: Started Session 23 of User ceph-admin.
Feb 23 02:37:12 localhost sshd[26536]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:37:12 localhost systemd-logind[759]: New session 24 of user ceph-admin.
Feb 23 02:37:12 localhost systemd[1]: Started Session 24 of User ceph-admin.
Feb 23 02:37:13 localhost sshd[26553]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:37:13 localhost systemd-logind[759]: New session 25 of user ceph-admin.
Feb 23 02:37:13 localhost systemd[1]: Started Session 25 of User ceph-admin.
Feb 23 02:37:13 localhost sshd[26572]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:37:13 localhost systemd-logind[759]: New session 26 of user ceph-admin.
Feb 23 02:37:13 localhost systemd[1]: Started Session 26 of User ceph-admin.
Feb 23 02:37:13 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 23 02:37:16 localhost sshd[26611]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:37:39 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 23 02:37:40 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 23 02:37:40 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 23 02:37:40 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 23 02:37:40 localhost systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 26789 (sysctl)
Feb 23 02:37:40 localhost systemd[1]: Mounting Arbitrary Executable File Formats File System...
Feb 23 02:37:40 localhost systemd[1]: Mounted Arbitrary Executable File Formats File System.
Feb 23 02:37:41 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 23 02:37:41 localhost sshd[26862]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:37:42 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 23 02:37:51 localhost kernel: VFS: idmapped mount is not enabled.
Feb 23 02:38:11 localhost podman[26932]:
Feb 23 02:38:11 localhost podman[26932]: 2026-02-23 07:38:11.273366367 +0000 UTC m=+29.011687771 container create 0216f1993767448b5de7abfbef7c91fee509b316b756369c2216e36c3d084c95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_banzai, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=1770267347, CEPH_POINT_RELEASE=, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, distribution-scope=public, architecture=x86_64, version=7, vendor=Red Hat, Inc.)
Feb 23 02:38:11 localhost systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck218897468-merged.mount: Deactivated successfully.
Feb 23 02:38:11 localhost systemd[1]: Created slice Slice /machine.
Feb 23 02:38:11 localhost systemd[1]: Started libpod-conmon-0216f1993767448b5de7abfbef7c91fee509b316b756369c2216e36c3d084c95.scope.
Feb 23 02:38:11 localhost systemd[1]: Started libcrun container.
Feb 23 02:38:11 localhost podman[26932]: 2026-02-23 07:37:42.304196163 +0000 UTC m=+0.042517617 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 02:38:11 localhost podman[26932]: 2026-02-23 07:38:11.388263184 +0000 UTC m=+29.126584608 container init 0216f1993767448b5de7abfbef7c91fee509b316b756369c2216e36c3d084c95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_banzai, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.component=rhceph-container, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, architecture=x86_64)
Feb 23 02:38:11 localhost podman[26932]: 2026-02-23 07:38:11.401948584 +0000 UTC m=+29.140270018 container start 0216f1993767448b5de7abfbef7c91fee509b316b756369c2216e36c3d084c95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_banzai, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, name=rhceph, distribution-scope=public, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, release=1770267347, description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 23 02:38:11 localhost podman[26932]: 2026-02-23 07:38:11.402428146 +0000 UTC m=+29.140749580 container attach 0216f1993767448b5de7abfbef7c91fee509b316b756369c2216e36c3d084c95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_banzai, architecture=x86_64, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, name=rhceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.buildah.version=1.42.2, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 23 02:38:11 localhost festive_banzai[27058]: 167 167
Feb 23 02:38:11 localhost systemd[1]: libpod-0216f1993767448b5de7abfbef7c91fee509b316b756369c2216e36c3d084c95.scope: Deactivated successfully.
Feb 23 02:38:11 localhost podman[26932]: 2026-02-23 07:38:11.405712167 +0000 UTC m=+29.144033661 container died 0216f1993767448b5de7abfbef7c91fee509b316b756369c2216e36c3d084c95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_banzai, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, release=1770267347, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, distribution-scope=public, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 23 02:38:11 localhost podman[27063]: 2026-02-23 07:38:11.505750181 +0000 UTC m=+0.086187987 container remove 0216f1993767448b5de7abfbef7c91fee509b316b756369c2216e36c3d084c95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_banzai, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , release=1770267347, distribution-scope=public, name=rhceph, io.openshift.expose-services=, version=7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, architecture=x86_64)
Feb 23 02:38:11 localhost systemd[1]: libpod-conmon-0216f1993767448b5de7abfbef7c91fee509b316b756369c2216e36c3d084c95.scope: Deactivated successfully.
Feb 23 02:38:11 localhost podman[27085]:
Feb 23 02:38:11 localhost podman[27085]: 2026-02-23 07:38:11.740933164 +0000 UTC m=+0.089831815 container create 33d221bd21554400a3e6e6b5ca833578649f57a415c0d81794f2e39d6d3f1c41 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_bohr, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, release=1770267347, vcs-type=git, com.redhat.component=rhceph-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True)
Feb 23 02:38:11 localhost systemd[1]: Started libpod-conmon-33d221bd21554400a3e6e6b5ca833578649f57a415c0d81794f2e39d6d3f1c41.scope.
Feb 23 02:38:11 localhost podman[27085]: 2026-02-23 07:38:11.702244153 +0000 UTC m=+0.051142794 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 02:38:11 localhost systemd[1]: Started libcrun container.
Feb 23 02:38:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e432229855dc840fcbea44687ebd71189dac40e5dd2cc8285f73c6c1e07c8d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e432229855dc840fcbea44687ebd71189dac40e5dd2cc8285f73c6c1e07c8d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:11 localhost podman[27085]: 2026-02-23 07:38:11.838874441 +0000 UTC m=+0.187773082 container init 33d221bd21554400a3e6e6b5ca833578649f57a415c0d81794f2e39d6d3f1c41 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_bohr, release=1770267347, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, distribution-scope=public, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2)
Feb 23 02:38:11 localhost podman[27085]: 2026-02-23 07:38:11.850562229 +0000 UTC m=+0.199460830 container start 33d221bd21554400a3e6e6b5ca833578649f57a415c0d81794f2e39d6d3f1c41 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_bohr, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, release=1770267347, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, version=7, name=rhceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-type=git, RELEASE=main)
Feb 23 02:38:11 localhost podman[27085]: 2026-02-23 07:38:11.850843252 +0000 UTC m=+0.199741913 container attach 33d221bd21554400a3e6e6b5ca833578649f57a415c0d81794f2e39d6d3f1c41 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_bohr, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-type=git, ceph=True, version=7, name=rhceph, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7)
Feb 23 02:38:12 localhost systemd[1]: var-lib-containers-storage-overlay-a303c7e7e9aae0363cf41c7172d66f2486468ffb6b5815fa04733197971c8fff-merged.mount: Deactivated successfully.
Feb 23 02:38:12 localhost condescending_bohr[27101]: [
Feb 23 02:38:12 localhost condescending_bohr[27101]: {
Feb 23 02:38:12 localhost condescending_bohr[27101]: "available": false,
Feb 23 02:38:12 localhost condescending_bohr[27101]: "ceph_device": false,
Feb 23 02:38:12 localhost condescending_bohr[27101]: "device_id": "QEMU_DVD-ROM_QM00001",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "lsm_data": {},
Feb 23 02:38:12 localhost condescending_bohr[27101]: "lvs": [],
Feb 23 02:38:12 localhost condescending_bohr[27101]: "path": "/dev/sr0",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "rejected_reasons": [
Feb 23 02:38:12 localhost condescending_bohr[27101]: "Has a FileSystem",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "Insufficient space (<5GB)"
Feb 23 02:38:12 localhost condescending_bohr[27101]: ],
Feb 23 02:38:12 localhost condescending_bohr[27101]: "sys_api": {
Feb 23 02:38:12 localhost condescending_bohr[27101]: "actuators": null,
Feb 23 02:38:12 localhost condescending_bohr[27101]: "device_nodes": "sr0",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "human_readable_size": "482.00 KB",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "id_bus": "ata",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "model": "QEMU DVD-ROM",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "nr_requests": "2",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "partitions": {},
Feb 23 02:38:12 localhost condescending_bohr[27101]: "path": "/dev/sr0",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "removable": "1",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "rev": "2.5+",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "ro": "0",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "rotational": "1",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "sas_address": "",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "sas_device_handle": "",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "scheduler_mode": "mq-deadline",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "sectors": 0,
Feb 23 02:38:12 localhost condescending_bohr[27101]: "sectorsize": "2048",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "size": 493568.0,
Feb 23 02:38:12 localhost condescending_bohr[27101]: "support_discard": "0",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "type": "disk",
Feb 23 02:38:12 localhost condescending_bohr[27101]: "vendor": "QEMU"
Feb 23 02:38:12 localhost condescending_bohr[27101]: }
Feb 23 02:38:12 localhost condescending_bohr[27101]: }
Feb 23 02:38:12 localhost condescending_bohr[27101]: ]
Feb 23 02:38:12 localhost systemd[1]: libpod-33d221bd21554400a3e6e6b5ca833578649f57a415c0d81794f2e39d6d3f1c41.scope: Deactivated successfully.
Feb 23 02:38:12 localhost podman[27085]: 2026-02-23 07:38:12.703090491 +0000 UTC m=+1.051989152 container died 33d221bd21554400a3e6e6b5ca833578649f57a415c0d81794f2e39d6d3f1c41 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_bohr, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , distribution-scope=public, RELEASE=main, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, version=7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 23 02:38:12 localhost systemd[1]: var-lib-containers-storage-overlay-3e432229855dc840fcbea44687ebd71189dac40e5dd2cc8285f73c6c1e07c8d3-merged.mount: Deactivated successfully.
Feb 23 02:38:12 localhost podman[28230]: 2026-02-23 07:38:12.778542013 +0000 UTC m=+0.068388588 container remove 33d221bd21554400a3e6e6b5ca833578649f57a415c0d81794f2e39d6d3f1c41 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_bohr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, maintainer=Guillaume Abrioux , name=rhceph, version=7, build-date=2026-02-09T10:25:24Z, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph)
Feb 23 02:38:12 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 23 02:38:12 localhost systemd[1]: libpod-conmon-33d221bd21554400a3e6e6b5ca833578649f57a415c0d81794f2e39d6d3f1c41.scope: Deactivated successfully.
Feb 23 02:38:13 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 23 02:38:13 localhost systemd[1]: systemd-coredump.socket: Deactivated successfully.
Feb 23 02:38:13 localhost systemd[1]: Closed Process Core Dump Socket.
Feb 23 02:38:13 localhost systemd[1]: Stopping Process Core Dump Socket...
Feb 23 02:38:13 localhost systemd[1]: Listening on Process Core Dump Socket.
Feb 23 02:38:13 localhost systemd[1]: Reloading.
Feb 23 02:38:13 localhost systemd-rc-local-generator[28316]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 02:38:13 localhost systemd-sysv-generator[28319]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 02:38:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 02:38:13 localhost systemd[1]: Reloading.
Feb 23 02:38:13 localhost systemd-rc-local-generator[28350]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 02:38:13 localhost systemd-sysv-generator[28355]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 02:38:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 02:38:35 localhost sshd[28362]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:38:41 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 23 02:38:41 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 23 02:38:42 localhost podman[28690]:
Feb 23 02:38:42 localhost podman[28690]: 2026-02-23 07:38:42.044503426 +0000 UTC m=+0.044058280 container create a582d25f1b3d12c75dbe4095cd1cd0fc082ac49534180afbf0dce1b95ff0452b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_lewin, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, distribution-scope=public, architecture=x86_64, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, vcs-type=git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 23 02:38:42 localhost systemd[1]: Started libpod-conmon-a582d25f1b3d12c75dbe4095cd1cd0fc082ac49534180afbf0dce1b95ff0452b.scope.
Feb 23 02:38:42 localhost systemd[1]: Started libcrun container.
Feb 23 02:38:42 localhost podman[28690]: 2026-02-23 07:38:42.11328222 +0000 UTC m=+0.112837094 container init a582d25f1b3d12c75dbe4095cd1cd0fc082ac49534180afbf0dce1b95ff0452b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_lewin, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, io.buildah.version=1.42.2, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, distribution-scope=public, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 23 02:38:42 localhost podman[28690]: 2026-02-23 07:38:42.123415873 +0000 UTC m=+0.122970727 container start a582d25f1b3d12c75dbe4095cd1cd0fc082ac49534180afbf0dce1b95ff0452b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_lewin, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, ceph=True)
Feb 23 02:38:42 localhost podman[28690]: 2026-02-23 07:38:42.123636778 +0000 UTC m=+0.123191622 container attach a582d25f1b3d12c75dbe4095cd1cd0fc082ac49534180afbf0dce1b95ff0452b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_lewin, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, release=1770267347, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, vcs-type=git, io.buildah.version=1.42.2, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, name=rhceph, GIT_CLEAN=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 23 02:38:42 localhost podman[28690]: 2026-02-23 07:38:42.024775671 +0000 UTC m=+0.024330525 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 02:38:42 localhost goofy_lewin[28706]: 167 167
Feb 23 02:38:42 localhost systemd[1]: libpod-a582d25f1b3d12c75dbe4095cd1cd0fc082ac49534180afbf0dce1b95ff0452b.scope: Deactivated successfully.
Feb 23 02:38:42 localhost podman[28690]: 2026-02-23 07:38:42.127164424 +0000 UTC m=+0.126719268 container died a582d25f1b3d12c75dbe4095cd1cd0fc082ac49534180afbf0dce1b95ff0452b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_lewin, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, release=1770267347, com.redhat.component=rhceph-container, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, vcs-type=git, io.buildah.version=1.42.2, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 23 02:38:42 localhost podman[28711]: 2026-02-23 07:38:42.217068716 +0000 UTC m=+0.077619928 container remove a582d25f1b3d12c75dbe4095cd1cd0fc082ac49534180afbf0dce1b95ff0452b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_lewin, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, name=rhceph, ceph=True, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 23 02:38:42 localhost systemd[1]: libpod-conmon-a582d25f1b3d12c75dbe4095cd1cd0fc082ac49534180afbf0dce1b95ff0452b.scope: Deactivated successfully.
Feb 23 02:38:42 localhost systemd[1]: Reloading.
Feb 23 02:38:42 localhost systemd-rc-local-generator[28749]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 02:38:42 localhost systemd-sysv-generator[28753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 02:38:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 02:38:42 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 23 02:38:42 localhost systemd[1]: Reloading.
Feb 23 02:38:42 localhost systemd-rc-local-generator[28789]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 02:38:42 localhost systemd-sysv-generator[28792]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 02:38:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 02:38:42 localhost systemd[1]: Reached target All Ceph clusters and services.
Feb 23 02:38:42 localhost systemd[1]: Reloading.
Feb 23 02:38:42 localhost systemd-rc-local-generator[28833]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 02:38:42 localhost systemd-sysv-generator[28836]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 02:38:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 02:38:43 localhost systemd[1]: Reached target Ceph cluster f1fea371-cb69-578d-a3d0-b5c472a84b46.
Feb 23 02:38:43 localhost systemd[1]: Reloading.
Feb 23 02:38:43 localhost systemd-sysv-generator[28877]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 02:38:43 localhost systemd-rc-local-generator[28874]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 02:38:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 02:38:43 localhost systemd[1]: Reloading.
Feb 23 02:38:43 localhost systemd-rc-local-generator[28911]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 02:38:43 localhost systemd-sysv-generator[28916]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 02:38:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 02:38:43 localhost systemd[1]: Created slice Slice /system/ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46.
Feb 23 02:38:43 localhost systemd[1]: Reached target System Time Set.
Feb 23 02:38:43 localhost systemd[1]: Reached target System Time Synchronized.
Feb 23 02:38:43 localhost systemd[1]: Starting Ceph crash.np0005626465 for f1fea371-cb69-578d-a3d0-b5c472a84b46...
Feb 23 02:38:43 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 23 02:38:43 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 23 02:38:43 localhost podman[28975]:
Feb 23 02:38:43 localhost podman[28975]: 2026-02-23 07:38:43.861785348 +0000 UTC m=+0.064810079 container create 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, RELEASE=main, build-date=2026-02-09T10:25:24Z, architecture=x86_64, io.openshift.tags=rhceph ceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vendor=Red Hat, Inc., vcs-type=git, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 23 02:38:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431d288c9abf4ab24667b45fa312564f701c57e376b9561c29a61ddcdff57dee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431d288c9abf4ab24667b45fa312564f701c57e376b9561c29a61ddcdff57dee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431d288c9abf4ab24667b45fa312564f701c57e376b9561c29a61ddcdff57dee/merged/etc/ceph/ceph.client.crash.np0005626465.keyring supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:43 localhost podman[28975]: 2026-02-23 07:38:43.831433868 +0000 UTC m=+0.034458629 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 02:38:43 localhost podman[28975]: 2026-02-23 07:38:43.95711577 +0000 UTC m=+0.160140501 container init 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, release=1770267347, description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , ceph=True, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, distribution-scope=public, com.redhat.component=rhceph-container)
Feb 23 02:38:43 localhost podman[28975]: 2026-02-23 07:38:43.969004256 +0000 UTC m=+0.172028987 container start 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vendor=Red Hat, Inc., ceph=True, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, GIT_CLEAN=True, maintainer=Guillaume Abrioux )
Feb 23 02:38:43 localhost bash[28975]: 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd
Feb 23 02:38:43 localhost systemd[1]: Started Ceph crash.np0005626465 for f1fea371-cb69-578d-a3d0-b5c472a84b46.
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: INFO:ceph-crash:pinging cluster to exercise our key, trying key client.crash.np0005626465.
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: cluster:
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: id: f1fea371-cb69-578d-a3d0-b5c472a84b46
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: health: HEALTH_WARN
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: OSD count 0 < osd_pool_default_size 3
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]:
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: services:
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: mon: 3 daemons, quorum np0005626459,np0005626461,np0005626460 (age 12s)
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: mgr: np0005626459.pmtxxl(active, since 2m), standbys: np0005626461.lrfquh, np0005626460.fyrady
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: osd: 0 osds: 0 up, 0 in
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]:
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: data:
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: pools: 0 pools, 0 pgs
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: objects: 0 objects, 0 B
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: usage: 0 B used, 0 B / 0 B avail
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: pgs:
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]:
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: progress:
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: Updating crash deployment (+4 -> 6) (7s)
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: [=====================.......] (remaining: 2s)
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]:
Feb 23 02:38:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465[28989]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Feb 23 02:38:44 localhost podman[29085]:
Feb 23 02:38:44 localhost podman[29085]: 2026-02-23 07:38:44.713641733 +0000 UTC m=+0.062979365 container create 40c2f938df76092c579a620b1ddfad3a1489b7f48754fe0ef2b99b11ff0df29d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_allen, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.42.2, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, maintainer=Guillaume Abrioux , version=7, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 23 02:38:44 localhost systemd[1]: Started libpod-conmon-40c2f938df76092c579a620b1ddfad3a1489b7f48754fe0ef2b99b11ff0df29d.scope.
Feb 23 02:38:44 localhost systemd[1]: Started libcrun container.
Feb 23 02:38:44 localhost podman[29085]: 2026-02-23 07:38:44.78376366 +0000 UTC m=+0.133101302 container init 40c2f938df76092c579a620b1ddfad3a1489b7f48754fe0ef2b99b11ff0df29d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_allen, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, architecture=x86_64, GIT_BRANCH=main, version=7, distribution-scope=public, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , name=rhceph, io.openshift.tags=rhceph ceph, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container)
Feb 23 02:38:44 localhost podman[29085]: 2026-02-23 07:38:44.68436421 +0000 UTC m=+0.033701842 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 02:38:44 localhost podman[29085]: 2026-02-23 07:38:44.793636337 +0000 UTC m=+0.142973969 container start 40c2f938df76092c579a620b1ddfad3a1489b7f48754fe0ef2b99b11ff0df29d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_allen, GIT_CLEAN=True, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 23 02:38:44 localhost podman[29085]: 2026-02-23 07:38:44.793856313 +0000 UTC m=+0.143194005 container attach 40c2f938df76092c579a620b1ddfad3a1489b7f48754fe0ef2b99b11ff0df29d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_allen, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, release=1770267347, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, version=7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True)
Feb 23 02:38:44 localhost laughing_allen[29100]: 167 167
Feb 23 02:38:44 localhost systemd[1]: libpod-40c2f938df76092c579a620b1ddfad3a1489b7f48754fe0ef2b99b11ff0df29d.scope: Deactivated successfully.
Feb 23 02:38:44 localhost podman[29085]: 2026-02-23 07:38:44.796958117 +0000 UTC m=+0.146295779 container died 40c2f938df76092c579a620b1ddfad3a1489b7f48754fe0ef2b99b11ff0df29d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_allen, name=rhceph, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, version=7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 23 02:38:44 localhost podman[29105]: 2026-02-23 07:38:44.894976414 +0000 UTC m=+0.083263453 container remove 40c2f938df76092c579a620b1ddfad3a1489b7f48754fe0ef2b99b11ff0df29d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_allen, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, ceph=True, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, com.redhat.component=rhceph-container, GIT_BRANCH=main, release=1770267347, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 23 02:38:44 localhost systemd[1]: libpod-conmon-40c2f938df76092c579a620b1ddfad3a1489b7f48754fe0ef2b99b11ff0df29d.scope: Deactivated successfully.
Feb 23 02:38:44 localhost systemd[1]: tmp-crun.ZVVBpI.mount: Deactivated successfully.
Feb 23 02:38:44 localhost systemd[1]: var-lib-containers-storage-overlay-3d2350688cf0bd11eedeba696e0d31e1f9021968b683a40e4c4c9731249e0ac1-merged.mount: Deactivated successfully.
Feb 23 02:38:45 localhost podman[29126]:
Feb 23 02:38:45 localhost podman[29126]: 2026-02-23 07:38:45.102899774 +0000 UTC m=+0.071218473 container create 64471334d03aa958b103d0616b68c43eaafee34237ca769020514771295f1e95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_hoover, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, maintainer=Guillaume Abrioux , ceph=True, GIT_CLEAN=True, release=1770267347, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=)
Feb 23 02:38:45 localhost systemd[1]: Started libpod-conmon-64471334d03aa958b103d0616b68c43eaafee34237ca769020514771295f1e95.scope.
Feb 23 02:38:45 localhost systemd[1]: Started libcrun container.
Feb 23 02:38:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f13a05c2ec96c53f7fca073973a21369fa60da8dcc92cd0143af57abaaeb7e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:45 localhost podman[29126]: 2026-02-23 07:38:45.074215185 +0000 UTC m=+0.042533904 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 02:38:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f13a05c2ec96c53f7fca073973a21369fa60da8dcc92cd0143af57abaaeb7e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f13a05c2ec96c53f7fca073973a21369fa60da8dcc92cd0143af57abaaeb7e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f13a05c2ec96c53f7fca073973a21369fa60da8dcc92cd0143af57abaaeb7e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f13a05c2ec96c53f7fca073973a21369fa60da8dcc92cd0143af57abaaeb7e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:45 localhost podman[29126]: 2026-02-23 07:38:45.224256092 +0000 UTC m=+0.192574781 container init 64471334d03aa958b103d0616b68c43eaafee34237ca769020514771295f1e95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_hoover, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, version=7, vendor=Red Hat, Inc.,
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, release=1770267347, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, name=rhceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, maintainer=Guillaume Abrioux , distribution-scope=public, RELEASE=main)
Feb 23 02:38:45 localhost podman[29126]: 2026-02-23 07:38:45.234497699 +0000 UTC m=+0.202816388 container start 64471334d03aa958b103d0616b68c43eaafee34237ca769020514771295f1e95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_hoover, distribution-scope=public, architecture=x86_64, com.redhat.component=rhceph-container, name=rhceph, io.openshift.tags=rhceph ceph, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, version=7, vcs-type=git, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0,
org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347)
Feb 23 02:38:45 localhost podman[29126]: 2026-02-23 07:38:45.235103063 +0000 UTC m=+0.203421752 container attach 64471334d03aa958b103d0616b68c43eaafee34237ca769020514771295f1e95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_hoover, name=rhceph, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 23 02:38:45 localhost angry_hoover[29141]: --> passed data devices: 0 physical, 2 LVM
Feb 23 02:38:45 localhost angry_hoover[29141]: --> relative data size: 1.0
Feb 23 02:38:45 localhost angry_hoover[29141]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 23 02:38:45 localhost angry_hoover[29141]: Running command: /usr/bin/ceph --cluster ceph
--name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 7c88b276-d2e3-48a7-91d4-30742e429227
Feb 23 02:38:46 localhost lvm[29195]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 23 02:38:46 localhost lvm[29195]: VG ceph_vg0 finished
Feb 23 02:38:46 localhost angry_hoover[29141]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 23 02:38:46 localhost angry_hoover[29141]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Feb 23 02:38:46 localhost angry_hoover[29141]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Feb 23 02:38:46 localhost angry_hoover[29141]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 23 02:38:46 localhost angry_hoover[29141]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 23 02:38:46 localhost angry_hoover[29141]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Feb 23 02:38:46 localhost angry_hoover[29141]: stderr: got monmap epoch 3
Feb 23 02:38:46 localhost angry_hoover[29141]: --> Creating keyring file for osd.0
Feb 23 02:38:46 localhost angry_hoover[29141]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Feb 23 02:38:46 localhost angry_hoover[29141]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Feb 23 02:38:46 localhost angry_hoover[29141]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 7c88b276-d2e3-48a7-91d4-30742e429227 --setuser ceph --setgroup ceph
Feb 23 02:38:49 localhost angry_hoover[29141]: stderr: 2026-02-23T07:38:46.753+0000 7ff52a66fa80 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to
decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Feb 23 02:38:49 localhost angry_hoover[29141]: stderr: 2026-02-23T07:38:46.754+0000 7ff52a66fa80 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Feb 23 02:38:49 localhost angry_hoover[29141]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Feb 23 02:38:49 localhost angry_hoover[29141]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 23 02:38:49 localhost angry_hoover[29141]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb 23 02:38:49 localhost angry_hoover[29141]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 23 02:38:49 localhost angry_hoover[29141]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb 23 02:38:49 localhost angry_hoover[29141]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 23 02:38:49 localhost angry_hoover[29141]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 23 02:38:49 localhost angry_hoover[29141]: --> ceph-volume lvm activate successful for osd ID: 0
Feb 23 02:38:49 localhost angry_hoover[29141]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Feb 23 02:38:49 localhost angry_hoover[29141]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 23 02:38:49 localhost angry_hoover[29141]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 297f7b49-6343-46a1-9f7d-f773f75868d6
Feb 23 02:38:50 localhost lvm[30130]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 23 02:38:50 localhost lvm[30130]: VG ceph_vg1 finished
Feb 23 02:38:50 localhost angry_hoover[29141]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 23 02:38:50 localhost angry_hoover[29141]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
Feb 23 02:38:50 localhost angry_hoover[29141]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Feb 23 02:38:50 localhost angry_hoover[29141]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 23 02:38:50 localhost angry_hoover[29141]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-3/block
Feb 23 02:38:50 localhost angry_hoover[29141]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
Feb 23 02:38:50 localhost angry_hoover[29141]: stderr: got monmap epoch 3
Feb 23 02:38:50 localhost angry_hoover[29141]: --> Creating keyring file for osd.3
Feb 23 02:38:50 localhost angry_hoover[29141]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
Feb 23 02:38:50 localhost angry_hoover[29141]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
Feb 23 02:38:50 localhost angry_hoover[29141]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid 297f7b49-6343-46a1-9f7d-f773f75868d6 --setuser ceph --setgroup ceph
Feb 23 02:38:52 localhost angry_hoover[29141]: stderr: 2026-02-23T07:38:50.738+0000 7fd96f29fa80 -1 bluestore(/var/lib/ceph/osd/ceph-3//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Feb 23 02:38:52 localhost
angry_hoover[29141]: stderr: 2026-02-23T07:38:50.738+0000 7fd96f29fa80 -1 bluestore(/var/lib/ceph/osd/ceph-3/) _read_fsid unparsable uuid
Feb 23 02:38:52 localhost angry_hoover[29141]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Feb 23 02:38:52 localhost angry_hoover[29141]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Feb 23 02:38:52 localhost angry_hoover[29141]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-3 --no-mon-config
Feb 23 02:38:53 localhost angry_hoover[29141]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-3/block
Feb 23 02:38:53 localhost angry_hoover[29141]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
Feb 23 02:38:53 localhost angry_hoover[29141]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 23 02:38:53 localhost angry_hoover[29141]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Feb 23 02:38:53 localhost angry_hoover[29141]: --> ceph-volume lvm activate successful for osd ID: 3
Feb 23 02:38:53 localhost angry_hoover[29141]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Feb 23 02:38:53 localhost systemd[1]: libpod-64471334d03aa958b103d0616b68c43eaafee34237ca769020514771295f1e95.scope: Deactivated successfully.
Feb 23 02:38:53 localhost systemd[1]: libpod-64471334d03aa958b103d0616b68c43eaafee34237ca769020514771295f1e95.scope: Consumed 3.497s CPU time.
Feb 23 02:38:53 localhost podman[29126]: 2026-02-23 07:38:53.09511661 +0000 UTC m=+8.063435349 container died 64471334d03aa958b103d0616b68c43eaafee34237ca769020514771295f1e95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_hoover, description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, distribution-scope=public, RELEASE=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, version=7, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347)
Feb 23 02:38:53 localhost systemd[1]: var-lib-containers-storage-overlay-9f13a05c2ec96c53f7fca073973a21369fa60da8dcc92cd0143af57abaaeb7e4-merged.mount: Deactivated successfully.
Feb 23 02:38:53 localhost podman[31036]: 2026-02-23 07:38:53.188112806 +0000 UTC m=+0.081482961 container remove 64471334d03aa958b103d0616b68c43eaafee34237ca769020514771295f1e95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_hoover, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, ceph=True, RELEASE=main, version=7, release=1770267347, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 23 02:38:53 localhost systemd[1]: libpod-conmon-64471334d03aa958b103d0616b68c43eaafee34237ca769020514771295f1e95.scope: Deactivated successfully.
Feb 23 02:38:53 localhost podman[31115]:
Feb 23 02:38:53 localhost podman[31115]: 2026-02-23 07:38:53.91597845 +0000 UTC m=+0.070914196 container create a017ce1cce76512575a73fdb25a208f1a5ad36ccf66791cb18c113775acc9e82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_johnson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, vcs-type=git, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, ceph=True, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7)
Feb 23 02:38:53 localhost systemd[1]: Started libpod-conmon-a017ce1cce76512575a73fdb25a208f1a5ad36ccf66791cb18c113775acc9e82.scope.
Feb 23 02:38:53 localhost systemd[1]: Started libcrun container.
Feb 23 02:38:53 localhost podman[31115]: 2026-02-23 07:38:53.981529226 +0000 UTC m=+0.136464962 container init a017ce1cce76512575a73fdb25a208f1a5ad36ccf66791cb18c113775acc9e82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_johnson, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, architecture=x86_64, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-type=git, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, version=7, RELEASE=main, name=rhceph, ceph=True, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 23 02:38:53 localhost podman[31115]: 2026-02-23 07:38:53.886563942 +0000 UTC m=+0.041499728 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 02:38:53 localhost podman[31115]: 2026-02-23 07:38:53.990795369 +0000 UTC m=+0.145731115 container start a017ce1cce76512575a73fdb25a208f1a5ad36ccf66791cb18c113775acc9e82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_johnson, release=1770267347, vcs-type=git, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vendor=Red Hat, Inc.,
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, CEPH_POINT_RELEASE=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, ceph=True, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main)
Feb 23 02:38:53 localhost podman[31115]: 2026-02-23 07:38:53.991084036 +0000 UTC m=+0.146019812 container attach a017ce1cce76512575a73fdb25a208f1a5ad36ccf66791cb18c113775acc9e82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_johnson, io.buildah.version=1.42.2, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, name=rhceph, vendor=Red Hat, Inc., ceph=True, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container,
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=1770267347, GIT_CLEAN=True, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 23 02:38:53 localhost distracted_johnson[31131]: 167 167
Feb 23 02:38:53 localhost systemd[1]: libpod-a017ce1cce76512575a73fdb25a208f1a5ad36ccf66791cb18c113775acc9e82.scope: Deactivated successfully.
Feb 23 02:38:53 localhost podman[31115]: 2026-02-23 07:38:53.994711433 +0000 UTC m=+0.149647169 container died a017ce1cce76512575a73fdb25a208f1a5ad36ccf66791cb18c113775acc9e82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_johnson, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, build-date=2026-02-09T10:25:24Z, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64,
maintainer=Guillaume Abrioux )
Feb 23 02:38:54 localhost podman[31136]: 2026-02-23 07:38:54.079386319 +0000 UTC m=+0.076243794 container remove a017ce1cce76512575a73fdb25a208f1a5ad36ccf66791cb18c113775acc9e82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_johnson, GIT_BRANCH=main, version=7, name=rhceph, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7)
Feb 23 02:38:54 localhost systemd[1]: libpod-conmon-a017ce1cce76512575a73fdb25a208f1a5ad36ccf66791cb18c113775acc9e82.scope: Deactivated successfully.
Feb 23 02:38:54 localhost systemd[1]: var-lib-containers-storage-overlay-d69399e55a53d236b9f5e5fab5595ff7eb14cd173148a89e8c97fdaa3763fc06-merged.mount: Deactivated successfully.
Feb 23 02:38:54 localhost podman[31159]:
Feb 23 02:38:54 localhost podman[31159]: 2026-02-23 07:38:54.297081234 +0000 UTC m=+0.079462561 container create 7cc7111e48eeba7af9cb5a49d5228295bcbe2aeff56b5c3b9a082d9fb70231ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_solomon, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, RELEASE=main, CEPH_POINT_RELEASE=, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, name=rhceph, distribution-scope=public, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, vcs-type=git)
Feb 23 02:38:54 localhost systemd[1]: Started libpod-conmon-7cc7111e48eeba7af9cb5a49d5228295bcbe2aeff56b5c3b9a082d9fb70231ca.scope.
Feb 23 02:38:54 localhost systemd[1]: Started libcrun container.
Feb 23 02:38:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b704b00839455abebd30b9cac8bbae364c82d5519a3fc7b571a7b5442b423917/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:54 localhost podman[31159]: 2026-02-23 07:38:54.26277973 +0000 UTC m=+0.045161077 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 02:38:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b704b00839455abebd30b9cac8bbae364c82d5519a3fc7b571a7b5442b423917/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b704b00839455abebd30b9cac8bbae364c82d5519a3fc7b571a7b5442b423917/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:54 localhost podman[31159]: 2026-02-23 07:38:54.398954454 +0000 UTC m=+0.181335791 container init 7cc7111e48eeba7af9cb5a49d5228295bcbe2aeff56b5c3b9a082d9fb70231ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_solomon, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.42.2, vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, distribution-scope=public,
GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 23 02:38:54 localhost podman[31159]: 2026-02-23 07:38:54.408575966 +0000 UTC m=+0.190957293 container start 7cc7111e48eeba7af9cb5a49d5228295bcbe2aeff56b5c3b9a082d9fb70231ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_solomon, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, ceph=True, version=7, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main)
Feb 23 02:38:54 localhost podman[31159]: 2026-02-23 07:38:54.408820752 +0000 UTC m=+0.191202079 container attach 7cc7111e48eeba7af9cb5a49d5228295bcbe2aeff56b5c3b9a082d9fb70231ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_solomon, vcs-type=git,
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.buildah.version=1.42.2, release=1770267347, version=7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_CLEAN=True)
Feb 23 02:38:54 localhost condescending_solomon[31174]: {
Feb 23 02:38:54 localhost condescending_solomon[31174]: "0": [
Feb 23 02:38:54 localhost condescending_solomon[31174]: {
Feb 23 02:38:54 localhost condescending_solomon[31174]: "devices": [
Feb 23 02:38:54 localhost condescending_solomon[31174]: "/dev/loop3"
Feb 23 02:38:54 localhost condescending_solomon[31174]: ],
Feb 23 02:38:54 localhost condescending_solomon[31174]: "lv_name": "ceph_lv0",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "lv_size": "7511998464",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=e1NUtM-ENau-dsAa-K69y-kXKX-QXa1-tvHrK2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f1fea371-cb69-578d-a3d0-b5c472a84b46,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7c88b276-d2e3-48a7-91d4-30742e429227,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "lv_uuid": "e1NUtM-ENau-dsAa-K69y-kXKX-QXa1-tvHrK2",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "name": "ceph_lv0",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "path": "/dev/ceph_vg0/ceph_lv0",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "tags": {
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.block_uuid": "e1NUtM-ENau-dsAa-K69y-kXKX-QXa1-tvHrK2",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.cephx_lockbox_secret": "",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.cluster_fsid": "f1fea371-cb69-578d-a3d0-b5c472a84b46",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.cluster_name": "ceph",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.crush_device_class": "",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.encrypted": "0",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.osd_fsid": "7c88b276-d2e3-48a7-91d4-30742e429227",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.osd_id": "0",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.osdspec_affinity": "default_drive_group",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.type": "block",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.vdo": "0"
Feb 23 02:38:54 localhost condescending_solomon[31174]: },
Feb 23 02:38:54 localhost condescending_solomon[31174]: "type": "block",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "vg_name": "ceph_vg0"
Feb 23 02:38:54 localhost condescending_solomon[31174]: }
Feb 23 02:38:54 localhost condescending_solomon[31174]: ],
Feb 23 02:38:54 localhost condescending_solomon[31174]: "3": [
Feb 23 02:38:54 localhost condescending_solomon[31174]: {
Feb 23 02:38:54 localhost condescending_solomon[31174]: "devices": [
Feb 23 02:38:54 localhost condescending_solomon[31174]: "/dev/loop4"
Feb 23 02:38:54 localhost condescending_solomon[31174]: ],
Feb 23 02:38:54 localhost condescending_solomon[31174]: "lv_name": "ceph_lv1",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "lv_size": "7511998464",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=VCfQ9G-jMsC-0zkh-2YXH-A2Mn-BWzt-IhEnjw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f1fea371-cb69-578d-a3d0-b5c472a84b46,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=297f7b49-6343-46a1-9f7d-f773f75868d6,ceph.osd_id=3,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "lv_uuid": "VCfQ9G-jMsC-0zkh-2YXH-A2Mn-BWzt-IhEnjw",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "name": "ceph_lv1",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "path": "/dev/ceph_vg1/ceph_lv1",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "tags": {
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.block_uuid": "VCfQ9G-jMsC-0zkh-2YXH-A2Mn-BWzt-IhEnjw",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.cephx_lockbox_secret": "",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.cluster_fsid": "f1fea371-cb69-578d-a3d0-b5c472a84b46",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.cluster_name": "ceph",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.crush_device_class": "",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.encrypted": "0",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.osd_fsid": "297f7b49-6343-46a1-9f7d-f773f75868d6",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.osd_id": "3",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.osdspec_affinity": "default_drive_group",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.type": "block",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "ceph.vdo": "0"
Feb 23 02:38:54 localhost condescending_solomon[31174]: },
Feb 23 02:38:54 localhost condescending_solomon[31174]: "type": "block",
Feb 23 02:38:54 localhost condescending_solomon[31174]: "vg_name": "ceph_vg1"
Feb 23 02:38:54 localhost condescending_solomon[31174]: }
Feb 23 02:38:54 localhost condescending_solomon[31174]: ]
Feb 23 02:38:54 localhost condescending_solomon[31174]: }
Feb 23 02:38:54 localhost systemd[1]: libpod-7cc7111e48eeba7af9cb5a49d5228295bcbe2aeff56b5c3b9a082d9fb70231ca.scope: Deactivated successfully.
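The JSON emitted by the container above appears to be `ceph-volume lvm list --format json` output: OSD ids ("0", "3") map to their backing LVs, and each `lv_tags` string is the flattened, comma-separated form of the `tags` object printed just after it. A minimal sketch of splitting such a string back into key/value pairs (it works here only because these tag values contain no embedded commas):

```python
def parse_lv_tags(lv_tags: str) -> dict:
    """Split a ceph-volume lv_tags string ("k=v,k=v,...") into a dict.

    Assumes values contain no commas, which holds for the tags in the
    log above (empty values like ceph.crush_device_class= are kept).
    """
    tags = {}
    for pair in lv_tags.split(","):
        key, _, value = pair.partition("=")
        tags[key] = value
    return tags

sample = ("ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
          "ceph.cluster_name=ceph,ceph.crush_device_class=,"
          "ceph.encrypted=0,ceph.osd_id=0,ceph.type=block")
parsed = parse_lv_tags(sample)
print(parsed["ceph.osd_id"])  # 0
```

The parsed dict matches the shape of the `tags` object in the log, where every value is a string (including "0" for numeric-looking fields).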
Feb 23 02:38:54 localhost podman[31159]: 2026-02-23 07:38:54.743130001 +0000 UTC m=+0.525511308 container died 7cc7111e48eeba7af9cb5a49d5228295bcbe2aeff56b5c3b9a082d9fb70231ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_solomon, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 23 02:38:54 localhost podman[31183]: 2026-02-23 07:38:54.836033145 +0000 UTC m=+0.083145340 container remove 7cc7111e48eeba7af9cb5a49d5228295bcbe2aeff56b5c3b9a082d9fb70231ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_solomon, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.42.2, ceph=True) Feb 23 02:38:54 localhost systemd[1]: libpod-conmon-7cc7111e48eeba7af9cb5a49d5228295bcbe2aeff56b5c3b9a082d9fb70231ca.scope: Deactivated successfully. Feb 23 02:38:55 localhost systemd[1]: var-lib-containers-storage-overlay-b704b00839455abebd30b9cac8bbae364c82d5519a3fc7b571a7b5442b423917-merged.mount: Deactivated successfully. 
Feb 23 02:38:55 localhost podman[31270]: Feb 23 02:38:55 localhost podman[31270]: 2026-02-23 07:38:55.563589071 +0000 UTC m=+0.071632753 container create 5b10a61f1cd5fc73b5542d184bd338fa08a73af9de946fd61937642ea4167c03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_mclaren, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, distribution-scope=public, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, version=7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, RELEASE=main, name=rhceph, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-type=git) Feb 23 02:38:55 localhost systemd[1]: Started libpod-conmon-5b10a61f1cd5fc73b5542d184bd338fa08a73af9de946fd61937642ea4167c03.scope. Feb 23 02:38:55 localhost systemd[1]: Started libcrun container. 
Feb 23 02:38:55 localhost podman[31270]: 2026-02-23 07:38:55.631982557 +0000 UTC m=+0.140026239 container init 5b10a61f1cd5fc73b5542d184bd338fa08a73af9de946fd61937642ea4167c03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_mclaren, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, name=rhceph, io.openshift.tags=rhceph ceph, version=7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, release=1770267347, RELEASE=main, CEPH_POINT_RELEASE=) Feb 23 02:38:55 localhost podman[31270]: 2026-02-23 07:38:55.534708347 +0000 UTC m=+0.042752059 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 02:38:55 localhost podman[31270]: 2026-02-23 07:38:55.641585367 +0000 UTC m=+0.149629049 container start 5b10a61f1cd5fc73b5542d184bd338fa08a73af9de946fd61937642ea4167c03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_mclaren, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, 
description=Red Hat Ceph Storage 7, release=1770267347, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, vcs-type=git, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Feb 23 02:38:55 localhost podman[31270]: 2026-02-23 07:38:55.641818212 +0000 UTC m=+0.149861894 container attach 5b10a61f1cd5fc73b5542d184bd338fa08a73af9de946fd61937642ea4167c03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_mclaren, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , RELEASE=main, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, architecture=x86_64, name=rhceph, GIT_CLEAN=True, distribution-scope=public, vendor=Red Hat, Inc., version=7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph 
Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7) Feb 23 02:38:55 localhost upbeat_mclaren[31286]: 167 167 Feb 23 02:38:55 localhost systemd[1]: libpod-5b10a61f1cd5fc73b5542d184bd338fa08a73af9de946fd61937642ea4167c03.scope: Deactivated successfully. Feb 23 02:38:55 localhost podman[31270]: 2026-02-23 07:38:55.644352054 +0000 UTC m=+0.152395786 container died 5b10a61f1cd5fc73b5542d184bd338fa08a73af9de946fd61937642ea4167c03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_mclaren, io.buildah.version=1.42.2, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vcs-type=git, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., version=7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, distribution-scope=public) Feb 23 02:38:55 localhost podman[31291]: 2026-02-23 
07:38:55.733514868 +0000 UTC m=+0.074878812 container remove 5b10a61f1cd5fc73b5542d184bd338fa08a73af9de946fd61937642ea4167c03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_mclaren, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , GIT_BRANCH=main, name=rhceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, version=7) Feb 23 02:38:55 localhost systemd[1]: libpod-conmon-5b10a61f1cd5fc73b5542d184bd338fa08a73af9de946fd61937642ea4167c03.scope: Deactivated successfully. 
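Each short-lived helper container in this log produces the same podman event sequence (create, init, start, attach, died, remove), every event stamped with the 64-hex container ID and the full image label set. A small hypothetical parser (not part of podman or cephadm) for pulling the event type and container ID out of one of these lines:

```python
import re

# Matches the podman event lines above:
# "... container <event> <64-hex-id> (image=<ref>, ..."
EVENT_RE = re.compile(r"container (\w+) ([0-9a-f]{64}) \(image=([^,]+)")

line = ("2026-02-23 07:38:55.563589071 +0000 UTC m=+0.071632753 container create "
        "5b10a61f1cd5fc73b5542d184bd338fa08a73af9de946fd61937642ea4167c03 "
        "(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_mclaren)")

m = EVENT_RE.search(line)
print(m.group(1))        # create
print(m.group(2)[:12])   # 5b10a61f1cd5  (short ID, as podman ps shows it)
print(m.group(3))        # registry.redhat.io/rhceph/rhceph-7-rhel9:latest
```

Grouping such parsed events by container ID is one way to reconstruct each helper's lifetime from a journal like this one.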
Feb 23 02:38:56 localhost podman[31318]: Feb 23 02:38:56 localhost podman[31318]: 2026-02-23 07:38:56.051845932 +0000 UTC m=+0.073961079 container create 639d215807f8b679062eb31cdfc06a7134960c19afe7a74f0cda55e7fcd503c7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate-test, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.component=rhceph-container, vcs-type=git, io.buildah.version=1.42.2, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph) Feb 23 02:38:56 localhost systemd[1]: Started libpod-conmon-639d215807f8b679062eb31cdfc06a7134960c19afe7a74f0cda55e7fcd503c7.scope. Feb 23 02:38:56 localhost systemd[1]: Started libcrun container. 
Feb 23 02:38:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab4f6d92529ec1f21befc3ae3e0ee974e45e0f44aae06a00472aa2a4276c199/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 23 02:38:56 localhost podman[31318]: 2026-02-23 07:38:56.024206239 +0000 UTC m=+0.046321396 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 02:38:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab4f6d92529ec1f21befc3ae3e0ee974e45e0f44aae06a00472aa2a4276c199/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 23 02:38:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab4f6d92529ec1f21befc3ae3e0ee974e45e0f44aae06a00472aa2a4276c199/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 23 02:38:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab4f6d92529ec1f21befc3ae3e0ee974e45e0f44aae06a00472aa2a4276c199/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 23 02:38:56 localhost systemd[1]: var-lib-containers-storage-overlay-3084a5fd35cc1ed529bc0d1702a0dbe5ca07d839b61b96f9f1e8f698d52ce94a-merged.mount: Deactivated successfully. 
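The repeated kernel lines "supports timestamps until 2038 (0x7fffffff)" refer to the ceiling of a 32-bit signed `time_t`, which is where classic (non-bigtime) XFS inode timestamps stop. The hex value in the message decodes as follows:

```python
from datetime import datetime, timezone

# 0x7fffffff is the largest 32-bit signed value; as a Unix timestamp it
# is the moment the kernel message above is warning about.
limit = 0x7FFFFFFF
print(limit)                                           # 2147483647
print(datetime.fromtimestamp(limit, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
```

The message is informational here; it is logged whenever such a filesystem is (re)mounted, as happens for every overlay mount these containers trigger.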
Feb 23 02:38:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab4f6d92529ec1f21befc3ae3e0ee974e45e0f44aae06a00472aa2a4276c199/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff) Feb 23 02:38:56 localhost podman[31318]: 2026-02-23 07:38:56.171538881 +0000 UTC m=+0.193654028 container init 639d215807f8b679062eb31cdfc06a7134960c19afe7a74f0cda55e7fcd503c7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate-test, name=rhceph, distribution-scope=public, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, ceph=True, vcs-type=git, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z) Feb 23 02:38:56 localhost podman[31318]: 2026-02-23 07:38:56.182214328 +0000 UTC m=+0.204329485 container start 639d215807f8b679062eb31cdfc06a7134960c19afe7a74f0cda55e7fcd503c7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate-test, 
maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.42.2, name=rhceph, com.redhat.component=rhceph-container, architecture=x86_64, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, distribution-scope=public, vendor=Red Hat, Inc., RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347) Feb 23 02:38:56 localhost podman[31318]: 2026-02-23 07:38:56.182558236 +0000 UTC m=+0.204673413 container attach 639d215807f8b679062eb31cdfc06a7134960c19afe7a74f0cda55e7fcd503c7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate-test, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, 
build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, io.buildah.version=1.42.2, distribution-scope=public, ceph=True, version=7, io.openshift.tags=rhceph ceph, name=rhceph)
Feb 23 02:38:56 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate-test[31333]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Feb 23 02:38:56 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate-test[31333]: [--no-systemd] [--no-tmpfs]
Feb 23 02:38:56 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate-test[31333]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb 23 02:38:56 localhost systemd[1]: libpod-639d215807f8b679062eb31cdfc06a7134960c19afe7a74f0cda55e7fcd503c7.scope: Deactivated successfully.
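The `--bad-option` rejection is the expected outcome of the `-osd-0-activate-test` container: cephadm appears to be probing which flags this ceph-volume build accepts before running the real activation (which follows immediately, as "Starting Ceph osd.0" below). The argparse behavior can be reproduced with a stand-in parser built from the usage line shown in the log; this is an illustrative sketch, not ceph-volume's actual parser:

```python
import argparse

# Stand-in parser mirroring the usage line in the log:
# ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
#                      [--no-systemd] [--no-tmpfs]
parser = argparse.ArgumentParser(prog="ceph-volume activate")
parser.add_argument("--osd-id")
parser.add_argument("--osd-uuid")
parser.add_argument("--no-systemd", action="store_true")
parser.add_argument("--no-tmpfs", action="store_true")

rejected = False
try:
    parser.parse_args(["--osd-id", "0", "--bad-option"])
except SystemExit:
    # argparse prints the usage text plus
    # "error: unrecognized arguments: --bad-option" to stderr,
    # then exits with status 2 -- matching the container log above.
    rejected = True
print(rejected)  # True
```

The non-zero exit is why the scope deactivates right after: the probe is meant to fail fast, not to activate anything.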
Feb 23 02:38:56 localhost podman[31318]: 2026-02-23 07:38:56.396998563 +0000 UTC m=+0.419113720 container died 639d215807f8b679062eb31cdfc06a7134960c19afe7a74f0cda55e7fcd503c7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate-test, RELEASE=main, build-date=2026-02-09T10:25:24Z, vcs-type=git, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, version=7, com.redhat.component=rhceph-container, release=1770267347) Feb 23 02:38:56 localhost systemd[1]: var-lib-containers-storage-overlay-aab4f6d92529ec1f21befc3ae3e0ee974e45e0f44aae06a00472aa2a4276c199-merged.mount: Deactivated successfully. Feb 23 02:38:56 localhost systemd-journald[618]: Field hash table of /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation. Feb 23 02:38:56 localhost systemd-journald[618]: /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal: Journal header limits reached or header out-of-date, rotating. 
Feb 23 02:38:56 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 23 02:38:56 localhost podman[31338]: 2026-02-23 07:38:56.496522396 +0000 UTC m=+0.086359358 container remove 639d215807f8b679062eb31cdfc06a7134960c19afe7a74f0cda55e7fcd503c7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate-test, vcs-type=git, version=7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, release=1770267347, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 02:38:56 localhost systemd[1]: libpod-conmon-639d215807f8b679062eb31cdfc06a7134960c19afe7a74f0cda55e7fcd503c7.scope: Deactivated successfully. Feb 23 02:38:56 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 23 02:38:56 localhost systemd[1]: Reloading. 
Feb 23 02:38:56 localhost systemd-sysv-generator[31398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:38:56 localhost systemd-rc-local-generator[31393]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:38:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:38:57 localhost systemd[1]: Reloading. Feb 23 02:38:57 localhost systemd-rc-local-generator[31438]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:38:57 localhost systemd-sysv-generator[31442]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:38:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:38:57 localhost systemd[1]: Starting Ceph osd.0 for f1fea371-cb69-578d-a3d0-b5c472a84b46... 
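On the `MemoryLimit=` warning repeated above: `MemoryMax=` is systemd's documented cgroup-v2 replacement for the deprecated cgroup-v1 directive. A minimal drop-in sketch that clears the legacy setting and applies the new one (the drop-in path follows the usual `<unit>.d/` convention, and 512M is a placeholder; keep whatever limit the shipped unit actually uses):

```ini
# /etc/systemd/system/insights-client-boot.service.d/override.conf
# (hypothetical override; run `systemctl daemon-reload` after creating it)
[Service]
MemoryLimit=
MemoryMax=512M
```

Packaging a fixed unit file upstream is the proper fix; a local drop-in only silences the warning until then.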
Feb 23 02:38:57 localhost podman[31500]:
Feb 23 02:38:57 localhost podman[31500]: 2026-02-23 07:38:57.648856627 +0000 UTC m=+0.073034087 container create 25103ae9296a8f40431bae2b4a74e0ec50d1f33fb4cce91efad419f052c42fe5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate, name=rhceph, io.openshift.expose-services=, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_CLEAN=True, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_BRANCH=main, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, release=1770267347)
Feb 23 02:38:57 localhost systemd[1]: Started libcrun container.
Feb 23 02:38:57 localhost podman[31500]: 2026-02-23 07:38:57.617508534 +0000 UTC m=+0.041686044 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 02:38:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50049059e958bf5e8137be51092df0eb1a022fc8a5b13202b710754609d5f43f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50049059e958bf5e8137be51092df0eb1a022fc8a5b13202b710754609d5f43f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50049059e958bf5e8137be51092df0eb1a022fc8a5b13202b710754609d5f43f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50049059e958bf5e8137be51092df0eb1a022fc8a5b13202b710754609d5f43f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50049059e958bf5e8137be51092df0eb1a022fc8a5b13202b710754609d5f43f/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:57 localhost podman[31500]: 2026-02-23 07:38:57.772601764 +0000 UTC m=+0.196779224 container init 25103ae9296a8f40431bae2b4a74e0ec50d1f33fb4cce91efad419f052c42fe5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, version=7, GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhceph, RELEASE=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 23 02:38:57 localhost podman[31500]: 2026-02-23 07:38:57.780765091 +0000 UTC m=+0.204942551 container start 25103ae9296a8f40431bae2b4a74e0ec50d1f33fb4cce91efad419f052c42fe5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=)
Feb 23 02:38:57 localhost podman[31500]: 2026-02-23 07:38:57.780967975 +0000 UTC m=+0.205145445 container attach 25103ae9296a8f40431bae2b4a74e0ec50d1f33fb4cce91efad419f052c42fe5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=)
Feb 23 02:38:58 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate[31514]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 23 02:38:58 localhost bash[31500]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 23 02:38:58 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate[31514]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Feb 23 02:38:58 localhost bash[31500]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Feb 23 02:38:58 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate[31514]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Feb 23 02:38:58 localhost bash[31500]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Feb 23 02:38:58 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate[31514]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 23 02:38:58 localhost bash[31500]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 23 02:38:58 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate[31514]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 23 02:38:58 localhost bash[31500]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 23 02:38:58 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate[31514]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 23 02:38:58 localhost bash[31500]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 23 02:38:58 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate[31514]: --> ceph-volume raw activate successful for osd ID: 0
Feb 23 02:38:58 localhost bash[31500]: --> ceph-volume raw activate successful for osd ID: 0
Feb 23 02:38:58 localhost systemd[1]: libpod-25103ae9296a8f40431bae2b4a74e0ec50d1f33fb4cce91efad419f052c42fe5.scope: Deactivated successfully.
Feb 23 02:38:58 localhost podman[31500]: 2026-02-23 07:38:58.459081553 +0000 UTC m=+0.883259023 container died 25103ae9296a8f40431bae2b4a74e0ec50d1f33fb4cce91efad419f052c42fe5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, architecture=x86_64, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , RELEASE=main, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-type=git)
Feb 23 02:38:58 localhost systemd[1]: tmp-crun.UU5Lj9.mount: Deactivated successfully.
Feb 23 02:38:58 localhost podman[31634]: 2026-02-23 07:38:58.553062482 +0000 UTC m=+0.084450022 container remove 25103ae9296a8f40431bae2b4a74e0ec50d1f33fb4cce91efad419f052c42fe5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0-activate, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, release=1770267347, com.redhat.component=rhceph-container, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, distribution-scope=public, ceph=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main)
Feb 23 02:38:58 localhost systemd[1]: var-lib-containers-storage-overlay-50049059e958bf5e8137be51092df0eb1a022fc8a5b13202b710754609d5f43f-merged.mount: Deactivated successfully.
Feb 23 02:38:58 localhost podman[31691]:
Feb 23 02:38:58 localhost podman[31691]: 2026-02-23 07:38:58.867804392 +0000 UTC m=+0.071847690 container create 79a5d399642d8c48515d1ea2cf74e1aae67b4deabb808547ea4e045aff73a214 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.openshift.expose-services=, RELEASE=main, release=1770267347, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, architecture=x86_64, version=7, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., name=rhceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Feb 23 02:38:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67f04dc576f133892cc9d1261f048da4e5ff15c5f1db0a8c991cfc2ccf92981e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67f04dc576f133892cc9d1261f048da4e5ff15c5f1db0a8c991cfc2ccf92981e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:58 localhost podman[31691]: 2026-02-23 07:38:58.839707506 +0000 UTC m=+0.043750814 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 02:38:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67f04dc576f133892cc9d1261f048da4e5ff15c5f1db0a8c991cfc2ccf92981e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67f04dc576f133892cc9d1261f048da4e5ff15c5f1db0a8c991cfc2ccf92981e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67f04dc576f133892cc9d1261f048da4e5ff15c5f1db0a8c991cfc2ccf92981e/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb 23 02:38:58 localhost podman[31691]: 2026-02-23 07:38:58.981778822 +0000 UTC m=+0.185822120 container init 79a5d399642d8c48515d1ea2cf74e1aae67b4deabb808547ea4e045aff73a214 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0, name=rhceph, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, RELEASE=main, GIT_CLEAN=True, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, release=1770267347, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 23 02:38:58 localhost podman[31691]: 2026-02-23 07:38:58.992001338 +0000 UTC m=+0.196044636 container start 79a5d399642d8c48515d1ea2cf74e1aae67b4deabb808547ea4e045aff73a214 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, release=1770267347, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 23 02:38:58 localhost bash[31691]: 79a5d399642d8c48515d1ea2cf74e1aae67b4deabb808547ea4e045aff73a214
Feb 23 02:38:58 localhost systemd[1]: Started Ceph osd.0 for f1fea371-cb69-578d-a3d0-b5c472a84b46.
Feb 23 02:38:59 localhost ceph-osd[31709]: set uid:gid to 167:167 (ceph:ceph)
Feb 23 02:38:59 localhost ceph-osd[31709]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-osd, pid 2
Feb 23 02:38:59 localhost ceph-osd[31709]: pidfile_write: ignore empty --pid-file
Feb 23 02:38:59 localhost ceph-osd[31709]: bdev(0x55907f836e00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 23 02:38:59 localhost ceph-osd[31709]: bdev(0x55907f836e00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 23 02:38:59 localhost ceph-osd[31709]: bdev(0x55907f836e00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 23 02:38:59 localhost ceph-osd[31709]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Feb 23 02:38:59 localhost ceph-osd[31709]: bdev(0x55907f837180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 23 02:38:59 localhost ceph-osd[31709]: bdev(0x55907f837180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 23 02:38:59 localhost ceph-osd[31709]: bdev(0x55907f837180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 23 02:38:59 localhost ceph-osd[31709]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Feb 23 02:38:59 localhost ceph-osd[31709]: bdev(0x55907f837180 /var/lib/ceph/osd/ceph-0/block) close
Feb 23 02:38:59 localhost ceph-osd[31709]: bdev(0x55907f836e00 /var/lib/ceph/osd/ceph-0/block) close
Feb 23 02:38:59 localhost ceph-osd[31709]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Feb 23 02:38:59 localhost ceph-osd[31709]: load: jerasure load: lrc
Feb 23 02:38:59 localhost ceph-osd[31709]: bdev(0x55907f836e00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 23 02:38:59 localhost ceph-osd[31709]: bdev(0x55907f836e00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 23 02:38:59 localhost ceph-osd[31709]: bdev(0x55907f836e00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 23 02:38:59 localhost ceph-osd[31709]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Feb 23 02:38:59 localhost ceph-osd[31709]: bdev(0x55907f836e00 /var/lib/ceph/osd/ceph-0/block) close
Feb 23 02:38:59 localhost podman[31800]:
Feb 23 02:38:59 localhost podman[31800]: 2026-02-23 07:38:59.828075123 +0000 UTC m=+0.072004771 container create 2a6342549a901557cc53b10cc5b820e498b96cc5294c24248d3cf7fccd49ad4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_ramanujan, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.tags=rhceph ceph, version=7, GIT_CLEAN=True, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main)
Feb 23 02:38:59 localhost ceph-osd[31709]: bdev(0x55907f836e00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 23 02:38:59 localhost ceph-osd[31709]: bdev(0x55907f836e00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 23 02:38:59 localhost ceph-osd[31709]: bdev(0x55907f836e00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 23 02:38:59 localhost ceph-osd[31709]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Feb 23 02:38:59 localhost ceph-osd[31709]: bdev(0x55907f836e00 /var/lib/ceph/osd/ceph-0/block) close
Feb 23 02:38:59 localhost systemd[1]: Started libpod-conmon-2a6342549a901557cc53b10cc5b820e498b96cc5294c24248d3cf7fccd49ad4e.scope.
Feb 23 02:38:59 localhost podman[31800]: 2026-02-23 07:38:59.80170636 +0000 UTC m=+0.045636008 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 02:38:59 localhost systemd[1]: Started libcrun container.
Feb 23 02:38:59 localhost podman[31800]: 2026-02-23 07:38:59.922962895 +0000 UTC m=+0.166892543 container init 2a6342549a901557cc53b10cc5b820e498b96cc5294c24248d3cf7fccd49ad4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_ramanujan, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, name=rhceph, version=7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , RELEASE=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7)
Feb 23 02:38:59 localhost systemd[1]: tmp-crun.T2vBbI.mount: Deactivated successfully.
Feb 23 02:38:59 localhost podman[31800]: 2026-02-23 07:38:59.934241627 +0000 UTC m=+0.178171275 container start 2a6342549a901557cc53b10cc5b820e498b96cc5294c24248d3cf7fccd49ad4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_ramanujan, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, ceph=True, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, architecture=x86_64)
Feb 23 02:38:59 localhost podman[31800]: 2026-02-23 07:38:59.934529033 +0000 UTC m=+0.178458691 container attach 2a6342549a901557cc53b10cc5b820e498b96cc5294c24248d3cf7fccd49ad4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_ramanujan, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, ceph=True, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=)
Feb 23 02:38:59 localhost ecstatic_ramanujan[31819]: 167 167
Feb 23 02:38:59 localhost systemd[1]: libpod-2a6342549a901557cc53b10cc5b820e498b96cc5294c24248d3cf7fccd49ad4e.scope: Deactivated successfully.
Feb 23 02:38:59 localhost podman[31800]: 2026-02-23 07:38:59.939319499 +0000 UTC m=+0.183249187 container died 2a6342549a901557cc53b10cc5b820e498b96cc5294c24248d3cf7fccd49ad4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_ramanujan, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, version=7, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, distribution-scope=public, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., name=rhceph, ceph=True, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 23 02:39:00 localhost podman[31824]: 2026-02-23 07:39:00.039053748 +0000 UTC m=+0.083999881 container remove 2a6342549a901557cc53b10cc5b820e498b96cc5294c24248d3cf7fccd49ad4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_ramanujan, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_BRANCH=main, name=rhceph, RELEASE=main, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, version=7, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 23 02:39:00 localhost systemd[1]: libpod-conmon-2a6342549a901557cc53b10cc5b820e498b96cc5294c24248d3cf7fccd49ad4e.scope: Deactivated successfully.
Feb 23 02:39:00 localhost ceph-osd[31709]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 23 02:39:00 localhost ceph-osd[31709]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb 23 02:39:00 localhost ceph-osd[31709]: bdev(0x55907f836e00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 23 02:39:00 localhost ceph-osd[31709]: bdev(0x55907f836e00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 23 02:39:00 localhost ceph-osd[31709]: bdev(0x55907f836e00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 23 02:39:00 localhost ceph-osd[31709]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Feb 23 02:39:00 localhost ceph-osd[31709]: bdev(0x55907f837180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 23 02:39:00 localhost ceph-osd[31709]: bdev(0x55907f837180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 23 02:39:00 localhost ceph-osd[31709]: bdev(0x55907f837180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 23 02:39:00 localhost ceph-osd[31709]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Feb 23 02:39:00 localhost ceph-osd[31709]: bluefs mount
Feb 23 02:39:00 localhost ceph-osd[31709]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Feb 23 02:39:00 localhost ceph-osd[31709]: bluefs mount shared_bdev_used = 0
Feb 23 02:39:00 localhost ceph-osd[31709]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: RocksDB version: 7.9.2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Git sha 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Compile date 2026-02-06 00:00:00
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: DB SUMMARY
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: DB Session ID: T2EQE2PDOXO8A319XMQX
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: CURRENT file: CURRENT
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: IDENTITY file: IDENTITY
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.error_if_exists: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.create_if_missing: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_checks: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.flush_verify_memtable_count: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.env: 0x55907facacb0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.fs: LegacyFileSystem
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.info_log: 0x5590807c2d40
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_file_opening_threads: 16
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.statistics: (nil)
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.use_fsync: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_log_file_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_manifest_file_size: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.log_file_time_to_roll: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.keep_log_file_num: 1000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.recycle_log_file_num: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.allow_fallocate: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.allow_mmap_reads: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.allow_mmap_writes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.use_direct_reads: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.create_missing_column_families: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.db_log_dir:
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.wal_dir: db.wal
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_cache_numshardbits: 6
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.WAL_ttl_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.WAL_size_limit_MB: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.manifest_preallocation_size: 4194304
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.is_fd_close_on_exec: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.advise_random_on_open: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.db_write_buffer_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_manager: 0x55907f820140
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.access_hint_on_compaction_start: 1
Feb 23 02:39:00 localhost
ceph-osd[31709]: rocksdb: Options.random_access_max_buffer_size: 1048576 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.use_adaptive_mutex: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.rate_limiter: (nil) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.wal_recovery_mode: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_thread_tracking: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_pipelined_write: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.unordered_write: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.allow_concurrent_memtable_write: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_thread_max_yield_usec: 100 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_thread_slow_yield_usec: 3 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.row_cache: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.wal_filter: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.avoid_flush_during_recovery: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.allow_ingest_behind: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.two_write_queues: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.manual_wal_flush: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.wal_compression: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.atomic_flush: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.persist_stats_to_disk: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_dbid_to_manifest: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: 
rocksdb: Options.log_readahead_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.file_checksum_gen_factory: Unknown Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.best_efforts_recovery: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.allow_data_in_errors: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.db_host_id: __hostname__ Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enforce_single_del_contracts: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_background_jobs: 4 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_background_compactions: -1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_subcompactions: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.avoid_flush_during_shutdown: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.writable_file_max_buffer_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.delayed_write_rate : 16777216 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_total_wal_size: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.stats_dump_period_sec: 600 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.stats_persist_period_sec: 600 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.stats_history_buffer_size: 1048576 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_open_files: -1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bytes_per_sync: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.wal_bytes_per_sync: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.strict_bytes_per_sync: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_readahead_size: 2097152 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_background_flushes: -1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Compression algorithms supported: Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: #011kZSTD supported: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: #011kXpressCompression supported: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: #011kBZip2Compression supported: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: #011kLZ4Compression supported: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: #011kZlibCompression supported: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: #011kLZ4HCCompression supported: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: #011kSnappyCompression supported: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Fast CRC32 supported: Supported on x86 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: DMutex implementation: pthread_mutex_t Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None Feb 23 
02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590807c2f00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled Feb 23 
02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 
02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:00 localhost 
ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:00 localhost ceph-osd[31709]: 
rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590807c2f00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 
read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.enable_blob_files: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590807c2f00)#012 cache_index_and_filter_blocks: 1#012 
cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590807c2f00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590807c2f00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590807c2f00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb:
Options.bloom_locality: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for 
column family [p-2]: Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590807c2f00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.write_buffer_size: 16777216 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 
32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:00 localhost 
ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 
02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590807c3120)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 
strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost 
ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: 
rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:00 localhost 
ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.periodic_compaction_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:00 
localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590807c3120)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7 Feb 23 02:39:00 localhost 
ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590807c3120)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 26cf5e9a-b55f-47f4-8a28-469c4f59adf6
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771832340159769, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771832340160021, "job": 1, "event": "recovery_finished"}
Feb 23 02:39:00 localhost ceph-osd[31709]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 23 02:39:00 localhost ceph-osd[31709]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Feb 23 02:39:00 localhost ceph-osd[31709]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Feb 23 02:39:00 localhost ceph-osd[31709]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 23 02:39:00 localhost ceph-osd[31709]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Feb 23 02:39:00 localhost ceph-osd[31709]: freelist init
Feb 23 02:39:00 localhost ceph-osd[31709]: freelist _read_cfg
Feb 23 02:39:00 localhost ceph-osd[31709]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 23 02:39:00 localhost ceph-osd[31709]: bluefs umount
Feb 23 02:39:00 localhost ceph-osd[31709]: bdev(0x55907f837180 /var/lib/ceph/osd/ceph-0/block) close
Feb 23 02:39:00 localhost podman[32046]:
Feb 23 02:39:00 localhost podman[32046]: 2026-02-23 07:39:00.370207721 +0000 UTC m=+0.074279538 container create 11b327202beb09185c2b0281ae9c4f930575dae9bb34069516ace8c642682d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate-test, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., version=7, vcs-type=git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 23 02:39:00 localhost ceph-osd[31709]: bdev(0x55907f837180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 23 02:39:00 localhost ceph-osd[31709]: bdev(0x55907f837180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 23 02:39:00 localhost ceph-osd[31709]: bdev(0x55907f837180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 23 02:39:00 localhost ceph-osd[31709]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Feb 23 02:39:00 localhost ceph-osd[31709]: bluefs mount
Feb 23 02:39:00 localhost ceph-osd[31709]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Feb 23 02:39:00 localhost ceph-osd[31709]: bluefs mount shared_bdev_used = 4718592
Feb 23 02:39:00 localhost ceph-osd[31709]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: RocksDB version: 7.9.2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Git sha 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Compile date 2026-02-06 00:00:00
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: DB SUMMARY
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: DB Session ID: T2EQE2PDOXO8A319XMQW
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: CURRENT file: CURRENT
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: IDENTITY file: IDENTITY
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.error_if_exists: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.create_if_missing: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_checks: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.flush_verify_memtable_count: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.env: 0x55907faca8c0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.fs: LegacyFileSystem
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.info_log: 0x559080805c40
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_file_opening_threads: 16
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.statistics: (nil)
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.use_fsync: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_log_file_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_manifest_file_size: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.log_file_time_to_roll: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.keep_log_file_num: 1000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.recycle_log_file_num: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.allow_fallocate: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.allow_mmap_reads: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.allow_mmap_writes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.use_direct_reads: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.create_missing_column_families: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.db_log_dir:
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.wal_dir: db.wal
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_cache_numshardbits: 6
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.WAL_ttl_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.WAL_size_limit_MB: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.manifest_preallocation_size: 4194304
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.is_fd_close_on_exec: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.advise_random_on_open: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.db_write_buffer_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_manager: 0x55907f820140
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.access_hint_on_compaction_start: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.random_access_max_buffer_size: 1048576
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.use_adaptive_mutex: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.rate_limiter: (nil)
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.wal_recovery_mode: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_thread_tracking: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_pipelined_write: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.unordered_write: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.allow_concurrent_memtable_write: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_thread_max_yield_usec: 100
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_thread_slow_yield_usec: 3
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.row_cache: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.wal_filter: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.avoid_flush_during_recovery: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.allow_ingest_behind: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.two_write_queues: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.manual_wal_flush: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.wal_compression: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.atomic_flush: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.persist_stats_to_disk: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_dbid_to_manifest: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.log_readahead_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.file_checksum_gen_factory: Unknown
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.best_efforts_recovery: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.allow_data_in_errors: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.db_host_id: __hostname__
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enforce_single_del_contracts: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_background_jobs: 4
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_background_compactions: -1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_subcompactions: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.avoid_flush_during_shutdown: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.writable_file_max_buffer_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.delayed_write_rate : 16777216
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_total_wal_size: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.stats_dump_period_sec: 600
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.stats_persist_period_sec: 600
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.stats_history_buffer_size: 1048576
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_open_files: -1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bytes_per_sync: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.wal_bytes_per_sync: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.strict_bytes_per_sync: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_readahead_size: 2097152
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_background_flushes: -1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Compression algorithms supported:
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: #011kZSTD supported: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: #011kXpressCompression supported: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: #011kBZip2Compression supported: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: #011kLZ4Compression supported: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: #011kZlibCompression supported: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: #011kLZ4HCCompression supported: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: #011kSnappyCompression supported: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559080804880)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64
Feb 23 02:39:00 localhost systemd[1]: Started libpod-conmon-11b327202beb09185c2b0281ae9c4f930575dae9bb34069516ace8c642682d18.scope.
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]:
rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: Feb 23 02:39:00 localhost ceph-osd[31709]: 
rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.blob_compaction_readahead_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559080804880)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 
1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost 
ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:00 localhost 
ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: 
flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559080804880)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost 
ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:00 localhost ceph-osd[31709]: 
rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.bloom_locality: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for 
column family [m-2]: Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559080804880)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.write_buffer_size: 16777216
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559080804880)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1
Feb 23 02:39:00 localhost podman[32046]: 2026-02-23 07:39:00.34023789 +0000 UTC m=+0.044309717 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559080804880)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559080804880)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80e2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb:
Options.enable_blob_files: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559080804ac0)#012 cache_index_and_filter_blocks: 1#012 
cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80f610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 
Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.disable_auto_compactions: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:00 localhost 
ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:00 
localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559080804ac0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80f610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:00 localhost 
ceph-osd[31709]: rocksdb: Options.compression: LZ4 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 
32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.merge_operator: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559080804ac0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55907f80f610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 
block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression: LZ4 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.num_levels: 7 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost 
ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: 
Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:00 
localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 
23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Feb 23 02:39:00 localhost 
ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 26cf5e9a-b55f-47f4-8a28-469c4f59adf6 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771832340420159, "job": 1, "event": "recovery_started", "wal_files": [31]} Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771832340430486, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 
21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771832340, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "26cf5e9a-b55f-47f4-8a28-469c4f59adf6", "db_session_id": "T2EQE2PDOXO8A319XMQW", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771832340434771, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, 
"filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771832340, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "26cf5e9a-b55f-47f4-8a28-469c4f59adf6", "db_session_id": "T2EQE2PDOXO8A319XMQW", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771832340438830, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771832340, 
"oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "26cf5e9a-b55f-47f4-8a28-469c4f59adf6", "db_session_id": "T2EQE2PDOXO8A319XMQW", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771832340443425, "job": 1, "event": "recovery_finished"} Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Feb 23 02:39:00 localhost systemd[1]: Started libcrun container. Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55907f878700 Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: DB pointer 0x559080721a00 Feb 23 02:39:00 localhost ceph-osd[31709]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Feb 23 02:39:00 localhost ceph-osd[31709]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4 Feb 23 02:39:00 localhost ceph-osd[31709]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 02:39:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 
writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.010 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.010 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.010 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.010 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, 
interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55907f80e2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 
GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55907f80e2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55907f80e2d0#2 capacity: 460.80 MB usag Feb 23 02:39:00 localhost ceph-osd[31709]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Feb 23 02:39:00 localhost ceph-osd[31709]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Feb 23 02:39:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f703ce43c4113f408c051d486ce3610f26c5a95a4398bac801b0921264cf33e/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:00 localhost ceph-osd[31709]: _get_class not permitted to load lua Feb 23 02:39:00 localhost ceph-osd[31709]: _get_class not permitted to load sdk Feb 23 02:39:00 localhost ceph-osd[31709]: _get_class not permitted to load test_remote_reads Feb 23 02:39:00 localhost ceph-osd[31709]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients Feb 23 02:39:00 
localhost ceph-osd[31709]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Feb 23 02:39:00 localhost ceph-osd[31709]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds Feb 23 02:39:00 localhost ceph-osd[31709]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Feb 23 02:39:00 localhost ceph-osd[31709]: osd.0 0 load_pgs Feb 23 02:39:00 localhost ceph-osd[31709]: osd.0 0 load_pgs opened 0 pgs Feb 23 02:39:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f703ce43c4113f408c051d486ce3610f26c5a95a4398bac801b0921264cf33e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:00 localhost ceph-osd[31709]: osd.0 0 log_to_monitors true Feb 23 02:39:00 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0[31705]: 2026-02-23T07:39:00.492+0000 7fcc1fb4ba80 -1 osd.0 0 log_to_monitors true Feb 23 02:39:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f703ce43c4113f408c051d486ce3610f26c5a95a4398bac801b0921264cf33e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f703ce43c4113f408c051d486ce3610f26c5a95a4398bac801b0921264cf33e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f703ce43c4113f408c051d486ce3610f26c5a95a4398bac801b0921264cf33e/merged/var/lib/ceph/osd/ceph-3 supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:00 localhost podman[32046]: 2026-02-23 07:39:00.533244552 +0000 UTC m=+0.237316379 container init 11b327202beb09185c2b0281ae9c4f930575dae9bb34069516ace8c642682d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate-test, 
release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, version=7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64) Feb 23 02:39:00 localhost podman[32046]: 2026-02-23 07:39:00.544309528 +0000 UTC m=+0.248381345 container start 11b327202beb09185c2b0281ae9c4f930575dae9bb34069516ace8c642682d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate-test, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, version=7, RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-type=git, 
name=rhceph, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.buildah.version=1.42.2, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 23 02:39:00 localhost podman[32046]: 2026-02-23 07:39:00.545380433 +0000 UTC m=+0.249452250 container attach 11b327202beb09185c2b0281ae9c4f930575dae9bb34069516ace8c642682d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate-test, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , RELEASE=main, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, 
distribution-scope=public) Feb 23 02:39:00 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate-test[32229]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Feb 23 02:39:00 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate-test[32229]: [--no-systemd] [--no-tmpfs] Feb 23 02:39:00 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate-test[32229]: ceph-volume activate: error: unrecognized arguments: --bad-option Feb 23 02:39:00 localhost systemd[1]: libpod-11b327202beb09185c2b0281ae9c4f930575dae9bb34069516ace8c642682d18.scope: Deactivated successfully. Feb 23 02:39:00 localhost podman[32046]: 2026-02-23 07:39:00.765549938 +0000 UTC m=+0.469621795 container died 11b327202beb09185c2b0281ae9c4f930575dae9bb34069516ace8c642682d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate-test, architecture=x86_64, release=1770267347, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, maintainer=Guillaume Abrioux , vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, 
distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 23 02:39:00 localhost systemd[1]: tmp-crun.4E2HF2.mount: Deactivated successfully. Feb 23 02:39:00 localhost systemd[1]: var-lib-containers-storage-overlay-93c66a0a6ce45f0f2dd183f7674011ad59ff98779a0365a104294ccd6c945053-merged.mount: Deactivated successfully. Feb 23 02:39:00 localhost systemd[1]: var-lib-containers-storage-overlay-6f703ce43c4113f408c051d486ce3610f26c5a95a4398bac801b0921264cf33e-merged.mount: Deactivated successfully. Feb 23 02:39:00 localhost podman[32282]: 2026-02-23 07:39:00.865141364 +0000 UTC m=+0.090752184 container remove 11b327202beb09185c2b0281ae9c4f930575dae9bb34069516ace8c642682d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate-test, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, release=1770267347, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, RELEASE=main, architecture=x86_64, ceph=True, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main) Feb 23 02:39:00 localhost 
systemd[1]: libpod-conmon-11b327202beb09185c2b0281ae9c4f930575dae9bb34069516ace8c642682d18.scope: Deactivated successfully. Feb 23 02:39:01 localhost systemd[1]: Reloading. Feb 23 02:39:01 localhost systemd-sysv-generator[32337]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:39:01 localhost systemd-rc-local-generator[32334]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:39:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:39:01 localhost systemd[1]: Reloading. Feb 23 02:39:01 localhost systemd-rc-local-generator[32379]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:39:01 localhost systemd-sysv-generator[32385]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:39:01 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Feb 23 02:39:01 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Feb 23 02:39:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:39:01 localhost systemd[1]: Starting Ceph osd.3 for f1fea371-cb69-578d-a3d0-b5c472a84b46... 
Feb 23 02:39:02 localhost podman[32442]: Feb 23 02:39:02 localhost podman[32442]: 2026-02-23 07:39:02.040938198 +0000 UTC m=+0.077848882 container create 4c99c6cb474b828fa18479a28a91e094a88d83d7b1572dc60af873ac1af09238 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate, release=1770267347, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.component=rhceph-container, architecture=x86_64, ceph=True, GIT_CLEAN=True, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, vcs-type=git, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7) Feb 23 02:39:02 localhost systemd[1]: Started libcrun container. 
Feb 23 02:39:02 localhost podman[32442]: 2026-02-23 07:39:02.011602493 +0000 UTC m=+0.048513207 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 02:39:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ee488cbfafbadf6c6527e12c692aa43c5cedc7e5541c329abebddc9fb405bd/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ee488cbfafbadf6c6527e12c692aa43c5cedc7e5541c329abebddc9fb405bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ee488cbfafbadf6c6527e12c692aa43c5cedc7e5541c329abebddc9fb405bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ee488cbfafbadf6c6527e12c692aa43c5cedc7e5541c329abebddc9fb405bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ee488cbfafbadf6c6527e12c692aa43c5cedc7e5541c329abebddc9fb405bd/merged/var/lib/ceph/osd/ceph-3 supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:02 localhost podman[32442]: 2026-02-23 07:39:02.177412581 +0000 UTC m=+0.214323275 container init 4c99c6cb474b828fa18479a28a91e094a88d83d7b1572dc60af873ac1af09238 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate, release=1770267347, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, 
version=7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, vcs-type=git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 02:39:02 localhost ceph-osd[31709]: osd.0 0 done with init, starting boot process Feb 23 02:39:02 localhost ceph-osd[31709]: osd.0 0 start_boot Feb 23 02:39:02 localhost ceph-osd[31709]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1 Feb 23 02:39:02 localhost ceph-osd[31709]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Feb 23 02:39:02 localhost ceph-osd[31709]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Feb 23 02:39:02 localhost ceph-osd[31709]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Feb 23 02:39:02 localhost ceph-osd[31709]: osd.0 0 bench count 12288000 bsize 4 KiB Feb 23 02:39:02 localhost podman[32442]: 2026-02-23 07:39:02.187593806 +0000 UTC m=+0.224504500 container start 4c99c6cb474b828fa18479a28a91e094a88d83d7b1572dc60af873ac1af09238 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_CLEAN=True) Feb 23 02:39:02 localhost systemd[1]: tmp-crun.eQ84OY.mount: Deactivated successfully. 
Feb 23 02:39:02 localhost podman[32442]: 2026-02-23 07:39:02.188005926 +0000 UTC m=+0.224916620 container attach 4c99c6cb474b828fa18479a28a91e094a88d83d7b1572dc60af873ac1af09238 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate, name=rhceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, version=7, architecture=x86_64, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, GIT_BRANCH=main, io.openshift.expose-services=, ceph=True, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 23 02:39:02 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate[32456]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Feb 23 02:39:02 localhost bash[32442]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Feb 23 02:39:02 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate[32456]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-3 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Feb 23 02:39:02 localhost bash[32442]: Running command: 
/usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-3 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Feb 23 02:39:02 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate[32456]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Feb 23 02:39:02 localhost bash[32442]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Feb 23 02:39:02 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate[32456]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Feb 23 02:39:02 localhost bash[32442]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Feb 23 02:39:02 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate[32456]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-3/block Feb 23 02:39:02 localhost bash[32442]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-3/block Feb 23 02:39:02 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate[32456]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Feb 23 02:39:02 localhost bash[32442]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Feb 23 02:39:02 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate[32456]: --> ceph-volume raw activate successful for osd ID: 3 Feb 23 02:39:02 localhost bash[32442]: --> ceph-volume raw activate successful for osd ID: 3 Feb 23 02:39:02 localhost systemd[1]: libpod-4c99c6cb474b828fa18479a28a91e094a88d83d7b1572dc60af873ac1af09238.scope: Deactivated successfully. 
Feb 23 02:39:02 localhost podman[32442]: 2026-02-23 07:39:02.80487593 +0000 UTC m=+0.841786644 container died 4c99c6cb474b828fa18479a28a91e094a88d83d7b1572dc60af873ac1af09238 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , RELEASE=main, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, ceph=True, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git) Feb 23 02:39:02 localhost podman[32573]: 2026-02-23 07:39:02.908906771 +0000 UTC m=+0.091176843 container remove 4c99c6cb474b828fa18479a28a91e094a88d83d7b1572dc60af873ac1af09238 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3-activate, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
build-date=2026-02-09T10:25:24Z, RELEASE=main, ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc.) Feb 23 02:39:03 localhost systemd[1]: var-lib-containers-storage-overlay-d6ee488cbfafbadf6c6527e12c692aa43c5cedc7e5541c329abebddc9fb405bd-merged.mount: Deactivated successfully. 
Feb 23 02:39:03 localhost podman[32634]: Feb 23 02:39:03 localhost podman[32634]: 2026-02-23 07:39:03.214824298 +0000 UTC m=+0.065764562 container create a9849fe1d4f8df8d6e8dbbdff8f83053c4d39e5d4505d193716dbf668060565a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-type=git, RELEASE=main, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True) Feb 23 02:39:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d44dc6bc9464832d9d5fac5597a0d1adbd982cb6aa7facf6648d3bbd3cd7e1d/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:03 localhost podman[32634]: 2026-02-23 07:39:03.190450283 +0000 UTC m=+0.041390557 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 02:39:03 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/6d44dc6bc9464832d9d5fac5597a0d1adbd982cb6aa7facf6648d3bbd3cd7e1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d44dc6bc9464832d9d5fac5597a0d1adbd982cb6aa7facf6648d3bbd3cd7e1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d44dc6bc9464832d9d5fac5597a0d1adbd982cb6aa7facf6648d3bbd3cd7e1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d44dc6bc9464832d9d5fac5597a0d1adbd982cb6aa7facf6648d3bbd3cd7e1d/merged/var/lib/ceph/osd/ceph-3 supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:03 localhost podman[32634]: 2026-02-23 07:39:03.346569566 +0000 UTC m=+0.197509820 container init a9849fe1d4f8df8d6e8dbbdff8f83053c4d39e5d4505d193716dbf668060565a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3, name=rhceph, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.component=rhceph-container, ceph=True, vcs-type=git, 
io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 23 02:39:03 localhost podman[32634]: 2026-02-23 07:39:03.356145227 +0000 UTC m=+0.207085471 container start a9849fe1d4f8df8d6e8dbbdff8f83053c4d39e5d4505d193716dbf668060565a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main, name=rhceph, release=1770267347, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 23 02:39:03 localhost bash[32634]: a9849fe1d4f8df8d6e8dbbdff8f83053c4d39e5d4505d193716dbf668060565a Feb 23 02:39:03 localhost systemd[1]: Started Ceph osd.3 for f1fea371-cb69-578d-a3d0-b5c472a84b46. 
Feb 23 02:39:03 localhost ceph-osd[32652]: set uid:gid to 167:167 (ceph:ceph) Feb 23 02:39:03 localhost ceph-osd[32652]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-osd, pid 2 Feb 23 02:39:03 localhost ceph-osd[32652]: pidfile_write: ignore empty --pid-file Feb 23 02:39:03 localhost ceph-osd[32652]: bdev(0x562d239aee00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block Feb 23 02:39:03 localhost ceph-osd[32652]: bdev(0x562d239aee00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument Feb 23 02:39:03 localhost ceph-osd[32652]: bdev(0x562d239aee00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 23 02:39:03 localhost ceph-osd[32652]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Feb 23 02:39:03 localhost ceph-osd[32652]: bdev(0x562d239af180 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block Feb 23 02:39:03 localhost ceph-osd[32652]: bdev(0x562d239af180 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument Feb 23 02:39:03 localhost ceph-osd[32652]: bdev(0x562d239af180 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 23 02:39:03 localhost ceph-osd[32652]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-3/block size 7.0 GiB Feb 23 02:39:03 localhost ceph-osd[32652]: bdev(0x562d239af180 /var/lib/ceph/osd/ceph-3/block) close Feb 23 02:39:03 localhost ceph-osd[32652]: bdev(0x562d239aee00 /var/lib/ceph/osd/ceph-3/block) close Feb 23 02:39:03 localhost ceph-osd[32652]: starting osd.3 osd_data /var/lib/ceph/osd/ceph-3 /var/lib/ceph/osd/ceph-3/journal Feb 23 02:39:03 localhost ceph-osd[32652]: 
load: jerasure load: lrc Feb 23 02:39:03 localhost ceph-osd[32652]: bdev(0x562d239aee00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block Feb 23 02:39:03 localhost ceph-osd[32652]: bdev(0x562d239aee00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument Feb 23 02:39:03 localhost ceph-osd[32652]: bdev(0x562d239aee00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 23 02:39:03 localhost ceph-osd[32652]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Feb 23 02:39:03 localhost ceph-osd[32652]: bdev(0x562d239aee00 /var/lib/ceph/osd/ceph-3/block) close Feb 23 02:39:03 localhost ceph-osd[32652]: bdev(0x562d239aee00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block Feb 23 02:39:03 localhost ceph-osd[32652]: bdev(0x562d239aee00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument Feb 23 02:39:03 localhost ceph-osd[32652]: bdev(0x562d239aee00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 23 02:39:03 localhost ceph-osd[32652]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Feb 23 02:39:03 localhost ceph-osd[32652]: bdev(0x562d239aee00 /var/lib/ceph/osd/ceph-3/block) close Feb 23 02:39:04 localhost systemd[1]: tmp-crun.aRJaZQ.mount: Deactivated successfully. 
Feb 23 02:39:04 localhost podman[32746]: Feb 23 02:39:04 localhost podman[32746]: 2026-02-23 07:39:04.114813642 +0000 UTC m=+0.076388529 container create 62adf31af182710bdf0b51c4f34eed35a865b619fba7049a8f522ea402a65727 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_ganguly, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, name=rhceph, vcs-type=git, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 23 02:39:04 localhost systemd[1]: Started libpod-conmon-62adf31af182710bdf0b51c4f34eed35a865b619fba7049a8f522ea402a65727.scope. Feb 23 02:39:04 localhost podman[32746]: 2026-02-23 07:39:04.076165322 +0000 UTC m=+0.037740219 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 02:39:04 localhost systemd[1]: Started libcrun container. 
Feb 23 02:39:04 localhost podman[32746]: 2026-02-23 07:39:04.193570885 +0000 UTC m=+0.155145742 container init 62adf31af182710bdf0b51c4f34eed35a865b619fba7049a8f522ea402a65727 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_ganguly, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, distribution-scope=public, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True) Feb 23 02:39:04 localhost podman[32746]: 2026-02-23 07:39:04.207651004 +0000 UTC m=+0.169225881 container start 62adf31af182710bdf0b51c4f34eed35a865b619fba7049a8f522ea402a65727 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_ganguly, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
CEPH_POINT_RELEASE=, name=rhceph, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, release=1770267347, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7) Feb 23 02:39:04 localhost podman[32746]: 2026-02-23 07:39:04.208026743 +0000 UTC m=+0.169601620 container attach 62adf31af182710bdf0b51c4f34eed35a865b619fba7049a8f522ea402a65727 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_ganguly, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, maintainer=Guillaume Abrioux , release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_CLEAN=True, 
architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, ceph=True, com.redhat.component=rhceph-container) Feb 23 02:39:04 localhost sleepy_ganguly[32759]: 167 167 Feb 23 02:39:04 localhost systemd[1]: libpod-62adf31af182710bdf0b51c4f34eed35a865b619fba7049a8f522ea402a65727.scope: Deactivated successfully. Feb 23 02:39:04 localhost ceph-osd[32652]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second Feb 23 02:39:04 localhost ceph-osd[32652]: osd.3:0.OSDShard using op scheduler mclock_scheduler, cutoff=196 Feb 23 02:39:04 localhost ceph-osd[32652]: bdev(0x562d239aee00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block Feb 23 02:39:04 localhost ceph-osd[32652]: bdev(0x562d239aee00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument Feb 23 02:39:04 localhost ceph-osd[32652]: bdev(0x562d239aee00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 23 02:39:04 localhost ceph-osd[32652]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Feb 23 02:39:04 localhost ceph-osd[32652]: bdev(0x562d239af180 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block Feb 23 02:39:04 localhost ceph-osd[32652]: bdev(0x562d239af180 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument Feb 23 02:39:04 localhost ceph-osd[32652]: bdev(0x562d239af180 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 23 02:39:04 localhost ceph-osd[32652]: bluefs add_block_device bdev 1 path 
/var/lib/ceph/osd/ceph-3/block size 7.0 GiB Feb 23 02:39:04 localhost ceph-osd[32652]: bluefs mount Feb 23 02:39:04 localhost ceph-osd[32652]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Feb 23 02:39:04 localhost ceph-osd[32652]: bluefs mount shared_bdev_used = 0 Feb 23 02:39:04 localhost ceph-osd[32652]: bluestore(/var/lib/ceph/osd/ceph-3) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: RocksDB version: 7.9.2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Git sha 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Compile date 2026-02-06 00:00:00 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: DB SUMMARY Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: DB Session ID: 86ANZCU0LXAA1KUKB2RL Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: CURRENT file: CURRENT Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: IDENTITY file: IDENTITY Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.error_if_exists: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.create_if_missing: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_checks: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.flush_verify_memtable_count: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.env: 0x562d23c42cb0 Feb 23 02:39:04 
localhost ceph-osd[32652]: rocksdb: Options.fs: LegacyFileSystem Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.info_log: 0x562d24940900 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_file_opening_threads: 16 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.statistics: (nil) Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.use_fsync: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_log_file_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_manifest_file_size: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.log_file_time_to_roll: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.keep_log_file_num: 1000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.recycle_log_file_num: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.allow_fallocate: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.allow_mmap_reads: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.allow_mmap_writes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.use_direct_reads: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.create_missing_column_families: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.db_log_dir: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.wal_dir: db.wal Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_cache_numshardbits: 6 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.WAL_ttl_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.WAL_size_limit_MB: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.manifest_preallocation_size: 4194304 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.is_fd_close_on_exec: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.advise_random_on_open: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.db_write_buffer_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_manager: 0x562d23998140 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.access_hint_on_compaction_start: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.random_access_max_buffer_size: 1048576 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.use_adaptive_mutex: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.rate_limiter: (nil) Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.wal_recovery_mode: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_thread_tracking: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_pipelined_write: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.unordered_write: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.allow_concurrent_memtable_write: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_thread_max_yield_usec: 100 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_thread_slow_yield_usec: 3 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.row_cache: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.wal_filter: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.avoid_flush_during_recovery: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.allow_ingest_behind: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.two_write_queues: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.manual_wal_flush: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: 
rocksdb: Options.wal_compression: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.atomic_flush: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.persist_stats_to_disk: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_dbid_to_manifest: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.log_readahead_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.file_checksum_gen_factory: Unknown Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.best_efforts_recovery: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.allow_data_in_errors: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.db_host_id: __hostname__ Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enforce_single_del_contracts: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_background_jobs: 4 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_background_compactions: -1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_subcompactions: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.avoid_flush_during_shutdown: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.writable_file_max_buffer_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.delayed_write_rate : 16777216 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_total_wal_size: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.stats_dump_period_sec: 600 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.stats_persist_period_sec: 600 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.stats_history_buffer_size: 1048576 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_open_files: -1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bytes_per_sync: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.wal_bytes_per_sync: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.strict_bytes_per_sync: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_readahead_size: 2097152 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_background_flushes: -1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Compression algorithms supported: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: #011kZSTD supported: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: #011kXpressCompression supported: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: #011kBZip2Compression supported: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: #011kLZ4Compression supported: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: #011kZlibCompression supported: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: #011kLZ4HCCompression supported: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: #011kSnappyCompression supported: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Fast CRC32 supported: Supported on x86 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: DMutex implementation: pthread_mutex_t Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Feb 23 
02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24940ac0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d23986850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 
initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: 
rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.enable_blob_garbage_collection: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24940ac0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d23986850#012 block_cache_name: 
BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:04 localhost 
ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:04 localhost 
ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:04 
localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:04 localhost 
ceph-osd[32652]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None Feb 23 
02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24940ac0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d23986850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: 
nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24940ac0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d23986850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 
read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.enable_blob_files: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24940ac0)#012 cache_index_and_filter_blocks: 1#012 
cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d23986850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 
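The `table_factory options` entries above arrive as one long line because the syslog transport escapes embedded newlines as `#012` (the octal code for `\n`). A minimal sketch of restoring the original multi-line layout, using an abridged sample line (the sample text is illustrative, not a verbatim entry from this capture):

```python
# syslog/journald commonly escapes control characters as "#ooo" octal
# sequences; "#012" is a newline. Replacing it restores the multi-line
# rocksdb table_factory dump. Sample input is abridged for illustration.
raw = ("table_factory options: flush_block_policy_factory: "
       "FlushBlockBySizePolicyFactory (0x562d24940ac0)#012  "
       "cache_index_and_filter_blocks: 1#012  block_size: 4096")

restored = raw.replace("#012", "\n")
print(restored)
```

With the escapes expanded, each table-factory option reads on its own line, matching how RocksDB originally wrote the dump to its log.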
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.disable_auto_compactions: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:04 localhost 
ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:04 
localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24940ac0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d23986850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:04 localhost 
ceph-osd[32652]: rocksdb: Options.compression: LZ4 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 
32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24940ac0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d23986850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 
block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost 
ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:04 
localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 
23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory 
options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24940ce0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d239862d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost 
ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:04 localhost ceph-osd[32652]: 
rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.bloom_locality: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for 
column family [O-1]: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24940ce0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d239862d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.write_buffer_size: 16777216 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 
32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:04 localhost 
ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:04 localhost podman[32764]: 2026-02-23 07:39:04.273734833 +0000 UTC m=+0.052088023 container died 62adf31af182710bdf0b51c4f34eed35a865b619fba7049a8f522ea402a65727 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_ganguly, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, maintainer=Guillaume Abrioux , vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, architecture=x86_64, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, ceph=True, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24940ce0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d239862d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 53b8225b-bd0b-4036-b85c-b4b12165a5a3
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771832344276618, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771832344276825, "job": 1, "event": "recovery_finished"}
Feb 23 02:39:04 localhost ceph-osd[32652]: bluestore(/var/lib/ceph/osd/ceph-3) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 23 02:39:04 localhost ceph-osd[32652]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta old nid_max 1025
Feb 23 02:39:04 localhost ceph-osd[32652]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta old blobid_max 10240
Feb 23 02:39:04 localhost ceph-osd[32652]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 23 02:39:04 localhost ceph-osd[32652]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta min_alloc_size 0x1000
Feb 23 02:39:04 localhost ceph-osd[32652]: freelist init
Feb 23 02:39:04 localhost ceph-osd[32652]: freelist _read_cfg
Feb 23 02:39:04 localhost ceph-osd[32652]: bluestore(/var/lib/ceph/osd/ceph-3) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 23 02:39:04 localhost ceph-osd[32652]: bluefs umount
Feb 23 02:39:04 localhost ceph-osd[32652]: bdev(0x562d239af180 /var/lib/ceph/osd/ceph-3/block) close
Feb 23 02:39:04 localhost podman[32764]: 2026-02-23 07:39:04.319711739 +0000 UTC m=+0.098064909 container remove 62adf31af182710bdf0b51c4f34eed35a865b619fba7049a8f522ea402a65727 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_ganguly, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, ceph=True, RELEASE=main, version=7, io.openshift.expose-services=, release=1770267347, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 23 02:39:04 localhost systemd[1]: libpod-conmon-62adf31af182710bdf0b51c4f34eed35a865b619fba7049a8f522ea402a65727.scope: Deactivated successfully.
Feb 23 02:39:04 localhost ceph-osd[31709]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 29.561 iops: 7567.673 elapsed_sec: 0.396
Feb 23 02:39:04 localhost ceph-osd[31709]: log_channel(cluster) log [WRN] : OSD bench result of 7567.672931 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 23 02:39:04 localhost ceph-osd[31709]: osd.0 0 waiting for initial osdmap
Feb 23 02:39:04 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0[31705]: 2026-02-23T07:39:04.417+0000 7fcc1baca640 -1 osd.0 0 waiting for initial osdmap
Feb 23 02:39:04 localhost ceph-osd[31709]: osd.0 11 crush map has features 288514050185494528, adjusting msgr requires for clients
Feb 23 02:39:04 localhost ceph-osd[31709]: osd.0 11 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Feb 23 02:39:04 localhost ceph-osd[31709]: osd.0 11 crush map has features 3314932999778484224, adjusting msgr requires for osds
Feb 23 02:39:04 localhost ceph-osd[31709]: osd.0 11 check_osdmap_features require_osd_release unknown -> reef
Feb 23 02:39:04 localhost ceph-osd[31709]: osd.0 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 23 02:39:04 localhost ceph-osd[31709]: osd.0 11 set_numa_affinity not setting numa affinity
Feb 23 02:39:04 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-0[31705]: 2026-02-23T07:39:04.432+0000 7fcc170f4640 -1 osd.0 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 23 02:39:04 localhost ceph-osd[31709]: osd.0 11 _collect_metadata loop3: no unique device id for loop3: fallback method has no model nor serial
Feb 23 02:39:04 localhost podman[32980]:
Feb 23 02:39:04 localhost ceph-osd[32652]: bdev(0x562d239af180 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Feb 23 02:39:04 localhost ceph-osd[32652]: bdev(0x562d239af180 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Feb 23 02:39:04 localhost ceph-osd[32652]: bdev(0x562d239af180 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 23 02:39:04 localhost ceph-osd[32652]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-3/block size 7.0 GiB
Feb 23 02:39:04 localhost ceph-osd[32652]: bluefs mount
Feb 23 02:39:04 localhost ceph-osd[32652]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Feb 23 02:39:04 localhost ceph-osd[32652]: bluefs mount shared_bdev_used = 4718592
Feb 23 02:39:04 localhost ceph-osd[32652]: bluestore(/var/lib/ceph/osd/ceph-3) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: RocksDB version: 7.9.2
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Git sha 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Compile date 2026-02-06 00:00:00
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: DB SUMMARY
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: DB Session ID: 86ANZCU0LXAA1KUKB2RK
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: CURRENT file: CURRENT
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: IDENTITY file: IDENTITY
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.error_if_exists: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.create_if_missing: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_checks: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.flush_verify_memtable_count: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.env: 0x562d239ea310
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.fs: LegacyFileSystem
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.info_log: 0x562d23a4f320
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_file_opening_threads: 16
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.statistics: (nil)
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.use_fsync: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_log_file_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_manifest_file_size: 1073741824
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.log_file_time_to_roll: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.keep_log_file_num: 1000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.recycle_log_file_num: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.allow_fallocate: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.allow_mmap_reads: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.allow_mmap_writes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.use_direct_reads: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.create_missing_column_families: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.db_log_dir:
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.wal_dir: db.wal
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_cache_numshardbits: 6
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.WAL_ttl_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.WAL_size_limit_MB: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.manifest_preallocation_size: 4194304
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.is_fd_close_on_exec: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.advise_random_on_open: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.db_write_buffer_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_manager: 0x562d23999540
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.access_hint_on_compaction_start: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.random_access_max_buffer_size: 1048576
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.use_adaptive_mutex: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.rate_limiter: (nil)
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.wal_recovery_mode: 2
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_thread_tracking: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_pipelined_write: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.unordered_write: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.allow_concurrent_memtable_write: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_thread_max_yield_usec: 100
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_thread_slow_yield_usec: 3
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.row_cache: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.wal_filter: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.avoid_flush_during_recovery: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.allow_ingest_behind: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.two_write_queues: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.manual_wal_flush: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.wal_compression: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.atomic_flush: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.persist_stats_to_disk: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_dbid_to_manifest: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.log_readahead_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.file_checksum_gen_factory: Unknown
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.best_efforts_recovery: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.allow_data_in_errors: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.db_host_id: __hostname__
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enforce_single_del_contracts: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_background_jobs: 4
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_background_compactions: -1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_subcompactions: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.avoid_flush_during_shutdown: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.writable_file_max_buffer_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.delayed_write_rate : 16777216
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_total_wal_size: 1073741824
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.stats_dump_period_sec: 600
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.stats_persist_period_sec: 600
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.stats_history_buffer_size: 1048576
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_open_files: -1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bytes_per_sync: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.wal_bytes_per_sync: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.strict_bytes_per_sync: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_readahead_size: 2097152
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_background_flushes: -1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Compression algorithms supported:
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: #011kZSTD supported: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: #011kXpressCompression supported: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: #011kBZip2Compression supported: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: #011kLZ4Compression supported: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: #011kZlibCompression supported: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: #011kLZ4HCCompression supported: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: #011kSnappyCompression supported: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24941d60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d239862d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level:
32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:04 localhost 
ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 
02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:04 localhost ceph-osd[32652]: 
rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24941d60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d239862d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: 
(nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost 
ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:04 
localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 
23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory 
options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24941d60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d239862d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost 
ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:04 localhost ceph-osd[32652]: 
rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.bloom_locality: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for 
column family [m-2]: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24941d60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d239862d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.write_buffer_size: 16777216
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:04 localhost podman[32980]: 2026-02-23 07:39:04.536794649 +0000 UTC m=+0.070078846 container create f135942bc1c2488ed5cae4c5e80de398765191806d700c760fbffb63ab0ed53b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_hofstadter, GIT_BRANCH=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_CLEAN=True, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, distribution-scope=public, architecture=x86_64, version=7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24941d60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d239862d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24941d60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d239862d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d24941d60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d239862d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:04 localhost 
ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:04 localhost 
ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d23a4efa0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d23987610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 
use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false 
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:04 localhost 
ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.enable_blob_files: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d23a4efa0)#012 cache_index_and_filter_blocks: 1#012 
cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d23987610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression: LZ4 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.level0_slowdown_writes_trigger: 20 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: 
Options.disable_auto_compactions: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0 Feb 23 02:39:04 localhost 
ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 02:39:04 
localhost ceph-osd[32652]: rocksdb: Options.merge_operator: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_filter_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.sst_partitioner_factory: None Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d23a4efa0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562d23987610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.write_buffer_size: 16777216 Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number: 64 Feb 23 02:39:04 localhost 
ceph-osd[32652]: rocksdb: Options.compression: LZ4
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.num_levels: 7
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.level: 32767
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.enabled: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.arena_block_size: 1048576
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.disable_auto_compactions: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_support: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.memtable_huge_page_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.bloom_locality: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.max_successive_merges: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.paranoid_file_checks: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.force_consistency_checks: 1
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.report_bg_io_stats: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.ttl: 2592000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_files: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.min_blob_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_size: 268435456
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compression_type: NoCompression
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.blob_file_starting_level: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 53b8225b-bd0b-4036-b85c-b4b12165a5a3
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771832344545114, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771832344551396, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771832344, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "53b8225b-bd0b-4036-b85c-b4b12165a5a3", "db_session_id": "86ANZCU0LXAA1KUKB2RK", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771832344555834, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1607, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 466, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771832344, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "53b8225b-bd0b-4036-b85c-b4b12165a5a3", "db_session_id": "86ANZCU0LXAA1KUKB2RK", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771832344560554, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771832344, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "53b8225b-bd0b-4036-b85c-b4b12165a5a3", "db_session_id": "86ANZCU0LXAA1KUKB2RK", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771832344565227, "job": 1, "event": "recovery_finished"}
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb 23 02:39:04 localhost systemd[1]: Started libpod-conmon-f135942bc1c2488ed5cae4c5e80de398765191806d700c760fbffb63ab0ed53b.scope.
Feb 23 02:39:04 localhost systemd[1]: Started libcrun container.
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562d249ea380
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: DB pointer 0x562d2489ba00
Feb 23 02:39:04 localhost ceph-osd[32652]: bluestore(/var/lib/ceph/osd/ceph-3) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 23 02:39:04 localhost ceph-osd[32652]: bluestore(/var/lib/ceph/osd/ceph-3) _upgrade_super from 4, latest 4
Feb 23 02:39:04 localhost ceph-osd[32652]: bluestore(/var/lib/ceph/osd/ceph-3) _upgrade_super done
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 23 02:39:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562d239862d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562d239862d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562d239862d0#2 capacity: 460.80 MB usage: 0
Feb 23 02:39:04 localhost ceph-osd[32652]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb 23 02:39:04 localhost ceph-osd[32652]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb 23 02:39:04 localhost podman[32980]: 2026-02-23 07:39:04.498597291 +0000 UTC m=+0.031881538 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 02:39:04 localhost ceph-osd[32652]: _get_class not permitted to load lua
Feb 23 02:39:04 localhost ceph-osd[32652]: _get_class not permitted to load sdk
Feb 23 02:39:04 localhost ceph-osd[32652]: _get_class not permitted to load test_remote_reads
Feb 23 02:39:04 localhost ceph-osd[32652]: osd.3 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb 23 02:39:04 localhost ceph-osd[32652]: osd.3 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb 23 02:39:04 localhost ceph-osd[32652]: osd.3 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb 23 02:39:04 localhost ceph-osd[32652]: osd.3 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb 23 02:39:04 localhost ceph-osd[32652]: osd.3 0 load_pgs
Feb 23 02:39:04 localhost ceph-osd[32652]: osd.3 0 load_pgs opened 0 pgs
Feb 23 02:39:04 localhost ceph-osd[32652]: osd.3 0 log_to_monitors true
Feb 23 02:39:04 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3[32648]: 2026-02-23T07:39:04.604+0000 7f02e17dca80 -1 osd.3 0 log_to_monitors true
Feb 23 02:39:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aff234368de9f73cf09ee8a41e6fd794aaf621bfa4caaa8f1b68d371e6562ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 23 02:39:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aff234368de9f73cf09ee8a41e6fd794aaf621bfa4caaa8f1b68d371e6562ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 23 02:39:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aff234368de9f73cf09ee8a41e6fd794aaf621bfa4caaa8f1b68d371e6562ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 23 02:39:04 localhost podman[32980]: 2026-02-23 07:39:04.643436243 +0000 UTC m=+0.176720420 container init f135942bc1c2488ed5cae4c5e80de398765191806d700c760fbffb63ab0ed53b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_hofstadter, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main)
Feb 23 02:39:04 localhost podman[32980]: 2026-02-23 07:39:04.654998761 +0000 UTC m=+0.188282948 container start f135942bc1c2488ed5cae4c5e80de398765191806d700c760fbffb63ab0ed53b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_hofstadter, distribution-scope=public, GIT_BRANCH=main, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vcs-type=git, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, io.buildah.version=1.42.2, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, ceph=True, com.redhat.component=rhceph-container, version=7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 23 02:39:04 localhost podman[32980]: 2026-02-23 07:39:04.656829366 +0000 UTC m=+0.190113553 container attach f135942bc1c2488ed5cae4c5e80de398765191806d700c760fbffb63ab0ed53b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_hofstadter, release=1770267347, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, name=rhceph, CEPH_POINT_RELEASE=, distribution-scope=public, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.tags=rhceph ceph)
Feb 23 02:39:05 localhost systemd[1]: var-lib-containers-storage-overlay-c0e8e0b91502f71628fd3e243dc4fca6efc1579237b103819e185b227f4120a1-merged.mount: Deactivated successfully.
Feb 23 02:39:05 localhost suspicious_hofstadter[33177]: {
Feb 23 02:39:05 localhost suspicious_hofstadter[33177]: "297f7b49-6343-46a1-9f7d-f773f75868d6": {
Feb 23 02:39:05 localhost suspicious_hofstadter[33177]: "ceph_fsid": "f1fea371-cb69-578d-a3d0-b5c472a84b46",
Feb 23 02:39:05 localhost suspicious_hofstadter[33177]: "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Feb 23 02:39:05 localhost suspicious_hofstadter[33177]: "osd_id": 3,
Feb 23 02:39:05 localhost suspicious_hofstadter[33177]: "osd_uuid": "297f7b49-6343-46a1-9f7d-f773f75868d6",
Feb 23 02:39:05 localhost suspicious_hofstadter[33177]: "type": "bluestore"
Feb 23 02:39:05 localhost suspicious_hofstadter[33177]: },
Feb 23 02:39:05 localhost suspicious_hofstadter[33177]: "7c88b276-d2e3-48a7-91d4-30742e429227": {
Feb 23 02:39:05 localhost suspicious_hofstadter[33177]: "ceph_fsid": "f1fea371-cb69-578d-a3d0-b5c472a84b46",
Feb 23 02:39:05 localhost suspicious_hofstadter[33177]: "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Feb 23 02:39:05 localhost suspicious_hofstadter[33177]: "osd_id": 0,
Feb 23 02:39:05 localhost suspicious_hofstadter[33177]: "osd_uuid": "7c88b276-d2e3-48a7-91d4-30742e429227",
Feb 23 02:39:05 localhost suspicious_hofstadter[33177]: "type": "bluestore"
Feb 23 02:39:05 localhost suspicious_hofstadter[33177]: }
Feb 23 02:39:05 localhost suspicious_hofstadter[33177]: }
Feb 23 02:39:05 localhost systemd[1]: libpod-f135942bc1c2488ed5cae4c5e80de398765191806d700c760fbffb63ab0ed53b.scope: Deactivated successfully.
Feb 23 02:39:05 localhost podman[32980]: 2026-02-23 07:39:05.18678001 +0000 UTC m=+0.720064267 container died f135942bc1c2488ed5cae4c5e80de398765191806d700c760fbffb63ab0ed53b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_hofstadter, name=rhceph, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-type=git, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 23 02:39:05 localhost systemd[1]: tmp-crun.JFzsbW.mount: Deactivated successfully.
Feb 23 02:39:05 localhost systemd[1]: var-lib-containers-storage-overlay-7aff234368de9f73cf09ee8a41e6fd794aaf621bfa4caaa8f1b68d371e6562ff-merged.mount: Deactivated successfully.
Feb 23 02:39:05 localhost podman[33246]: 2026-02-23 07:39:05.28947436 +0000 UTC m=+0.087934696 container remove f135942bc1c2488ed5cae4c5e80de398765191806d700c760fbffb63ab0ed53b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_hofstadter, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_CLEAN=True, release=1770267347, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, name=rhceph, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, version=7)
Feb 23 02:39:05 localhost systemd[1]: libpod-conmon-f135942bc1c2488ed5cae4c5e80de398765191806d700c760fbffb63ab0ed53b.scope: Deactivated successfully.
Feb 23 02:39:05 localhost ceph-osd[31709]: osd.0 12 state: booting -> active
Feb 23 02:39:05 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 23 02:39:05 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 23 02:39:06 localhost ceph-osd[32652]: osd.3 0 done with init, starting boot process
Feb 23 02:39:06 localhost ceph-osd[32652]: osd.3 0 start_boot
Feb 23 02:39:06 localhost ceph-osd[32652]: osd.3 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 23 02:39:06 localhost ceph-osd[32652]: osd.3 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 23 02:39:06 localhost ceph-osd[32652]: osd.3 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 23 02:39:06 localhost ceph-osd[32652]: osd.3 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 23 02:39:06 localhost ceph-osd[32652]: osd.3 0 bench count 12288000 bsize 4 KiB
Feb 23 02:39:08 localhost podman[33372]: 2026-02-23 07:39:08.394195741 +0000 UTC m=+0.092229459 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, release=1770267347, RELEASE=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=)
Feb 23 02:39:08 localhost podman[33372]: 2026-02-23 07:39:08.500194971 +0000 UTC m=+0.198228669 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, vendor=Red Hat, Inc., GIT_BRANCH=main, io.buildah.version=1.42.2, name=rhceph, release=1770267347, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , RELEASE=main, architecture=x86_64, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, version=7)
Feb 23 02:39:08 localhost ceph-osd[31709]: osd.0 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 23 02:39:08 localhost ceph-osd[31709]: osd.0 15 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Feb 23 02:39:08 localhost ceph-osd[31709]: osd.0 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 23 02:39:08 localhost ceph-osd[32652]: osd.3 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 30.465 iops: 7799.071 elapsed_sec: 0.385
Feb 23 02:39:08 localhost ceph-osd[32652]: log_channel(cluster) log [WRN] : OSD bench result of 7799.070549 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 23 02:39:08 localhost ceph-osd[32652]: osd.3 0 waiting for initial osdmap
Feb 23 02:39:08 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3[32648]: 2026-02-23T07:39:08.751+0000 7f02dd75b640 -1 osd.3 0 waiting for initial osdmap
Feb 23 02:39:08 localhost ceph-osd[32652]: osd.3 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 23 02:39:08 localhost ceph-osd[32652]: osd.3 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Feb 23 02:39:08 localhost ceph-osd[32652]: osd.3 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 23 02:39:08 localhost ceph-osd[32652]: osd.3 15 check_osdmap_features require_osd_release unknown -> reef
Feb 23 02:39:08 localhost ceph-osd[32652]: osd.3 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 23 02:39:08 localhost ceph-osd[32652]: osd.3 15 set_numa_affinity not setting numa affinity
Feb 23 02:39:08 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-osd-3[32648]: 2026-02-23T07:39:08.780+0000 7f02d8d85640 -1 osd.3 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 23 02:39:08 localhost ceph-osd[32652]: osd.3 15 _collect_metadata loop4: no unique device id for loop4: fallback method has no model nor serial
Feb 23 02:39:09 localhost ceph-osd[32652]: osd.3 15 tick checking mon for new map
Feb 23 02:39:09 localhost ceph-osd[32652]: osd.3 16 state: booting -> active
Feb 23 02:39:09 localhost ceph-osd[32652]: osd.3 pg_epoch: 16 pg[1.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=16) [3] r=0 lpr=16 pi=[15,16)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:39:10 localhost podman[33570]:
Feb 23 02:39:10 localhost podman[33570]: 2026-02-23 07:39:10.422381715 +0000 UTC m=+0.056042998 container create b644ea949a18c12f056516f8c4a72cb67f7592508cd5db35ab69dcbd0715b299 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_torvalds, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_CLEAN=True, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , distribution-scope=public, vcs-type=git, architecture=x86_64, ceph=True, io.buildah.version=1.42.2, GIT_BRANCH=main, 
io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph) Feb 23 02:39:10 localhost systemd[26368]: Starting Mark boot as successful... Feb 23 02:39:10 localhost systemd[26368]: Finished Mark boot as successful. Feb 23 02:39:10 localhost systemd[1]: Started libpod-conmon-b644ea949a18c12f056516f8c4a72cb67f7592508cd5db35ab69dcbd0715b299.scope. Feb 23 02:39:10 localhost systemd[1]: Started libcrun container. Feb 23 02:39:10 localhost podman[33570]: 2026-02-23 07:39:10.392124657 +0000 UTC m=+0.025785980 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 02:39:10 localhost podman[33570]: 2026-02-23 07:39:10.502023791 +0000 UTC m=+0.135685104 container init b644ea949a18c12f056516f8c4a72cb67f7592508cd5db35ab69dcbd0715b299 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_torvalds, architecture=x86_64, release=1770267347, io.openshift.expose-services=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_CLEAN=True, version=7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, RELEASE=main, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph) Feb 23 
02:39:10 localhost podman[33570]: 2026-02-23 07:39:10.514336247 +0000 UTC m=+0.147997600 container start b644ea949a18c12f056516f8c4a72cb67f7592508cd5db35ab69dcbd0715b299 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_torvalds, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_CLEAN=True, architecture=x86_64, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.buildah.version=1.42.2, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True) Feb 23 02:39:10 localhost podman[33570]: 2026-02-23 07:39:10.514573922 +0000 UTC m=+0.148235225 container attach b644ea949a18c12f056516f8c4a72cb67f7592508cd5db35ab69dcbd0715b299 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_torvalds, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, 
GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.buildah.version=1.42.2, version=7, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc.) Feb 23 02:39:10 localhost systemd[1]: libpod-b644ea949a18c12f056516f8c4a72cb67f7592508cd5db35ab69dcbd0715b299.scope: Deactivated successfully. Feb 23 02:39:10 localhost magical_torvalds[33586]: 167 167 Feb 23 02:39:10 localhost podman[33570]: 2026-02-23 07:39:10.517146864 +0000 UTC m=+0.150808177 container died b644ea949a18c12f056516f8c4a72cb67f7592508cd5db35ab69dcbd0715b299 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_torvalds, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, 
RELEASE=main, io.openshift.tags=rhceph ceph, architecture=x86_64, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, version=7, release=1770267347, ceph=True, GIT_CLEAN=True, CEPH_POINT_RELEASE=) Feb 23 02:39:10 localhost systemd[1]: var-lib-containers-storage-overlay-0a3608b76f50d87c3e72595d62b30412aa4dbfc10b5bf9b889a0c42800bbb4e9-merged.mount: Deactivated successfully. Feb 23 02:39:10 localhost podman[33591]: 2026-02-23 07:39:10.615131561 +0000 UTC m=+0.085765564 container remove b644ea949a18c12f056516f8c4a72cb67f7592508cd5db35ab69dcbd0715b299 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_torvalds, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, CEPH_POINT_RELEASE=, ceph=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.42.2, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, build-date=2026-02-09T10:25:24Z, distribution-scope=public) 
Feb 23 02:39:10 localhost systemd[1]: libpod-conmon-b644ea949a18c12f056516f8c4a72cb67f7592508cd5db35ab69dcbd0715b299.scope: Deactivated successfully. Feb 23 02:39:10 localhost ceph-osd[32652]: osd.3 pg_epoch: 17 pg[1.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=17) [3,5,4] r=0 lpr=17 pi=[15,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [3] -> [3,5,4], acting [3] -> [3,5,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:39:10 localhost ceph-osd[32652]: osd.3 pg_epoch: 17 pg[1.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=17) [3,5,4] r=0 lpr=17 pi=[15,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:39:10 localhost podman[33613]: Feb 23 02:39:10 localhost podman[33613]: 2026-02-23 07:39:10.824213049 +0000 UTC m=+0.076245775 container create ffc2a6aaac5598ec107051a7e155abe4e61ddea89eccc966ce6d60b7f5e38f65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_shockley, GIT_BRANCH=main, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-type=git, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph) Feb 23 02:39:10 localhost systemd[1]: Started libpod-conmon-ffc2a6aaac5598ec107051a7e155abe4e61ddea89eccc966ce6d60b7f5e38f65.scope. Feb 23 02:39:10 localhost systemd[1]: Started libcrun container. Feb 23 02:39:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f03924f67bdd38e65980ea9372b0d1354c61fc0f1179271d75e127f60504499d/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:10 localhost podman[33613]: 2026-02-23 07:39:10.796794809 +0000 UTC m=+0.048827525 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 02:39:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f03924f67bdd38e65980ea9372b0d1354c61fc0f1179271d75e127f60504499d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f03924f67bdd38e65980ea9372b0d1354c61fc0f1179271d75e127f60504499d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 23 02:39:10 localhost podman[33613]: 2026-02-23 07:39:10.919967622 +0000 UTC m=+0.172000338 container init ffc2a6aaac5598ec107051a7e155abe4e61ddea89eccc966ce6d60b7f5e38f65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_shockley, RELEASE=main, CEPH_POINT_RELEASE=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc., version=7, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, ceph=True, release=1770267347, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git) Feb 23 02:39:10 localhost podman[33613]: 2026-02-23 07:39:10.929404788 +0000 UTC m=+0.181437504 container start ffc2a6aaac5598ec107051a7e155abe4e61ddea89eccc966ce6d60b7f5e38f65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_shockley, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_BRANCH=main, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
maintainer=Guillaume Abrioux , name=rhceph, RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64) Feb 23 02:39:10 localhost podman[33613]: 2026-02-23 07:39:10.929731256 +0000 UTC m=+0.181763972 container attach ffc2a6aaac5598ec107051a7e155abe4e61ddea89eccc966ce6d60b7f5e38f65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_shockley, io.openshift.expose-services=, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, ceph=True, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc.) 
Feb 23 02:39:11 localhost elastic_shockley[33628]: [
Feb 23 02:39:11 localhost elastic_shockley[33628]: {
Feb 23 02:39:11 localhost elastic_shockley[33628]: "available": false,
Feb 23 02:39:11 localhost elastic_shockley[33628]: "ceph_device": false,
Feb 23 02:39:11 localhost elastic_shockley[33628]: "device_id": "QEMU_DVD-ROM_QM00001",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "lsm_data": {},
Feb 23 02:39:11 localhost elastic_shockley[33628]: "lvs": [],
Feb 23 02:39:11 localhost elastic_shockley[33628]: "path": "/dev/sr0",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "rejected_reasons": [
Feb 23 02:39:11 localhost elastic_shockley[33628]: "Insufficient space (<5GB)",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "Has a FileSystem"
Feb 23 02:39:11 localhost elastic_shockley[33628]: ],
Feb 23 02:39:11 localhost elastic_shockley[33628]: "sys_api": {
Feb 23 02:39:11 localhost elastic_shockley[33628]: "actuators": null,
Feb 23 02:39:11 localhost elastic_shockley[33628]: "device_nodes": "sr0",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "human_readable_size": "482.00 KB",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "id_bus": "ata",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "model": "QEMU DVD-ROM",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "nr_requests": "2",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "partitions": {},
Feb 23 02:39:11 localhost elastic_shockley[33628]: "path": "/dev/sr0",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "removable": "1",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "rev": "2.5+",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "ro": "0",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "rotational": "1",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "sas_address": "",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "sas_device_handle": "",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "scheduler_mode": "mq-deadline",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "sectors": 0,
Feb 23 02:39:11 localhost elastic_shockley[33628]: "sectorsize": "2048",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "size": 493568.0,
Feb 23 02:39:11 localhost elastic_shockley[33628]: "support_discard": "0",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "type": "disk",
Feb 23 02:39:11 localhost elastic_shockley[33628]: "vendor": "QEMU"
Feb 23 02:39:11 localhost elastic_shockley[33628]: }
Feb 23 02:39:11 localhost elastic_shockley[33628]: }
Feb 23 02:39:11 localhost elastic_shockley[33628]: ]
Feb 23 02:39:11 localhost systemd[1]: libpod-ffc2a6aaac5598ec107051a7e155abe4e61ddea89eccc966ce6d60b7f5e38f65.scope: Deactivated successfully.
Feb 23 02:39:11 localhost podman[33613]: 2026-02-23 07:39:11.819694418 +0000 UTC m=+1.071727104 container died ffc2a6aaac5598ec107051a7e155abe4e61ddea89eccc966ce6d60b7f5e38f65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_shockley, release=1770267347, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, description=Red Hat Ceph Storage 7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public) Feb 23 02:39:11 localhost systemd[1]: var-lib-containers-storage-overlay-f03924f67bdd38e65980ea9372b0d1354c61fc0f1179271d75e127f60504499d-merged.mount: Deactivated successfully. Feb 23 02:39:11 localhost podman[35101]: 2026-02-23 07:39:11.915645035 +0000 UTC m=+0.082830672 container remove ffc2a6aaac5598ec107051a7e155abe4e61ddea89eccc966ce6d60b7f5e38f65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_shockley, distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.buildah.version=1.42.2, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347) Feb 23 02:39:11 localhost systemd[1]: libpod-conmon-ffc2a6aaac5598ec107051a7e155abe4e61ddea89eccc966ce6d60b7f5e38f65.scope: Deactivated successfully. 
Feb 23 02:39:12 localhost ceph-osd[32652]: osd.3 pg_epoch: 18 pg[1.0( empty local-lis/les=17/18 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=17) [3,5,4] r=0 lpr=17 pi=[15,17)/0 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:39:21 localhost podman[35227]: 2026-02-23 07:39:21.02134478 +0000 UTC m=+0.097637679 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, io.openshift.expose-services=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, vcs-type=git, build-date=2026-02-09T10:25:24Z, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_BRANCH=main, RELEASE=main, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 23 02:39:21 localhost podman[35227]: 2026-02-23 07:39:21.138166158 +0000 UTC m=+0.214459057 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, 
GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, distribution-scope=public, io.openshift.expose-services=, version=7, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Feb 23 02:39:27 localhost sshd[35304]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:40:19 localhost sshd[35306]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:40:22 localhost systemd[1]: tmp-crun.EDcpGO.mount: Deactivated successfully. 
Feb 23 02:40:22 localhost podman[35402]: 2026-02-23 07:40:22.940532039 +0000 UTC m=+0.087069388 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, vcs-type=git, version=7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_CLEAN=True, ceph=True, io.openshift.tags=rhceph ceph, name=rhceph) Feb 23 02:40:23 localhost podman[35402]: 2026-02-23 07:40:23.051962061 +0000 UTC m=+0.198499490 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, io.openshift.expose-services=, RELEASE=main, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, CEPH_POINT_RELEASE=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, com.redhat.component=rhceph-container, ceph=True, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, distribution-scope=public) Feb 23 02:40:27 localhost systemd[1]: session-13.scope: Deactivated successfully. Feb 23 02:40:27 localhost systemd[1]: session-13.scope: Consumed 21.530s CPU time. Feb 23 02:40:27 localhost systemd-logind[759]: Session 13 logged out. Waiting for processes to exit. Feb 23 02:40:27 localhost systemd-logind[759]: Removed session 13. Feb 23 02:40:48 localhost sshd[35546]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:41:11 localhost sshd[35548]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:42:06 localhost sshd[35627]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:42:58 localhost sshd[35705]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:43:05 localhost systemd[26368]: Created slice User Background Tasks Slice. Feb 23 02:43:05 localhost systemd[26368]: Starting Cleanup of User's Temporary Files and Directories... Feb 23 02:43:05 localhost systemd[26368]: Finished Cleanup of User's Temporary Files and Directories. 
Feb 23 02:43:52 localhost sshd[35786]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:43:55 localhost sshd[35788]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:43:55 localhost systemd-logind[759]: New session 27 of user zuul. Feb 23 02:43:55 localhost systemd[1]: Started Session 27 of User zuul. Feb 23 02:43:55 localhost python3[35836]: ansible-ansible.legacy.ping Invoked with data=pong Feb 23 02:43:57 localhost python3[35881]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 02:43:57 localhost python3[35901]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005626465.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Feb 23 02:43:58 localhost python3[35957]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:43:58 localhost python3[36000]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771832638.0657609-66850-177662124682002/source _original_basename=tmpymoblkwm follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None 
setype=None attributes=None Feb 23 02:43:59 localhost python3[36030]: ansible-file Invoked with path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:43:59 localhost python3[36046]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:43:59 localhost python3[36062]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:44:00 localhost python3[36078]: ansible-lineinfile Invoked with path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCaFiAv9bTisS17GN1FZ7h/VJaLu2YTZdiVc9K45WqX/JhZ8Pwx1BXqBJEGlK+qmwEqEak4GGxIE3mEhiQwiY15E3nfJntSoOu8LhTWltcLr5Uy2mx8nXUZwB3QoNiY4y5uXCriTu6HLCXbdwhnWWT8PgX7V31GosHI0JQEpo4ixoShAJC2sFXL7wd3dTaZBo73qVrUhmekv/2GJo179k6wGblVjOwgB9nkfcuo0acLUaot1Uhc7ZrZM3Nfa7bjrW7OigrLvtNra7bBsjfeTgu6vOxxy1DcTD1xBKab631zTIIugiPViMTGrxgsBpyOc4tJpl1grZCJxm8mBDN25oK2jvP/NwUcC3C9ASWEr9U4QiAOTYN03OAyLGbhq48W3SYHwIi8/awDfJapvA+5rlCq/Xb+Fi/KrAaPxVoEehqWLBzCv0u/ZLZarmOih5rcgNTLMQ3l1/nVNtx+VP3eLtqA5a71JqntFfS80adH/3Px+Wen0lIixRqguGNzrD8ZGcU= zuul-build-sshkey#012 regexp=Generated by TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:44:01 localhost python3[36092]: ansible-ping Invoked with data=pong Feb 23 02:44:12 localhost sshd[36093]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:44:12 localhost systemd[1]: Created slice User Slice of UID 1003. Feb 23 02:44:12 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Feb 23 02:44:12 localhost systemd-logind[759]: New session 28 of user tripleo-admin. Feb 23 02:44:12 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Feb 23 02:44:12 localhost systemd[1]: Starting User Manager for UID 1003... Feb 23 02:44:12 localhost systemd[36097]: Queued start job for default target Main User Target. Feb 23 02:44:12 localhost systemd[36097]: Created slice User Application Slice. Feb 23 02:44:12 localhost systemd[36097]: Started Mark boot as successful after the user session has run 2 minutes. Feb 23 02:44:12 localhost systemd[36097]: Started Daily Cleanup of User's Temporary Directories. Feb 23 02:44:12 localhost systemd[36097]: Reached target Paths. Feb 23 02:44:12 localhost systemd[36097]: Reached target Timers. 
Feb 23 02:44:12 localhost systemd[36097]: Starting D-Bus User Message Bus Socket... Feb 23 02:44:12 localhost systemd[36097]: Starting Create User's Volatile Files and Directories... Feb 23 02:44:12 localhost systemd[36097]: Finished Create User's Volatile Files and Directories. Feb 23 02:44:12 localhost systemd[36097]: Listening on D-Bus User Message Bus Socket. Feb 23 02:44:12 localhost systemd[36097]: Reached target Sockets. Feb 23 02:44:12 localhost systemd[36097]: Reached target Basic System. Feb 23 02:44:12 localhost systemd[36097]: Reached target Main User Target. Feb 23 02:44:12 localhost systemd[36097]: Startup finished in 117ms. Feb 23 02:44:12 localhost systemd[1]: Started User Manager for UID 1003. Feb 23 02:44:12 localhost systemd[1]: Started Session 28 of User tripleo-admin. Feb 23 02:44:13 localhost python3[36157]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d Feb 23 02:44:18 localhost python3[36177]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config Feb 23 02:44:19 localhost python3[36193]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None Feb 23 02:44:19 localhost python3[36241]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.e990y00qtmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:44:20 localhost python3[36271]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.e990y00qtmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! 
marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:44:21 localhost python3[36287]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.e990y00qtmphosts insertbefore=BOF block=172.17.0.106 np0005626463.localdomain np0005626463#012172.18.0.106 np0005626463.storage.localdomain np0005626463.storage#012172.20.0.106 np0005626463.storagemgmt.localdomain np0005626463.storagemgmt#012172.17.0.106 np0005626463.internalapi.localdomain np0005626463.internalapi#012172.19.0.106 np0005626463.tenant.localdomain np0005626463.tenant#012192.168.122.106 np0005626463.ctlplane.localdomain np0005626463.ctlplane#012172.17.0.107 np0005626465.localdomain np0005626465#012172.18.0.107 np0005626465.storage.localdomain np0005626465.storage#012172.20.0.107 np0005626465.storagemgmt.localdomain np0005626465.storagemgmt#012172.17.0.107 np0005626465.internalapi.localdomain np0005626465.internalapi#012172.19.0.107 np0005626465.tenant.localdomain np0005626465.tenant#012192.168.122.107 np0005626465.ctlplane.localdomain np0005626465.ctlplane#012172.17.0.108 np0005626466.localdomain np0005626466#012172.18.0.108 np0005626466.storage.localdomain np0005626466.storage#012172.20.0.108 np0005626466.storagemgmt.localdomain np0005626466.storagemgmt#012172.17.0.108 np0005626466.internalapi.localdomain np0005626466.internalapi#012172.19.0.108 np0005626466.tenant.localdomain np0005626466.tenant#012192.168.122.108 np0005626466.ctlplane.localdomain np0005626466.ctlplane#012172.17.0.103 np0005626459.localdomain np0005626459#012172.18.0.103 np0005626459.storage.localdomain np0005626459.storage#012172.20.0.103 np0005626459.storagemgmt.localdomain np0005626459.storagemgmt#012172.17.0.103 np0005626459.internalapi.localdomain np0005626459.internalapi#012172.19.0.103 np0005626459.tenant.localdomain np0005626459.tenant#012192.168.122.103 
np0005626459.ctlplane.localdomain np0005626459.ctlplane#012172.17.0.104 np0005626460.localdomain np0005626460#012172.18.0.104 np0005626460.storage.localdomain np0005626460.storage#012172.20.0.104 np0005626460.storagemgmt.localdomain np0005626460.storagemgmt#012172.17.0.104 np0005626460.internalapi.localdomain np0005626460.internalapi#012172.19.0.104 np0005626460.tenant.localdomain np0005626460.tenant#012192.168.122.104 np0005626460.ctlplane.localdomain np0005626460.ctlplane#012172.17.0.105 np0005626461.localdomain np0005626461#012172.18.0.105 np0005626461.storage.localdomain np0005626461.storage#012172.20.0.105 np0005626461.storagemgmt.localdomain np0005626461.storagemgmt#012172.17.0.105 np0005626461.internalapi.localdomain np0005626461.internalapi#012172.19.0.105 np0005626461.tenant.localdomain np0005626461.tenant#012192.168.122.105 np0005626461.ctlplane.localdomain np0005626461.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012192.168.122.99 overcloud.ctlplane.localdomain#012172.18.0.134 overcloud.storage.localdomain#012172.20.0.172 overcloud.storagemgmt.localdomain#012172.17.0.129 overcloud.internalapi.localdomain#012172.21.0.176 overcloud.localdomain#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:44:21 localhost python3[36303]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.e990y00qtmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:44:22 localhost python3[36320]: ansible-file Invoked with path=/tmp/ansible.e990y00qtmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:44:23 localhost python3[36336]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:44:23 localhost python3[36353]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 23 02:44:28 localhost python3[36372]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:44:29 localhost python3[36389]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] 
download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 23 02:44:43 localhost sshd[36652]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:45:33 localhost sshd[37605]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:45:35 localhost kernel: SELinux: Converting 2700 SID table entries... Feb 23 02:45:35 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 23 02:45:35 localhost kernel: SELinux: policy capability open_perms=1 Feb 23 02:45:35 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 23 02:45:35 localhost kernel: SELinux: policy capability always_check_network=0 Feb 23 02:45:35 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 23 02:45:35 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 23 02:45:35 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 23 02:45:35 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=6 res=1 Feb 23 02:45:36 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 23 02:45:36 localhost systemd[1]: Starting man-db-cache-update.service... Feb 23 02:45:36 localhost systemd[1]: Reloading. Feb 23 02:45:36 localhost systemd-sysv-generator[37807]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:45:36 localhost systemd-rc-local-generator[37802]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 23 02:45:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:45:36 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 23 02:45:36 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 23 02:45:36 localhost systemd[1]: Finished man-db-cache-update.service. Feb 23 02:45:36 localhost systemd[1]: run-r1bd22bb0b04446ee8d4b31516ac721d4.service: Deactivated successfully. Feb 23 02:45:39 localhost python3[38270]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:45:40 localhost python3[38409]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 02:45:40 localhost systemd[1]: Reloading. Feb 23 02:45:40 localhost systemd-sysv-generator[38442]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:45:40 localhost systemd-rc-local-generator[38436]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:45:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 02:45:42 localhost python3[38463]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:45:42 localhost python3[38479]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:45:43 localhost python3[38496]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Feb 23 02:45:44 localhost python3[38514]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:45:44 localhost python3[38532]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:45:45 localhost python3[38550]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 02:45:45 localhost systemd[1]: Reloading Network Manager... 
Feb 23 02:45:45 localhost NetworkManager[5987]: [1771832745.2225] audit: op="reload" arg="0" pid=38553 uid=0 result="success" Feb 23 02:45:45 localhost NetworkManager[5987]: [1771832745.2235] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf)) Feb 23 02:45:45 localhost NetworkManager[5987]: [1771832745.2236] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged Feb 23 02:45:45 localhost systemd[1]: Reloaded Network Manager. Feb 23 02:45:45 localhost python3[38569]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:45:46 localhost python3[38586]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 02:45:46 localhost python3[38604]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 02:45:46 localhost python3[38620]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:45:47 localhost python3[38636]: ansible-tempfile Invoked with state=file prefix=ansible. 
suffix= path=None Feb 23 02:45:48 localhost python3[38652]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 02:45:49 localhost python3[38668]: ansible-blockinfile Invoked with path=/tmp/ansible.ozpjv3y4 block=[192.168.122.106]*,[np0005626463.ctlplane.localdomain]*,[172.17.0.106]*,[np0005626463.internalapi.localdomain]*,[172.18.0.106]*,[np0005626463.storage.localdomain]*,[172.20.0.106]*,[np0005626463.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005626463.tenant.localdomain]*,[np0005626463.localdomain]*,[np0005626463]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/Caj4zYKd24ctvaRU1Hf9nT058OF4bRnDJ3bHimmkyIL7cccXAxo3lx50wZHWRYBhF5Wes6TmqnUTTK1h5wVdI8f7YtQ9IyMIlfoEiTThF5PgODVuRYq+YGjFIy7MTPyBnB2428aT4dlYqHSuxK2gL6ALlCJHNyeh3RW3jCOG89veDoRmbqHGoaD+xPRnfsdHLoLFNfxT4UJiKRuqsEd5fNtc392ROSa5XM3PPIs3YTypYmpfFHs1B1j+y6oZV8Ha/QXqURpI7/aJmfnDzXLMsLWp4GRpkwzljvNp87S5HL+kJMo79n0Vmh2JdN1orNP/4A2t/TENckHbrZCm+YmPqUqvpHkAZfFfmvP62YZTPq/qOjBMMq6ulGSHd2I4XfE7NNZRKoS3G4HVlBb0ONS13PaWx9rrJCRlF64L1dHSt9zpKrvRbWkSdXA0PwwehrU5/OBo1IY4WsRlWmPeET1/dFWiIr1t9uGjp5vmACAx7rnC6G5qSEhQ3/k1Wa57k/k=#012[192.168.122.107]*,[np0005626465.ctlplane.localdomain]*,[172.17.0.107]*,[np0005626465.internalapi.localdomain]*,[172.18.0.107]*,[np0005626465.storage.localdomain]*,[172.20.0.107]*,[np0005626465.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005626465.tenant.localdomain]*,[np0005626465.localdomain]*,[np0005626465]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCUc8l2oYgfdO7xb3vN27co3Q/sFNU6Rw5wThiW1JMfeIzI90ZzS/L+BpsDsX8q2CW9QOHXrbUormpGsiNnix5j1P29Jc6e9A2mDlipXBrFSUiVZa8UOL03lFSz4nElapkASin2GCdHqy7//gGdQMKRP62VXpdhofb7i/N/gGoV5hSc8Q36KFDbWpvPkhD5H8nZtAfyxM99KwlC62D8jSN+gdoRtMRFPQTtyvyskyrgnXGC6xV71WTa6LJ6Meo7tfj4JlvDAWwlD+f9Ruu2ty2aHd2feVVKYvxZ4Z45iSfJnNxRFJvu1QOY0IU4Fj942leKwr6f0B5ogPFlTI7wRrAB1d9tri1WW2aL1AqYhdZscWi0VArYxLQr7BCVqz8KgFIzjbPoJ7uYnWcuDSiWlC1NJVO7Ij2natf8wZyvSyH+vydamkyoaNwxMnm4qs0/rvjwL49MdrHB79rXjHYJpt/JCBvn9a/rh5KqVH40P00DP35H71zyHPCSu1L20S/wY1k=#012[192.168.122.108]*,[np0005626466.ctlplane.localdomain]*,[172.17.0.108]*,[np0005626466.internalapi.localdomain]*,[172.18.0.108]*,[np0005626466.storage.localdomain]*,[172.20.0.108]*,[np0005626466.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005626466.tenant.localdomain]*,[np0005626466.localdomain]*,[np0005626466]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD4dg5LfbOyIHJudQjfDyIcqYXRqMUeYQIpjQPmNS0Tl7/EpBaYixjqlNovKIWOwkS4E2n4hwPLSTGSihYb5BeUDw32T80RumycS2tjBCSLiuq93xpTOaL2X+7wykkOSfY5xya13qrTg0ROJip0B6PSSF+Rn28SAKLh91euCdRaxWTAMeOSTP9WeCA3d0gsgb4xSMMWZxR4o1BU2bixjAcJHAlKYDc1OGpKkirRoziu9Y4nq2lmbwTg5HiS8STVkqyGHba9k6IC0eF2ZmT6M2thoHatYVtjuUeEE9bSvaAFB8oSI9Np6+OaluvuoKJYjRA3dzEQOi4ft/wwUrJfvyypDAxKBkxo7lCWIDEBK5Zb9BVoo68psz2IVPNGNZJtKXiq58CAqZTR02l/wEq4wB1/hp7ZW+ZMnHQUq1FpGITIA89KZeL9xNlnHqYak58B2GCYgK6OdvWktr4WHN8nbEmwZvaTrijZvnww7h2FQG4BMcSlO6AWKAdjksJZlVDYLJs=#012[192.168.122.103]*,[np0005626459.ctlplane.localdomain]*,[172.17.0.103]*,[np0005626459.internalapi.localdomain]*,[172.18.0.103]*,[np0005626459.storage.localdomain]*,[172.20.0.103]*,[np0005626459.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005626459.tenant.localdomain]*,[np0005626459.localdomain]*,[np0005626459]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC9VsrIfV6Z4AiMtHfmjOpcBCt5sMsGmP0fOSak1UBP4r9lW4eYyoJY7Rtt1LDAcbGqdL3Nh3yc8ub0ekpXF6MA0vKucLb+jtjexv6t21W2grJ+ucwsvDhTDhDXmOUwD5G7A9Zj2WDqt/DN4DxeEqvQ6v1dSQaG+17BVPvM7mhgd5CSYOdUphCC81TPZgj3xyK31Q89biIS6pCBSKnsyN7qcU38bFGvRN0sTFaFt9KrIUfJJdcAZudw5Q/R775pmaaeHTSVPL05gE7dyz8RicEpenh6X0aZCOVt0+4VBnfXXSIL9QIwjrarPPKRdtmQY7dZ3dVNI1ZWA5YOl0y6R3fmxaRV5y1ZkDW6vG0463hYjKaAVqILAAPZGzhuzL7/1zxIv0guUB58tOUrCkkPIRzd6NQLL2j8L7RLIj3bZjG2xf0WiierxPsCEhl3wmdIVRUReE6jYalNGlscGUr1JWproKoaQqfck0OWhGy7jCCe8Gd8a/pr7jtg+X3bEMQ3HAc=#012[192.168.122.104]*,[np0005626460.ctlplane.localdomain]*,[172.17.0.104]*,[np0005626460.internalapi.localdomain]*,[172.18.0.104]*,[np0005626460.storage.localdomain]*,[172.20.0.104]*,[np0005626460.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005626460.tenant.localdomain]*,[np0005626460.localdomain]*,[np0005626460]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCeQmwl5IUCA7h6xphf+o3WARi0Xlj+0K08ltN/FCX7iF0EALCfDqtKOHz7wv5gS04Zx4aeNfcVHv9bHLRJxTPzliSNVutqA7vdFa0R/kRMdNzkqSOCuJ64sQ8GwSOHSrcFy7qC87BuP6xB9atSBjpAEB4NZOuXbvmSN/dCa/nNpUWoWNNg3eR5AalrExCptFYZ4E7YWvJ6HdZpr1QhcAJW0V1y4+u4FfzxHT2SQfGmua4TFHH1lUMiMrgAoELLe+pYdnWooEhRlkPulWy/wOyNz7aCCDP462XBhCc0CmiBDRwMBaJISck1pJCOIksvu8TYa6Fp8aayZqJvbUJYl5C1Z/o+zgHMTjeec0Th5GIuw9XUJkkx8TT5Fh7aWJvX9BbHlMaJjAqc+G/wiIImvKlsuIsovU6TH0P/XiysoWXeUWM7JqR8Y/05+yELy+xAMKT7PfEXE1fWOlGcCJsarLYGhh/7Jypwfh8Y/wOtYdKOGODxDnzq2f2VySsEiAf0EL0=#012[192.168.122.105]*,[np0005626461.ctlplane.localdomain]*,[172.17.0.105]*,[np0005626461.internalapi.localdomain]*,[172.18.0.105]*,[np0005626461.storage.localdomain]*,[172.20.0.105]*,[np0005626461.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005626461.tenant.localdomain]*,[np0005626461.localdomain]*,[np0005626461]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDDBCzU24t9gA5R+exm4rHJ2VytHuq8uUoKuu6SZ07dskKR77n7TwlsZhsDjpzwsddHd+lvsfvOVmolxjJsCmq7LJRMGA/mczHXsGGb43YPZPKsiJ6KMPDORy5/ihhnqixBYVmBGtdPu/Hh/udGnymZgR/RYGltDDHoCfGGiEcHJSIuf/Bv2Uv4xFnxFjDrWQFrkJ5Grq1xC7cGXgC3gAiTCjGHkG9rb/oyTUjjM8LaaRYIjeoDQZu1/8y5pl6cnhW21VTA+u55SkSimb/g5oOuSmrv899iHFwb54uLINXvA4aTtduUnxNQBVRyFvWa3yCZXVJeYlcVP8Q9tljn9anN1aISnS311Jmay6zUY927bxnzrpkwaV7Ggwtvi6vlVy84ZvOJ/IJ2boDiMujh1ZpT3bxXG3Oy0EjfBVbpkS6r2MbGTPj/xWnosJ6JNVbb9LW7Ftfi3/NFfAb7PpTgY036DA8LYoYIfqxVJUhlo5fJjqqOLa/zbvZVwrFCG+Zm160=#012 create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:45:49 localhost python3[38684]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.ozpjv3y4' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:45:49 localhost python3[38702]: ansible-file Invoked with path=/tmp/ansible.ozpjv3y4 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:45:50 localhost python3[38718]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 02:45:50 localhost 
python3[38734]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:45:51 localhost python3[38752]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:45:51 localhost python3[38771]: ansible-community.general.cloud_init_data_facts Invoked with filter=status Feb 23 02:45:54 localhost python3[38908]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:45:54 localhost python3[38925]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 23 02:45:57 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload. Feb 23 02:45:57 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload. Feb 23 02:45:58 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 23 02:45:58 localhost systemd[1]: Starting man-db-cache-update.service... 
Feb 23 02:45:58 localhost systemd[1]: Reloading. Feb 23 02:45:58 localhost systemd-rc-local-generator[38974]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:45:58 localhost systemd-sysv-generator[38980]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:45:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:45:58 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 23 02:45:58 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Feb 23 02:45:58 localhost systemd[1]: tuned.service: Deactivated successfully. Feb 23 02:45:58 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Feb 23 02:45:58 localhost systemd[1]: tuned.service: Consumed 1.810s CPU time. Feb 23 02:45:58 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Feb 23 02:45:58 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 23 02:45:58 localhost systemd[1]: Finished man-db-cache-update.service. Feb 23 02:45:58 localhost systemd[1]: run-r90a21e0e6da9429caebdc3a29b179d0a.service: Deactivated successfully. Feb 23 02:45:59 localhost systemd[1]: Started Dynamic System Tuning Daemon. Feb 23 02:45:59 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 23 02:45:59 localhost systemd[1]: Starting man-db-cache-update.service... Feb 23 02:46:00 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 23 02:46:00 localhost systemd[1]: Finished man-db-cache-update.service. Feb 23 02:46:00 localhost systemd[1]: run-r56fea95240084a11a89b18c64ccb64c7.service: Deactivated successfully. 
Feb 23 02:46:01 localhost python3[39362]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 02:46:01 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 23 02:46:01 localhost systemd[1]: tuned.service: Deactivated successfully.
Feb 23 02:46:01 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 23 02:46:01 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 23 02:46:02 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Feb 23 02:46:02 localhost python3[39558]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:46:03 localhost python3[39575]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Feb 23 02:46:04 localhost python3[39591]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 02:46:04 localhost python3[39607]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:46:07 localhost python3[39627]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:46:07 localhost python3[39644]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 02:46:10 localhost python3[39660]: ansible-replace Invoked with regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:46:15 localhost python3[39676]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:46:15 localhost python3[39724]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:46:16 localhost python3[39769]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832775.314773-71387-101300895181296/source _original_basename=tmpzkuh6m69 follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:46:16 localhost python3[39799]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:46:17 localhost python3[39847]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:46:17 localhost python3[39890]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832776.9564304-71481-10353463919038/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=cb4e2d65c3f4c3faf38650c4c339d73dfcec347e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:46:18 localhost python3[39952]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:46:18 localhost python3[39995]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832777.8551245-71539-163814469964722/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=e3816c2e211db94b1efb9354b78e4bda87216798 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:46:19 localhost python3[40057]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:46:19 localhost python3[40100]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832778.6464536-71539-147256783845819/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False _original_basename=vip_data.j2 checksum=426c74ff16c690bcb458d5adf7a90df54cf7398a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:46:20 localhost python3[40162]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:46:20 localhost python3[40205]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832779.5422688-71539-1213517913178/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=175c760950d63a47f443f25b58088dba962f090b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:46:20 localhost python3[40267]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:46:21 localhost python3[40310]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832780.5754845-71539-50171020513164/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:46:21 localhost python3[40372]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:46:22 localhost python3[40415]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832781.5070558-71539-199896210993025/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=253fa85d3d866720b229521714157ddd9ccdc064 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:46:22 localhost sshd[40459]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:46:22 localhost python3[40479]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:46:23 localhost python3[40522]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832782.3953588-71539-255089350287985/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:46:23 localhost python3[40584]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:46:24 localhost python3[40627]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832783.2938993-71539-202407057362035/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False _original_basename=service_configs.j2 checksum=4e4e677ff4d1886f9c2ad18567185be59ce1ed84 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:46:24 localhost python3[40689]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:46:24 localhost python3[40732]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832784.1541827-71539-252565418251476/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:46:25 localhost python3[40794]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:46:25 localhost python3[40837]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832784.9955587-71539-147547869525965/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:46:26 localhost python3[40899]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:46:26 localhost python3[40942]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832785.8757956-71539-127938808544727/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=95ce3497cf46725d61795ae612b41f90ffd1fce3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:46:27 localhost python3[40972]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 02:46:28 localhost python3[41020]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:46:28 localhost python3[41063]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832787.7918928-72350-219818256816271/source _original_basename=tmptu7bsgz2 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:46:33 localhost python3[41093]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 23 02:46:33 localhost python3[41154]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:46:38 localhost python3[41215]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:46:43 localhost python3[41264]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:46:44 localhost python3[41287]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:46:48 localhost python3[41304]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:46:49 localhost python3[41327]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:46:53 localhost python3[41344]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:46:58 localhost python3[41361]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:46:58 localhost python3[41384]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:47:02 localhost python3[41401]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:47:05 localhost systemd[36097]: Starting Mark boot as successful...
Feb 23 02:47:05 localhost systemd[36097]: Finished Mark boot as successful.
Feb 23 02:47:07 localhost python3[41419]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:47:07 localhost python3[41442]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:47:12 localhost python3[41459]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:47:12 localhost sshd[41461]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:47:16 localhost python3[41478]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:47:17 localhost python3[41501]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:47:21 localhost python3[41518]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:47:27 localhost python3[41535]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:28 localhost python3[41583]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:28 localhost python3[41601]: ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmpyzd8j94g recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:28 localhost python3[41631]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:29 localhost python3[41679]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:29 localhost python3[41697]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:30 localhost python3[41759]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:30 localhost python3[41777]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:31 localhost python3[41839]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:31 localhost python3[41857]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:31 localhost python3[41919]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:32 localhost python3[41937]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:32 localhost python3[41999]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:32 localhost python3[42017]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:33 localhost python3[42079]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:33 localhost python3[42097]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:34 localhost python3[42159]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:34 localhost python3[42177]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:34 localhost python3[42239]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:35 localhost python3[42257]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:35 localhost python3[42319]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:35 localhost python3[42337]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:36 localhost python3[42399]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:36 localhost python3[42417]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:37 localhost python3[42479]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:37 localhost python3[42497]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:38 localhost python3[42527]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 02:47:38 localhost python3[42575]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:39 localhost python3[42623]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmp5gt6mhcw recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:42 localhost python3[42699]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 23 02:47:47 localhost python3[42717]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 02:47:47 localhost python3[42735]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 02:47:48 localhost python3[42753]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 02:47:48 localhost systemd[1]: Reloading.
Feb 23 02:47:48 localhost systemd-rc-local-generator[42780]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 02:47:48 localhost systemd-sysv-generator[42783]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 02:47:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 02:47:48 localhost systemd[1]: Starting Netfilter Tables...
Feb 23 02:47:48 localhost systemd[1]: Finished Netfilter Tables.
Feb 23 02:47:49 localhost python3[42843]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:49 localhost python3[42886]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832869.2413123-75132-151290493319892/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:50 localhost python3[42916]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:47:50 localhost python3[42934]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:47:51 localhost python3[42983]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:51 localhost python3[43026]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832871.00495-75243-30521568084467/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:52 localhost python3[43088]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:52 localhost python3[43131]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832871.9445534-75288-99785509366867/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:53 localhost python3[43193]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:53 localhost python3[43236]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832873.0195947-75349-158002574497432/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:54 localhost python3[43298]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:54 localhost python3[43341]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832873.8737261-75405-21921436103042/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:55 localhost python3[43403]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:47:56 localhost python3[43446]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832874.7804658-75683-234724563893428/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:47:56
localhost python3[43476]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:47:57 localhost python3[43541]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/tripleo-chains.nft"#012include "/etc/nftables/tripleo-rules.nft"#012include "/etc/nftables/tripleo-jumps.nft"#012 state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:47:57 localhost python3[43558]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:47:57 localhost python3[43575]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:47:58 localhost python3[43594]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None 
owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:47:58 localhost python3[43610]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:47:59 localhost python3[43626]: ansible-file Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:47:59 localhost python3[43642]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Feb 23 02:48:00 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=7 res=1 Feb 23 02:48:01 localhost python3[43662]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Feb 23 02:48:01 localhost kernel: SELinux: Converting 2704 SID table entries... 
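The nftables tasks logged above follow a check-then-apply pattern: the five generated fragments are concatenated and syntax-checked with `nft -c -f -` before anything touches the live ruleset, then chains are created and the flush/rules/jumps files are applied in one stream. A hedged sketch of that sequence; file names and ordering come from the log, but the one-line rule bodies are hypothetical stand-ins, and the real `nft` calls (root-only) are left as comments so the sketch runs anywhere:

```shell
# Stand-in fragments mirroring the files the log shows under /etc/nftables/.
# The rule contents here are invented for illustration only.
workdir=$(mktemp -d)
printf 'add chain inet filter TRIPLEO_INPUT\n'   > "$workdir/tripleo-chains.nft"
printf 'flush chain inet filter TRIPLEO_INPUT\n' > "$workdir/tripleo-flushes.nft"
printf 'add rule inet filter TRIPLEO_INPUT ct state established accept\n' > "$workdir/tripleo-rules.nft"
printf 'add rule inet filter INPUT jump TRIPLEO_INPUT\n' > "$workdir/tripleo-update-jumps.nft"
printf 'add rule inet filter INPUT jump TRIPLEO_INPUT\n' > "$workdir/tripleo-jumps.nft"

# Step 1 (as logged): syntax-check the whole concatenated ruleset first.
cat "$workdir/tripleo-chains.nft" "$workdir/tripleo-flushes.nft" \
    "$workdir/tripleo-rules.nft" "$workdir/tripleo-update-jumps.nft" \
    "$workdir/tripleo-jumps.nft" > "$workdir/combined.nft"
# nft -c -f "$workdir/combined.nft"    # check only; applies nothing

# Step 2 (as logged): create chains, then apply flush + rules + jumps
# in a single stream so the swap is effectively atomic.
# nft -f /etc/nftables/tripleo-chains.nft
# cat tripleo-flushes.nft tripleo-rules.nft tripleo-update-jumps.nft | nft -f -
cat "$workdir/combined.nft"
```

For persistence across reboots, the log's `blockinfile` task then adds `include` lines for these fragments to `/etc/sysconfig/nftables.conf`, itself validated with `nft -c -f %s` before the edit is committed.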
Feb 23 02:48:01 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 23 02:48:01 localhost kernel: SELinux: policy capability open_perms=1 Feb 23 02:48:01 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 23 02:48:01 localhost kernel: SELinux: policy capability always_check_network=0 Feb 23 02:48:01 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 23 02:48:01 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 23 02:48:01 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 23 02:48:02 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=8 res=1 Feb 23 02:48:02 localhost python3[43683]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Feb 23 02:48:02 localhost kernel: SELinux: Converting 2704 SID table entries... Feb 23 02:48:02 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 23 02:48:02 localhost kernel: SELinux: policy capability open_perms=1 Feb 23 02:48:02 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 23 02:48:02 localhost kernel: SELinux: policy capability always_check_network=0 Feb 23 02:48:02 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 23 02:48:02 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 23 02:48:02 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 23 02:48:03 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=9 res=1 Feb 23 02:48:03 localhost python3[43704]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Feb 23 02:48:04 localhost kernel: SELinux: Converting 2704 SID table entries... 
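The three `community.general.sefcontext` tasks above persist SELinux file-context rules (each one triggering the policy reload visible in the kernel messages). Expressed with plain CLI tools, the equivalent would be `semanage fcontext` followed by a relabel. Only the command strings are assembled here, since running them needs root on an SELinux-enabled host, so this sketch is safe to execute anywhere:

```shell
# Build the semanage equivalents of the three sefcontext tasks in the log.
# Targets and the container_file_t type are taken verbatim from the log.
for target in '/etc/iscsi(/.*)?' '/etc/target(/.*)?' '/var/lib/iscsi(/.*)?'; do
    printf "semanage fcontext -a -t container_file_t '%s'\n" "$target"
done > /tmp/sefcontext-cmds.txt
cat /tmp/sefcontext-cmds.txt
# On the live node you would then relabel the paths, e.g.:
# restorecon -Rv /etc/iscsi /etc/target /var/lib/iscsi
```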
Feb 23 02:48:04 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 23 02:48:04 localhost kernel: SELinux: policy capability open_perms=1 Feb 23 02:48:04 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 23 02:48:04 localhost kernel: SELinux: policy capability always_check_network=0 Feb 23 02:48:04 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 23 02:48:04 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 23 02:48:04 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 23 02:48:04 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=10 res=1 Feb 23 02:48:04 localhost python3[43725]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:04 localhost sshd[43726]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:48:05 localhost python3[43743]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:05 localhost python3[43759]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None 
seuser=None serole=None selevel=None attributes=None Feb 23 02:48:06 localhost python3[43775]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 02:48:06 localhost python3[43791]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:48:07 localhost python3[43808]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 23 02:48:11 localhost python3[43825]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:11 localhost python3[43873]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:48:11 localhost python3[43916]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832891.2797737-76427-131644057509177/source 
dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:12 localhost python3[43946]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 02:48:12 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 23 02:48:12 localhost systemd[1]: Stopped Load Kernel Modules. Feb 23 02:48:12 localhost systemd[1]: Stopping Load Kernel Modules... Feb 23 02:48:12 localhost systemd[1]: Starting Load Kernel Modules... Feb 23 02:48:12 localhost kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 23 02:48:12 localhost kernel: Bridge firewalling registered Feb 23 02:48:12 localhost systemd-modules-load[43949]: Inserted module 'br_netfilter' Feb 23 02:48:12 localhost systemd-modules-load[43949]: Module 'msr' is built in Feb 23 02:48:12 localhost systemd[1]: Finished Load Kernel Modules. 
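The contents of `/etc/modules-load.d/99-tripleo.conf` are not shown in the log, but the loader output that follows ("Inserted module 'br_netfilter'", "Module 'msr' is built in") implies it names at least those two modules. A plausible reconstruction of the file format, written to a temp dir rather than `/etc` so it runs without privileges:

```shell
# Hypothetical reconstruction of 99-tripleo.conf; modules-load.d(5) files
# list one module name per line, with '#' comments.
conf_dir=$(mktemp -d)
cat > "$conf_dir/99-tripleo.conf" <<'EOF'
# Kernel modules for TripleO, loaded by systemd-modules-load at boot
br_netfilter
msr
EOF
grep -v '^#' "$conf_dir/99-tripleo.conf"
# Applied on the real host, as the log shows, with:
# systemctl restart systemd-modules-load.service
```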
Feb 23 02:48:13 localhost python3[44000]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:48:13 localhost python3[44043]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832892.7548223-76468-148160844696931/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:14 localhost python3[44073]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:14 localhost python3[44090]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:14 localhost python3[44108]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:16 localhost python3[44126]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:16 localhost python3[44143]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:16 localhost python3[44160]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True 
state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:16 localhost python3[44177]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:17 localhost python3[44195]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:17 localhost python3[44213]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:17 localhost python3[44231]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:18 localhost python3[44249]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:18 localhost python3[44267]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:18 localhost python3[44285]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:19 localhost python3[44303]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:19 localhost python3[44320]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present 
sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:19 localhost python3[44337]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:20 localhost python3[44354]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:20 localhost python3[44371]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 23 02:48:20 localhost python3[44389]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 02:48:20 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 23 02:48:20 localhost systemd[1]: Stopped Apply Kernel Variables. Feb 23 02:48:20 localhost systemd[1]: Stopping Apply Kernel Variables... Feb 23 02:48:20 localhost systemd[1]: Starting Apply Kernel Variables... Feb 23 02:48:20 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 23 02:48:20 localhost systemd[1]: Finished Apply Kernel Variables. 
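Collecting the eighteen `ansible-sysctl` tasks above, `/etc/sysctl.d/99-tripleo.conf` ends up carrying the following key/value pairs (keys and values are taken verbatim from the log; the comment line and ordering are illustrative). The restart of `systemd-sysctl.service` that closes the sequence is what applies them:

```ini
# /etc/sysctl.d/99-tripleo.conf (reconstructed from the logged sysctl tasks)
fs.aio-max-nr = 1048576
fs.inotify.max_user_instances = 1024
kernel.pid_max = 1048576
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_forward = 1
net.ipv4.ip_local_reserved_ports = 35357,49000-49001
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.ip_nonlocal_bind = 1
```

Note that the `net.bridge.*` keys only resolve once `br_netfilter` is loaded, which is why the modules-load step precedes this one.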
Feb 23 02:48:21 localhost python3[44410]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:21 localhost python3[44426]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:21 localhost python3[44442]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:22 localhost python3[44458]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 02:48:22 localhost python3[44474]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:22 localhost 
python3[44490]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:23 localhost python3[44506]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:23 localhost python3[44522]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:23 localhost python3[44538]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:48:24 localhost python3[44586]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:48:24 localhost python3[44629]: 
ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832903.7472892-76859-135987581002005/source _original_basename=tmpippzwn1y follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:48:24 localhost python3[44659]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:48:26 localhost python3[44676]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:48:26 localhost python3[44724]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:48:27 localhost python3[44767]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832906.604459-77285-38837986601970/source _original_basename=tmpi8h01yr5 follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:48:27 localhost 
python3[44797]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:28 localhost python3[44813]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:28 localhost python3[44829]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:28 localhost python3[44845]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:29 localhost python3[44861]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:29 localhost python3[44877]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:48:29 localhost python3[44893]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:30 localhost python3[44909]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:48:30 localhost python3[44925]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:31 localhost python3[44941]: ansible-group Invoked with gid=107 name=qemu state=present 
system=False local=False non_unique=False Feb 23 02:48:31 localhost python3[44963]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005626465.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Feb 23 02:48:32 localhost python3[44987]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None Feb 23 02:48:32 localhost python3[45003]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:48:33 localhost python3[45052]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:48:33 localhost python3[45095]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832912.7833972-77535-197931990885220/source _original_basename=tmphu68nngw follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False 
force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:48:33 localhost python3[45125]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Feb 23 02:48:34 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=11 res=1 Feb 23 02:48:34 localhost python3[45217]: ansible-file Invoked with path=/etc/crypto-policies/local.d/gnutls-qemu.config state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:48:35 localhost python3[45233]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:35 localhost python3[45249]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False Feb 23 02:48:36 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=12 res=1 Feb 23 02:48:37 localhost python3[45269]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False 
skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 23 02:48:40 localhost python3[45286]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 23 02:48:41 localhost python3[45347]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:48:41 localhost python3[45363]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:48:42 localhost python3[45422]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:48:42 localhost python3[45480]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832921.6149228-77887-189246479428373/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=6259e42dc5b43547cd3a78e5a93c44ff73048322 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:48:43 localhost python3[45593]: ansible-ansible.legacy.stat Invoked with 
path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:48:43 localhost python3[45653]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832922.6262596-77937-166375191509946/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:43 localhost python3[45714]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:44 localhost python3[45730]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:44 localhost python3[45761]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:44 localhost python3[45777]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t 
mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:45 localhost python3[45825]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:48:45 localhost python3[45868]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832925.2740405-78114-10104379625786/source _original_basename=tmppkr19kmy follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:48:46 localhost python3[45898]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:46 localhost python3[45914]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:48:47 localhost python3[45930]: ansible-ansible.legacy.dnf Invoked with 
name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 23 02:48:51 localhost python3[45979]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:48:51 localhost python3[46024]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832930.8096583-78346-93961646057499/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:48:52 localhost python3[46055]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 02:48:53 localhost systemd[1]: Stopping OpenSSH server daemon... Feb 23 02:48:53 localhost systemd[1]: sshd.service: Deactivated successfully. Feb 23 02:48:53 localhost systemd[1]: Stopped OpenSSH server daemon. Feb 23 02:48:53 localhost systemd[1]: sshd.service: Consumed 5.337s CPU time, read 1.9M from disk, written 24.0K to disk. Feb 23 02:48:53 localhost systemd[1]: Stopped target sshd-keygen.target. Feb 23 02:48:53 localhost systemd[1]: Stopping sshd-keygen.target... 
Feb 23 02:48:53 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 23 02:48:53 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 23 02:48:53 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 23 02:48:53 localhost systemd[1]: Reached target sshd-keygen.target. Feb 23 02:48:53 localhost systemd[1]: Starting OpenSSH server daemon... Feb 23 02:48:53 localhost sshd[46059]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:48:53 localhost systemd[1]: Started OpenSSH server daemon. Feb 23 02:48:53 localhost python3[46075]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:48:54 localhost python3[46093]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:48:55 localhost python3[46111]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False 
validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 23 02:48:55 localhost sshd[46113]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:48:58 localhost python3[46162]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:48:59 localhost python3[46180]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:48:59 localhost python3[46210]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 02:49:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 02:49:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3260 writes, 16K keys, 3260 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 3260 writes, 146 syncs, 22.33 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3260 writes, 16K keys, 3260 commit groups, 1.0 writes per commit group, ingest: 14.68 MB, 0.02 MB/s#012Interval WAL: 3260 writes, 146 syncs, 22.33 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) 
Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.010 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.010 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.010 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache 
BinnedLRUCache@0x55907f80e2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55907f80e2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 
0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable Feb 23 02:49:00 localhost python3[46260]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:49:00 localhost python3[46278]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:49:01 localhost python3[46308]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 02:49:01 localhost systemd[1]: Reloading. Feb 23 02:49:01 localhost systemd-rc-local-generator[46333]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:49:01 localhost systemd-sysv-generator[46339]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 02:49:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:49:01 localhost systemd[1]: Starting chronyd online sources service... Feb 23 02:49:01 localhost chronyc[46348]: 200 OK Feb 23 02:49:01 localhost systemd[1]: chrony-online.service: Deactivated successfully. Feb 23 02:49:01 localhost systemd[1]: Finished chronyd online sources service. Feb 23 02:49:02 localhost python3[46364]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:49:02 localhost chronyd[26162]: System clock was stepped by 0.000141 seconds Feb 23 02:49:02 localhost python3[46381]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:49:02 localhost python3[46398]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:49:02 localhost chronyd[26162]: System clock was stepped by 0.000000 seconds Feb 23 02:49:03 localhost python3[46415]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:49:03 localhost python3[46432]: ansible-timezone Invoked with name=UTC hwclock=None Feb 23 02:49:03 localhost systemd[1]: Starting Time & Date Service... Feb 23 02:49:03 localhost systemd[1]: Started Time & Date Service. 
Feb 23 02:49:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 02:49:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3392 writes, 16K keys, 3392 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.03 MB/s#012Cumulative WAL: 3392 writes, 200 syncs, 16.96 writes per sync, written: 0.01 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3392 writes, 16K keys, 3392 commit groups, 1.0 writes per commit group, ingest: 15.28 MB, 0.03 MB/s#012Interval WAL: 3392 writes, 200 syncs, 16.96 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 
0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562d239862d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) 
Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562d239862d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 
0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable Feb 23 02:49:04 localhost python3[46452]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:49:05 localhost python3[46469]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:49:05 localhost python3[46486]: ansible-slurp Invoked with 
src=/etc/tuned/active_profile Feb 23 02:49:05 localhost python3[46502]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 02:49:06 localhost python3[46518]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:49:06 localhost python3[46534]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:49:07 localhost python3[46582]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:49:07 localhost python3[46625]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832946.9402058-79414-225929274131872/source _original_basename=tmp9gs7m783 follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:49:08 localhost python3[46687]: ansible-ansible.legacy.stat Invoked with 
path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:49:08 localhost python3[46730]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832948.0594738-79517-83992758462689/source _original_basename=tmp7hcauz91 follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:49:09 localhost python3[46760]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Feb 23 02:49:09 localhost systemd[1]: Reloading. Feb 23 02:49:09 localhost systemd-sysv-generator[46793]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:49:09 localhost systemd-rc-local-generator[46790]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:49:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 02:49:09 localhost python3[46814]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:49:10 localhost python3[46830]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:49:10 localhost python3[46847]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:49:10 localhost systemd[1]: run-netns-ns_temp.mount: Deactivated successfully. 
Feb 23 02:49:11 localhost python3[46864]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:49:11 localhost python3[46880]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:49:11 localhost python3[46928]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:49:12 localhost python3[46971]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832951.6335857-79693-236682960025284/source _original_basename=tmpm730u3cq follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:49:33 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. 
Feb 23 02:49:33 localhost python3[47001]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 23 02:49:34 localhost python3[47019]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None Feb 23 02:49:34 localhost python3[47035]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 23 02:49:34 localhost python3[47051]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:49:35 localhost python3[47067]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:49:35 localhost python3[47083]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Feb 23 02:49:36 localhost kernel: SELinux: Converting 2707 SID table entries... Feb 23 02:49:36 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 23 02:49:36 localhost kernel: SELinux: policy capability open_perms=1 Feb 23 02:49:36 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 23 02:49:36 localhost kernel: SELinux: policy capability always_check_network=0 Feb 23 02:49:36 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 23 02:49:36 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 23 02:49:36 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 23 02:49:36 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=13 res=1 Feb 23 02:49:36 localhost python3[47105]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 02:49:38 localhost python3[47242]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 
'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': {'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': [ Feb 23 02:49:39 localhost rsyslogd[758]: message too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config 
Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ] Feb 23 02:49:39 localhost python3[47258]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 23 02:49:40 localhost python3[47274]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 23 02:49:40 localhost python3[47290]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 'merge': False, 'preserve_properties': True, 'source': 
'/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/scripts', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, '/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': 
'/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': 
'/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': 
True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n -iNONE', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}} Feb 23 02:49:45 localhost python3[47400]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:49:45 localhost python3[47443]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771832985.247028-81142-18673271883221/source _original_basename=tmp14rbsjni follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:49:46 localhost 
python3[47473]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 02:49:47 localhost sshd[47596]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:49:48 localhost python3[47613]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 23 02:49:50 localhost python3[47734]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Feb 23 02:49:52 localhost python3[47750]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 02:49:53 localhost python3[47767]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 23 02:49:57 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload. 
Feb 23 02:49:57 localhost dbus-broker-launch[18488]: Noticed file-system modification, trigger reload. Feb 23 02:49:57 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload. Feb 23 02:49:57 localhost dbus-broker-launch[18488]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored Feb 23 02:49:57 localhost dbus-broker-launch[18488]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored Feb 23 02:49:57 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload. Feb 23 02:49:57 localhost systemd[1]: Reexecuting. Feb 23 02:49:57 localhost systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 23 02:49:57 localhost systemd[1]: Detected virtualization kvm. Feb 23 02:49:57 localhost systemd[1]: Detected architecture x86-64. Feb 23 02:49:57 localhost systemd-rc-local-generator[47820]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:49:57 localhost systemd-sysv-generator[47826]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:49:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:50:05 localhost systemd[36097]: Created slice User Background Tasks Slice. Feb 23 02:50:05 localhost systemd[36097]: Starting Cleanup of User's Temporary Files and Directories... 
Feb 23 02:50:05 localhost systemd[36097]: Finished Cleanup of User's Temporary Files and Directories. Feb 23 02:50:06 localhost kernel: SELinux: Converting 2707 SID table entries... Feb 23 02:50:06 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 23 02:50:06 localhost kernel: SELinux: policy capability open_perms=1 Feb 23 02:50:06 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 23 02:50:06 localhost kernel: SELinux: policy capability always_check_network=0 Feb 23 02:50:06 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 23 02:50:06 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 23 02:50:06 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 23 02:50:06 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload. Feb 23 02:50:06 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=14 res=1 Feb 23 02:50:06 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload. Feb 23 02:50:07 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 23 02:50:07 localhost systemd[1]: Starting man-db-cache-update.service... Feb 23 02:50:07 localhost systemd[1]: Reloading. Feb 23 02:50:07 localhost systemd-rc-local-generator[47935]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:50:07 localhost systemd-sysv-generator[47941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:50:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:50:07 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. 
Feb 23 02:50:07 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Feb 23 02:50:07 localhost systemd-journald[618]: Journal stopped
Feb 23 02:50:07 localhost systemd[1]: Stopping Journal Service...
Feb 23 02:50:07 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 23 02:50:07 localhost systemd-journald[618]: Received SIGTERM from PID 1 (systemd).
Feb 23 02:50:07 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 23 02:50:07 localhost systemd[1]: Stopped Journal Service.
Feb 23 02:50:07 localhost systemd[1]: systemd-journald.service: Consumed 1.858s CPU time.
Feb 23 02:50:07 localhost systemd[1]: Starting Journal Service...
Feb 23 02:50:07 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 23 02:50:07 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 23 02:50:07 localhost systemd[1]: systemd-udevd.service: Consumed 3.121s CPU time.
Feb 23 02:50:07 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 23 02:50:07 localhost systemd-journald[48305]: Journal started
Feb 23 02:50:07 localhost systemd-journald[48305]: Runtime Journal (/run/log/journal/c0212a8b024a111cfc61293864f36c87) is 12.2M, max 314.7M, 302.4M free.
Feb 23 02:50:07 localhost systemd[1]: Started Journal Service.
Feb 23 02:50:07 localhost systemd-journald[48305]: Field hash table of /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation.
Feb 23 02:50:07 localhost systemd-journald[48305]: /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 23 02:50:07 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 23 02:50:07 localhost systemd-udevd[48310]: Using default interface naming scheme 'rhel-9.0'.
Feb 23 02:50:07 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 23 02:50:07 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 23 02:50:07 localhost systemd[1]: Reloading.
Feb 23 02:50:07 localhost systemd-rc-local-generator[48827]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 02:50:07 localhost systemd-sysv-generator[48831]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 02:50:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 02:50:08 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Feb 23 02:50:08 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 23 02:50:08 localhost systemd[1]: Finished man-db-cache-update.service.
Feb 23 02:50:08 localhost systemd[1]: man-db-cache-update.service: Consumed 1.285s CPU time.
Feb 23 02:50:08 localhost systemd[1]: run-refa149bbd02b46ab918ca9861c59a34e.service: Deactivated successfully.
Feb 23 02:50:08 localhost systemd[1]: run-r701ff97dcb054ce697cb80ff63e35269.service: Deactivated successfully.
Feb 23 02:50:09 localhost python3[49249]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False
Feb 23 02:50:10 localhost python3[49268]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 02:50:11 localhost python3[49286]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 23 02:50:11 localhost python3[49286]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json
Feb 23 02:50:11 localhost python3[49286]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false
Feb 23 02:50:20 localhost podman[49298]: 2026-02-23 07:50:11.33728839 +0000 UTC m=+0.030507502 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Feb 23 02:50:20 localhost python3[49286]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 591bb9fb46a70e9f840f28502388406078442df6b6701a3c17990ee75e333673 --format json
Feb 23 02:50:20 localhost python3[49401]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 23 02:50:20 localhost python3[49401]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json
Feb 23 02:50:21 localhost python3[49401]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false
Feb 23 02:50:28 localhost podman[49414]: 2026-02-23 07:50:21.070282837 +0000 UTC m=+0.044194014 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Feb 23 02:50:28 localhost python3[49401]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect d59b33e7fb841c47a47a12b18fb68b11debd968b4596c63f3177ecc7400fb1bc --format json
Feb 23 02:50:28 localhost python3[49516]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 23 02:50:28 localhost python3[49516]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json
Feb 23 02:50:28 localhost python3[49516]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false
Feb 23 02:50:42 localhost sshd[50285]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:50:44 localhost sshd[50287]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:50:45 localhost podman[49531]: 2026-02-23 07:50:28.645282445 +0000 UTC m=+0.041682115 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 23 02:50:45 localhost python3[49516]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 6eddd23e1e6adfbfa713a747123707c02f92ffdbf1913da92f171aba1d6d7856 --format json
Feb 23 02:50:45 localhost python3[50317]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 23 02:50:45 localhost python3[50317]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json
Feb 23 02:50:45 localhost python3[50317]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false
Feb 23 02:50:47 localhost systemd[1]: tmp-crun.rhmfC8.mount: Deactivated successfully.
Feb 23 02:50:47 localhost podman[50464]: 2026-02-23 07:50:47.427592193 +0000 UTC m=+0.070206153 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.expose-services=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public)
Feb 23 02:50:47 localhost podman[50464]: 2026-02-23 07:50:47.52483178 +0000 UTC m=+0.167445790 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1770267347, GIT_CLEAN=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7)
Feb 23 02:50:57 localhost podman[50330]: 2026-02-23 07:50:45.557870129 +0000 UTC m=+0.029071036 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Feb 23 02:50:57 localhost python3[50317]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 2c8610235afe953aa46efb141a5a988799548b22280d65a7e7ab21889422df37 --format json
Feb 23 02:50:57 localhost python3[50648]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 23 02:50:57 localhost python3[50648]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json
Feb 23 02:50:57 localhost python3[50648]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false
Feb 23 02:51:05 localhost podman[50660]: 2026-02-23 07:50:57.78693319 +0000 UTC m=+0.046660648 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Feb 23 02:51:05 localhost python3[50648]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 9ab5aab6d0c3ec80926032b7acf4cec1d4710f1c2daccd17ae4daa64399ec237 --format json
Feb 23 02:51:05 localhost python3[50790]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 23 02:51:05 localhost python3[50790]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json
Feb 23 02:51:05 localhost python3[50790]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false
Feb 23 02:51:10 localhost podman[50802]: 2026-02-23 07:51:05.954878893 +0000 UTC m=+0.044491441 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Feb 23 02:51:10 localhost python3[50790]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 4853142d85dba3766b28d28ae195b26f7242230fe3646e9590a7aee2dc2e0dfa --format json
Feb 23 02:51:10 localhost python3[50881]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 23 02:51:10 localhost python3[50881]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json
Feb 23 02:51:10 localhost python3[50881]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false
Feb 23 02:51:12 localhost podman[50895]: 2026-02-23 07:51:10.765820573 +0000 UTC m=+0.032961667 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Feb 23 02:51:12 localhost python3[50881]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 9ac6ea63c0fb4851145e847f9ced2f20804afc8472907b63a82d5866f5cf608a --format json
Feb 23 02:51:13 localhost python3[50973]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 23 02:51:13 localhost python3[50973]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json
Feb 23 02:51:13 localhost python3[50973]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false
Feb 23 02:51:15 localhost podman[50986]: 2026-02-23 07:51:13.354016882 +0000 UTC m=+0.044867074 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Feb 23 02:51:15 localhost python3[50973]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect ba1a08ea1c1207b471b1f02cee16ff456b8a812662cce16906d16de330a66d63 --format json
Feb 23 02:51:15 localhost python3[51063]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 23 02:51:15 localhost python3[51063]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json
Feb 23 02:51:15 localhost python3[51063]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false
Feb 23 02:51:18 localhost podman[51076]: 2026-02-23 07:51:15.842725636 +0000 UTC m=+0.045836723 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Feb 23 02:51:18 localhost python3[51063]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 8576d3a17e57ea28f29435f132f583320941b5aa7bf0aa02e998b09a094d1fe8 --format json
Feb 23 02:51:18 localhost python3[51154]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 23 02:51:18 localhost python3[51154]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json
Feb 23 02:51:19 localhost python3[51154]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false
Feb 23 02:51:22 localhost podman[51166]: 2026-02-23 07:51:19.117383543 +0000 UTC m=+0.038625611 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Feb 23 02:51:22 localhost python3[51154]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 7fcbf63c0504494c8fcaa07583f909a06486472a0982aeac9554c6fdbeb04c9a --format json
Feb 23 02:51:23 localhost python3[51254]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 23 02:51:23 localhost python3[51254]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json
Feb 23 02:51:23 localhost python3[51254]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false
Feb 23 02:51:25 localhost podman[51266]: 2026-02-23 07:51:23.122087554 +0000 UTC m=+0.042272325 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Feb 23 02:51:25 localhost python3[51254]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 72ddf109f135b64d3116af7b84caaa358dc72e2e60f4c8753fa54fa65b76ba35 --format json
Feb 23 02:51:25 localhost python3[51345]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 02:51:27 localhost ansible-async_wrapper.py[51517]: Invoked with 590950657375 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833086.9075253-83814-158002995374575/AnsiballZ_command.py _
Feb 23 02:51:27 localhost ansible-async_wrapper.py[51520]: Starting module and watcher
Feb 23 02:51:27 localhost ansible-async_wrapper.py[51520]: Start watching 51521 (3600)
Feb 23 02:51:27 localhost ansible-async_wrapper.py[51521]: Start module (51521)
Feb 23 02:51:27 localhost ansible-async_wrapper.py[51517]: Return async_wrapper task started.
Feb 23 02:51:27 localhost python3[51541]: ansible-ansible.legacy.async_status Invoked with jid=590950657375.51517 mode=status _async_dir=/tmp/.ansible_async
Feb 23 02:51:31 localhost puppet-user[51525]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 23 02:51:31 localhost puppet-user[51525]: (file: /etc/puppet/hiera.yaml)
Feb 23 02:51:31 localhost puppet-user[51525]: Warning: Undefined variable '::deploy_config_name';
Feb 23 02:51:31 localhost puppet-user[51525]: (file & line not available)
Feb 23 02:51:31 localhost puppet-user[51525]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 23 02:51:31 localhost puppet-user[51525]: (file & line not available)
Feb 23 02:51:31 localhost puppet-user[51525]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Feb 23 02:51:31 localhost puppet-user[51525]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Feb 23 02:51:31 localhost puppet-user[51525]: Notice: Compiled catalog for np0005626465.localdomain in environment production in 0.12 seconds
Feb 23 02:51:31 localhost puppet-user[51525]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully
Feb 23 02:51:31 localhost puppet-user[51525]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Feb 23 02:51:31 localhost puppet-user[51525]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Feb 23 02:51:31 localhost puppet-user[51525]: Notice: Applied catalog in 0.06 seconds
Feb 23 02:51:31 localhost puppet-user[51525]: Application:
Feb 23 02:51:31 localhost puppet-user[51525]: Initial environment: production
Feb 23 02:51:31 localhost puppet-user[51525]: Converged environment: production
Feb 23 02:51:31 localhost puppet-user[51525]: Run mode: user
Feb 23 02:51:31 localhost puppet-user[51525]: Changes:
Feb 23 02:51:31 localhost puppet-user[51525]: Total: 3
Feb 23 02:51:31 localhost puppet-user[51525]: Events:
Feb 23 02:51:31 localhost puppet-user[51525]: Success: 3
Feb 23 02:51:31 localhost puppet-user[51525]: Total: 3
Feb 23 02:51:31 localhost puppet-user[51525]: Resources:
Feb 23 02:51:31 localhost puppet-user[51525]: Changed: 3
Feb 23 02:51:31 localhost puppet-user[51525]: Out of sync: 3
Feb 23 02:51:31 localhost puppet-user[51525]: Total: 10
Feb 23 02:51:31 localhost puppet-user[51525]: Time:
Feb 23 02:51:31 localhost puppet-user[51525]: Schedule: 0.00
Feb 23 02:51:31 localhost puppet-user[51525]: File: 0.00
Feb 23 02:51:31 localhost puppet-user[51525]: Exec: 0.01
Feb 23 02:51:31 localhost puppet-user[51525]: Augeas: 0.03
Feb 23 02:51:31 localhost puppet-user[51525]: Transaction evaluation: 0.05
Feb 23 02:51:31 localhost puppet-user[51525]: Catalog application: 0.06
Feb 23 02:51:31 localhost puppet-user[51525]: Config retrieval: 0.16
Feb 23 02:51:31 localhost puppet-user[51525]: Last run: 1771833091
Feb 23 02:51:31 localhost puppet-user[51525]: Filebucket: 0.00
Feb 23 02:51:31 localhost puppet-user[51525]: Total: 0.06
Feb 23 02:51:31 localhost puppet-user[51525]: Version:
Feb 23 02:51:31 localhost puppet-user[51525]: Config: 1771833091
Feb 23 02:51:31 localhost puppet-user[51525]: Puppet: 7.10.0
Feb 23 02:51:31 localhost ansible-async_wrapper.py[51521]: Module complete (51521)
Feb 23 02:51:32 localhost ansible-async_wrapper.py[51520]: Done in kid B.
Feb 23 02:51:36 localhost sshd[51653]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:51:38 localhost python3[51670]: ansible-ansible.legacy.async_status Invoked with jid=590950657375.51517 mode=status _async_dir=/tmp/.ansible_async
Feb 23 02:51:38 localhost python3[51686]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 23 02:51:39 localhost python3[51702]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 02:51:39 localhost python3[51750]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:51:40 localhost python3[51793]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833099.423203-84217-26909448170086/source _original_basename=tmps2c2gbjk follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 23 02:51:40 localhost python3[51823]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:51:41 localhost python3[51926]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Feb 23 02:51:42 localhost python3[51945]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 23 02:51:42 localhost python3[51961]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005626465 step=1 update_config_hash_only=False
Feb 23 02:51:43 localhost python3[51977]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:51:43 localhost python3[51993]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True
Feb 23 02:51:44 localhost python3[52009]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Feb 23 02:51:45 localhost python3[52051]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False
Feb 23 02:51:45 localhost podman[52236]: 2026-02-23 07:51:45.730509325 +0000 UTC m=+0.060940145 container create ff8c58555bb547b7c76062b5e01d16b5bbb198a3e8452f62134828965bf5e2b9 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20260112.1, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git)
Feb 23 02:51:45 localhost podman[52216]: 2026-02-23 07:51:45.744898501 +0000 UTC m=+0.089859132 container create b17e768e4d72646614de8ffd9b7281b400eb26338200f624e1da9accdab7fa9a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, distribution-scope=public, vendor=Red Hat, Inc.,
io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, batch=17.1_20260112.1, container_name=container-puppet-crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, vcs-type=git, io.buildah.version=1.41.5) Feb 23 02:51:45 localhost podman[52237]: 2026-02-23 07:51:45.764463092 +0000 UTC m=+0.092332851 container create 0ba329af66f3482f9e751bed319efb80b9fa7e2b8d50be0d27a8497b199e4da0 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, distribution-scope=public, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=container-puppet-iscsid, vcs-type=git, release=1766032510, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, io.buildah.version=1.41.5, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, version=17.1.13) Feb 23 02:51:45 localhost systemd[1]: Started libpod-conmon-b17e768e4d72646614de8ffd9b7281b400eb26338200f624e1da9accdab7fa9a.scope. Feb 23 02:51:45 localhost systemd[1]: Started libpod-conmon-ff8c58555bb547b7c76062b5e01d16b5bbb198a3e8452f62134828965bf5e2b9.scope. Feb 23 02:51:45 localhost systemd[1]: Started libcrun container. Feb 23 02:51:45 localhost systemd[1]: Started libcrun container. 
Feb 23 02:51:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b41f8132954705ed575d67743569b519bd384ba1cd0a7dd6be2a8b88add30f0f/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 23 02:51:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60721aa364e403c543e527da873a8fedfe8e0f7c00041191b7b665e76b6a089a/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 23 02:51:45 localhost podman[52216]: 2026-02-23 07:51:45.795517897 +0000 UTC m=+0.140478528 container init b17e768e4d72646614de8ffd9b7281b400eb26338200f624e1da9accdab7fa9a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=container-puppet-crond, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2026-01-12T22:10:15Z, tcib_managed=true, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.expose-services=, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 02:51:45 localhost podman[52236]: 2026-02-23 07:51:45.703607311 +0000 UTC m=+0.034038141 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Feb 23 02:51:45 localhost systemd[1]: Started libpod-conmon-0ba329af66f3482f9e751bed319efb80b9fa7e2b8d50be0d27a8497b199e4da0.scope. 
Feb 23 02:51:45 localhost podman[52216]: 2026-02-23 07:51:45.707488634 +0000 UTC m=+0.052449275 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Feb 23 02:51:45 localhost podman[52216]: 2026-02-23 07:51:45.80820594 +0000 UTC m=+0.153166571 container start b17e768e4d72646614de8ffd9b7281b400eb26338200f624e1da9accdab7fa9a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, distribution-scope=public, com.redhat.component=openstack-cron-container, vcs-type=git, url=https://www.redhat.com, batch=17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, io.openshift.expose-services=, io.buildah.version=1.41.5, container_name=container-puppet-crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 02:51:45 localhost podman[52216]: 2026-02-23 07:51:45.808395316 +0000 UTC m=+0.153355947 container attach b17e768e4d72646614de8ffd9b7281b400eb26338200f624e1da9accdab7fa9a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_puppet_step1, architecture=x86_64, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, container_name=container-puppet-crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team) Feb 23 02:51:45 localhost podman[52237]: 2026-02-23 07:51:45.709722495 +0000 UTC m=+0.037592274 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Feb 23 
02:51:45 localhost systemd[1]: Started libcrun container. Feb 23 02:51:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f0e2545ff54887bcdef2b1d9264862e245d3a368b2cc9a5a1b6a1cc4de0a6f7/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff) Feb 23 02:51:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f0e2545ff54887bcdef2b1d9264862e245d3a368b2cc9a5a1b6a1cc4de0a6f7/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 23 02:51:45 localhost podman[52274]: 2026-02-23 07:51:45.817560137 +0000 UTC m=+0.097386252 container create a712678446604c3604e6eb135a6be975e1d145b69a91b52cf5e7d8b17d0ccd19 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, container_name=container-puppet-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': 
['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 02:51:45 localhost podman[52256]: 2026-02-23 07:51:45.737524467 +0000 UTC m=+0.046582399 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 02:51:45 localhost systemd[1]: Started libpod-conmon-a712678446604c3604e6eb135a6be975e1d145b69a91b52cf5e7d8b17d0ccd19.scope. Feb 23 02:51:45 localhost systemd[1]: Started libcrun container. 
Feb 23 02:51:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9bf8a2ab62539fad44af6776c5b72ca8a5c1fb07925c54016685be21bc3780/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 23 02:51:45 localhost podman[52274]: 2026-02-23 07:51:45.779745887 +0000 UTC m=+0.059572032 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Feb 23 02:51:46 localhost podman[52237]: 2026-02-23 07:51:46.249771961 +0000 UTC m=+0.577641750 container init 0ba329af66f3482f9e751bed319efb80b9fa7e2b8d50be0d27a8497b199e4da0 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, container_name=container-puppet-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 02:51:46 localhost podman[52237]: 2026-02-23 07:51:46.91318149 +0000 UTC m=+1.241051279 container start 0ba329af66f3482f9e751bed319efb80b9fa7e2b8d50be0d27a8497b199e4da0 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, 
com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_puppet_step1, name=rhosp-rhel9/openstack-iscsid, container_name=container-puppet-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, vcs-type=git, maintainer=OpenStack TripleO Team, release=1766032510, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 02:51:46 localhost podman[52237]: 2026-02-23 07:51:46.913954924 +0000 UTC m=+1.241824773 container attach 0ba329af66f3482f9e751bed319efb80b9fa7e2b8d50be0d27a8497b199e4da0 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=container-puppet-iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 
6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2026-01-12T22:34:43Z) Feb 23 02:51:46 localhost podman[52274]: 2026-02-23 07:51:46.919565432 +0000 UTC m=+1.199391577 container init a712678446604c3604e6eb135a6be975e1d145b69a91b52cf5e7d8b17d0ccd19 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, version=17.1.13, 
distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=container-puppet-collectd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team) Feb 23 02:51:46 localhost podman[52274]: 2026-02-23 07:51:46.932629897 +0000 UTC m=+1.212456022 container start a712678446604c3604e6eb135a6be975e1d145b69a91b52cf5e7d8b17d0ccd19 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.buildah.version=1.41.5, distribution-scope=public, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=container-puppet-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, release=1766032510, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', 
'/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 02:51:46 localhost podman[52274]: 2026-02-23 07:51:46.933224636 +0000 UTC m=+1.213050821 container attach a712678446604c3604e6eb135a6be975e1d145b69a91b52cf5e7d8b17d0ccd19 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, vendor=Red Hat, Inc., container_name=container-puppet-collectd, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.13, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': 
{'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 02:51:46 localhost podman[52256]: 2026-02-23 07:51:46.955237064 +0000 UTC m=+1.264294956 container create 
848f5c92ac3943d7d60442405d4f038f45d61ae3bbf169b7569d9ead5597d116 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, config_id=tripleo_puppet_step1, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, build-date=2026-01-12T23:31:49Z, url=https://www.redhat.com, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, maintainer=OpenStack TripleO Team, container_name=container-puppet-nova_libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc.) Feb 23 02:51:47 localhost systemd[1]: Started libpod-conmon-848f5c92ac3943d7d60442405d4f038f45d61ae3bbf169b7569d9ead5597d116.scope. 
Feb 23 02:51:47 localhost podman[52236]: 2026-02-23 07:51:47.003497265 +0000 UTC m=+1.333928115 container init ff8c58555bb547b7c76062b5e01d16b5bbb198a3e8452f62134828965bf5e2b9 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, release=1766032510, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=container-puppet-metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z) Feb 23 02:51:47 localhost podman[52236]: 2026-02-23 07:51:47.01435549 +0000 UTC m=+1.344786350 container start ff8c58555bb547b7c76062b5e01d16b5bbb198a3e8452f62134828965bf5e2b9 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_puppet_step1, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 02:51:47 localhost podman[52236]: 2026-02-23 07:51:47.014586497 +0000 UTC m=+1.345017357 container attach 
ff8c58555bb547b7c76062b5e01d16b5bbb198a3e8452f62134828965bf5e2b9 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, io.buildah.version=1.41.5, version=17.1.13, build-date=2026-01-12T22:10:14Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=container-puppet-metrics_qdr, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, release=1766032510) Feb 23 02:51:47 localhost systemd[1]: Started libcrun container. Feb 23 02:51:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110642599c4a6478b0a235c15bbf13038f23c44c5a0846f1feed95010fbbddf0/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 23 02:51:47 localhost podman[52256]: 2026-02-23 07:51:47.052458549 +0000 UTC m=+1.361516491 container init 848f5c92ac3943d7d60442405d4f038f45d61ae3bbf169b7569d9ead5597d116 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:31:49Z, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, tcib_managed=true, name=rhosp-rhel9/openstack-nova-libvirt, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1, build-date=2026-01-12T23:31:49Z, 
container_name=container-puppet-nova_libvirt, batch=17.1_20260112.1, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container) Feb 23 02:51:47 localhost podman[52256]: 2026-02-23 07:51:47.068007632 +0000 UTC m=+1.377065524 container start 848f5c92ac3943d7d60442405d4f038f45d61ae3bbf169b7569d9ead5597d116 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=container-puppet-nova_libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2026-01-12T23:31:49Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, 
com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, distribution-scope=public) Feb 23 02:51:47 localhost podman[52256]: 2026-02-23 07:51:47.068285492 +0000 UTC m=+1.377343394 container attach 848f5c92ac3943d7d60442405d4f038f45d61ae3bbf169b7569d9ead5597d116 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, config_id=tripleo_puppet_step1, release=1766032510, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude 
tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, build-date=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:31:49Z, tcib_managed=true, container_name=container-puppet-nova_libvirt, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 23 02:51:47 localhost podman[52135]: 2026-02-23 07:51:45.663421776 +0000 UTC m=+0.087872139 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Feb 23 02:51:48 localhost 
podman[52451]: 2026-02-23 07:51:47.938473582 +0000 UTC m=+0.049288474 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Feb 23 02:51:48 localhost podman[52451]: 2026-02-23 07:51:48.646991833 +0000 UTC m=+0.757806725 container create 40156f33c18a3a7b560639fa31b493debeae6a2393dae3dce7053fe620b00ffe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, container_name=container-puppet-ceilometer, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-central, config_id=tripleo_puppet_step1, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:24Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-central, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:24Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-central-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, architecture=x86_64) Feb 23 02:51:48 localhost systemd[1]: Started libpod-conmon-40156f33c18a3a7b560639fa31b493debeae6a2393dae3dce7053fe620b00ffe.scope. Feb 23 02:51:48 localhost systemd[1]: tmp-crun.1bxsip.mount: Deactivated successfully. Feb 23 02:51:48 localhost systemd[1]: Started libcrun container. 
Feb 23 02:51:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/450eedbe86fcc9f6107761683f84ebd064aaee9b19b84656e1d6d114567f2b2e/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 23 02:51:48 localhost podman[52451]: 2026-02-23 07:51:48.726056571 +0000 UTC m=+0.836871473 container init 40156f33c18a3a7b560639fa31b493debeae6a2393dae3dce7053fe620b00ffe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:24Z, container_name=container-puppet-ceilometer, description=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:24Z, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-central, url=https://www.redhat.com) Feb 23 02:51:48 localhost podman[52451]: 2026-02-23 07:51:48.735906594 +0000 UTC m=+0.846721496 container start 40156f33c18a3a7b560639fa31b493debeae6a2393dae3dce7053fe620b00ffe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, description=Red Hat OpenStack Platform 17.1 ceilometer-central, container_name=container-puppet-ceilometer, distribution-scope=public, url=https://www.redhat.com, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:24Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-central, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-central, build-date=2026-01-12T23:07:24Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20260112.1) Feb 23 02:51:48 localhost podman[52451]: 2026-02-23 07:51:48.73610279 +0000 UTC m=+0.846917692 container attach 40156f33c18a3a7b560639fa31b493debeae6a2393dae3dce7053fe620b00ffe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, release=1766032510, batch=17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-ceilometer-central-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:24Z, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-central, container_name=container-puppet-ceilometer, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, build-date=2026-01-12T23:07:24Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-central) Feb 23 02:51:49 localhost ovs-vsctl[52577]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Feb 23 02:51:49 localhost puppet-user[52366]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Feb 23 02:51:49 localhost puppet-user[52366]: (file: /etc/puppet/hiera.yaml) Feb 23 02:51:49 localhost puppet-user[52366]: Warning: Undefined variable '::deploy_config_name'; Feb 23 02:51:49 localhost puppet-user[52366]: (file & line not available) Feb 23 02:51:49 localhost puppet-user[52366]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 23 02:51:49 localhost puppet-user[52366]: (file & line not available) Feb 23 02:51:49 localhost puppet-user[52366]: Notice: Compiled catalog for np0005626465.localdomain in environment production in 0.07 seconds Feb 23 02:51:49 localhost puppet-user[52381]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 23 02:51:49 localhost puppet-user[52381]: (file: /etc/puppet/hiera.yaml) Feb 23 02:51:49 localhost puppet-user[52381]: Warning: Undefined variable '::deploy_config_name'; Feb 23 02:51:49 localhost puppet-user[52381]: (file & line not available) Feb 23 02:51:49 localhost puppet-user[52366]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0' Feb 23 02:51:49 localhost puppet-user[52364]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Feb 23 02:51:49 localhost puppet-user[52364]: (file: /etc/puppet/hiera.yaml) Feb 23 02:51:49 localhost puppet-user[52364]: Warning: Undefined variable '::deploy_config_name'; Feb 23 02:51:49 localhost puppet-user[52364]: (file & line not available) Feb 23 02:51:49 localhost puppet-user[52366]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created Feb 23 02:51:49 localhost puppet-user[52399]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 23 02:51:49 localhost puppet-user[52399]: (file: /etc/puppet/hiera.yaml) Feb 23 02:51:49 localhost puppet-user[52399]: Warning: Undefined variable '::deploy_config_name'; Feb 23 02:51:49 localhost puppet-user[52399]: (file & line not available) Feb 23 02:51:49 localhost puppet-user[52381]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 23 02:51:49 localhost puppet-user[52381]: (file & line not available) Feb 23 02:51:49 localhost puppet-user[52366]: Notice: Applied catalog in 0.03 seconds Feb 23 02:51:49 localhost puppet-user[52366]: Application: Feb 23 02:51:49 localhost puppet-user[52366]: Initial environment: production Feb 23 02:51:49 localhost puppet-user[52366]: Converged environment: production Feb 23 02:51:49 localhost puppet-user[52366]: Run mode: user Feb 23 02:51:49 localhost puppet-user[52366]: Changes: Feb 23 02:51:49 localhost puppet-user[52366]: Total: 2 Feb 23 02:51:49 localhost puppet-user[52366]: Events: Feb 23 02:51:49 localhost puppet-user[52366]: Success: 2 Feb 23 02:51:49 localhost puppet-user[52366]: Total: 2 Feb 23 02:51:49 localhost puppet-user[52366]: Resources: Feb 23 02:51:49 localhost puppet-user[52366]: Changed: 2 Feb 23 02:51:49 localhost puppet-user[52366]: Out of sync: 2 Feb 23 02:51:49 localhost puppet-user[52366]: Skipped: 7 Feb 23 02:51:49 localhost 
puppet-user[52366]: Total: 9 Feb 23 02:51:49 localhost puppet-user[52366]: Time: Feb 23 02:51:49 localhost puppet-user[52366]: File: 0.01 Feb 23 02:51:49 localhost puppet-user[52366]: Cron: 0.01 Feb 23 02:51:49 localhost puppet-user[52366]: Transaction evaluation: 0.03 Feb 23 02:51:49 localhost puppet-user[52366]: Catalog application: 0.03 Feb 23 02:51:49 localhost puppet-user[52366]: Config retrieval: 0.10 Feb 23 02:51:49 localhost puppet-user[52366]: Last run: 1771833109 Feb 23 02:51:49 localhost puppet-user[52366]: Total: 0.03 Feb 23 02:51:49 localhost puppet-user[52366]: Version: Feb 23 02:51:49 localhost puppet-user[52366]: Config: 1771833109 Feb 23 02:51:49 localhost puppet-user[52366]: Puppet: 7.10.0 Feb 23 02:51:49 localhost puppet-user[52413]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 23 02:51:49 localhost puppet-user[52364]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 23 02:51:49 localhost puppet-user[52364]: (file & line not available) Feb 23 02:51:49 localhost puppet-user[52413]: (file: /etc/puppet/hiera.yaml) Feb 23 02:51:49 localhost puppet-user[52413]: Warning: Undefined variable '::deploy_config_name'; Feb 23 02:51:49 localhost puppet-user[52413]: (file & line not available) Feb 23 02:51:49 localhost puppet-user[52399]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 23 02:51:49 localhost puppet-user[52399]: (file & line not available) Feb 23 02:51:49 localhost puppet-user[52413]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 23 02:51:49 localhost puppet-user[52413]: (file & line not available) Feb 23 02:51:49 localhost puppet-user[52399]: Notice: Accepting previously invalid value for target type 'Integer' Feb 23 02:51:49 localhost puppet-user[52364]: Notice: Compiled catalog for np0005626465.localdomain in environment production in 0.10 seconds Feb 23 02:51:49 localhost puppet-user[52399]: Notice: Compiled catalog for np0005626465.localdomain in environment production in 0.12 seconds Feb 23 02:51:49 localhost puppet-user[52364]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully Feb 23 02:51:49 localhost puppet-user[52364]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created Feb 23 02:51:49 localhost puppet-user[52399]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root' Feb 23 02:51:49 localhost puppet-user[52399]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 'qdrouterd' to 'root' Feb 23 02:51:49 localhost puppet-user[52399]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755' Feb 23 02:51:49 localhost puppet-user[52399]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created Feb 23 02:51:49 localhost puppet-user[52399]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}920c7dcf44b98bb302e60786ecd624dd4b509c2ba43a90812c4ab6f2b0e4f30e' Feb 23 02:51:49 localhost puppet-user[52399]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created Feb 23 02:51:49 localhost puppet-user[52399]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created Feb 23 02:51:49 localhost puppet-user[52399]: Notice: 
Applied catalog in 0.03 seconds Feb 23 02:51:49 localhost puppet-user[52399]: Application: Feb 23 02:51:49 localhost puppet-user[52399]: Initial environment: production Feb 23 02:51:49 localhost puppet-user[52399]: Converged environment: production Feb 23 02:51:49 localhost puppet-user[52399]: Run mode: user Feb 23 02:51:49 localhost puppet-user[52399]: Changes: Feb 23 02:51:49 localhost puppet-user[52399]: Total: 7 Feb 23 02:51:49 localhost puppet-user[52399]: Events: Feb 23 02:51:49 localhost puppet-user[52399]: Success: 7 Feb 23 02:51:49 localhost puppet-user[52399]: Total: 7 Feb 23 02:51:49 localhost puppet-user[52399]: Resources: Feb 23 02:51:49 localhost puppet-user[52399]: Skipped: 13 Feb 23 02:51:49 localhost puppet-user[52399]: Changed: 5 Feb 23 02:51:49 localhost puppet-user[52399]: Out of sync: 5 Feb 23 02:51:49 localhost puppet-user[52399]: Total: 20 Feb 23 02:51:49 localhost puppet-user[52399]: Time: Feb 23 02:51:49 localhost puppet-user[52399]: File: 0.01 Feb 23 02:51:49 localhost puppet-user[52399]: Transaction evaluation: 0.03 Feb 23 02:51:49 localhost puppet-user[52399]: Catalog application: 0.03 Feb 23 02:51:49 localhost puppet-user[52399]: Config retrieval: 0.15 Feb 23 02:51:49 localhost puppet-user[52399]: Last run: 1771833109 Feb 23 02:51:49 localhost puppet-user[52399]: Total: 0.03 Feb 23 02:51:49 localhost puppet-user[52399]: Version: Feb 23 02:51:49 localhost puppet-user[52399]: Config: 1771833109 Feb 23 02:51:49 localhost puppet-user[52399]: Puppet: 7.10.0 Feb 23 02:51:49 localhost puppet-user[52364]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully Feb 23 02:51:49 localhost puppet-user[52413]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \ Feb 23 02:51:49 localhost puppet-user[52413]: in a future release. 
Use nova::cinder::os_region_name instead Feb 23 02:51:49 localhost puppet-user[52413]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \ Feb 23 02:51:49 localhost puppet-user[52413]: in a future release. Use nova::cinder::catalog_info instead Feb 23 02:51:49 localhost puppet-user[52381]: Notice: Compiled catalog for np0005626465.localdomain in environment production in 0.32 seconds Feb 23 02:51:49 localhost systemd[1]: libpod-b17e768e4d72646614de8ffd9b7281b400eb26338200f624e1da9accdab7fa9a.scope: Deactivated successfully. Feb 23 02:51:49 localhost systemd[1]: libpod-b17e768e4d72646614de8ffd9b7281b400eb26338200f624e1da9accdab7fa9a.scope: Consumed 2.062s CPU time. Feb 23 02:51:49 localhost podman[52216]: 2026-02-23 07:51:49.641979433 +0000 UTC m=+3.986940074 container died b17e768e4d72646614de8ffd9b7281b400eb26338200f624e1da9accdab7fa9a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include 
::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1766032510, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, container_name=container-puppet-crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, config_id=tripleo_puppet_step1, vcs-type=git) Feb 23 02:51:49 localhost puppet-user[52413]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. (file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41) Feb 23 02:51:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b17e768e4d72646614de8ffd9b7281b400eb26338200f624e1da9accdab7fa9a-userdata-shm.mount: Deactivated successfully. 
Feb 23 02:51:49 localhost systemd[1]: var-lib-containers-storage-overlay-b41f8132954705ed575d67743569b519bd384ba1cd0a7dd6be2a8b88add30f0f-merged.mount: Deactivated successfully. Feb 23 02:51:49 localhost podman[52852]: 2026-02-23 07:51:49.751696564 +0000 UTC m=+0.097440622 container cleanup b17e768e4d72646614de8ffd9b7281b400eb26338200f624e1da9accdab7fa9a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, container_name=container-puppet-crond, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, batch=17.1_20260112.1, maintainer=OpenStack TripleO 
Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, config_id=tripleo_puppet_step1, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 02:51:49 localhost systemd[1]: libpod-conmon-b17e768e4d72646614de8ffd9b7281b400eb26338200f624e1da9accdab7fa9a.scope: Deactivated successfully. 
Feb 23 02:51:49 localhost python3[52051]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005626465 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt 
path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Feb 23 02:51:49 localhost systemd[1]: libpod-ff8c58555bb547b7c76062b5e01d16b5bbb198a3e8452f62134828965bf5e2b9.scope: Deactivated successfully. Feb 23 02:51:49 localhost systemd[1]: libpod-ff8c58555bb547b7c76062b5e01d16b5bbb198a3e8452f62134828965bf5e2b9.scope: Consumed 2.181s CPU time. 
Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1' Feb 23 02:51:49 localhost podman[52236]: 2026-02-23 07:51:49.788012016 +0000 UTC m=+4.118442836 container died ff8c58555bb547b7c76062b5e01d16b5bbb198a3e8452f62134828965bf5e2b9 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, release=1766032510, distribution-scope=public, config_id=tripleo_puppet_step1, io.buildah.version=1.41.5, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13) Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640' Feb 23 02:51:49 localhost puppet-user[52413]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5) Feb 23 02:51:49 localhost puppet-user[52413]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. 
(file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5) Feb 23 02:51:49 localhost puppet-user[52413]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5) Feb 23 02:51:49 localhost puppet-user[52413]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated Feb 23 02:51:49 localhost systemd[1]: tmp-crun.ORFbU6.mount: Deactivated successfully. Feb 23 02:51:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff8c58555bb547b7c76062b5e01d16b5bbb198a3e8452f62134828965bf5e2b9-userdata-shm.mount: Deactivated successfully. Feb 23 02:51:49 localhost systemd[1]: var-lib-containers-storage-overlay-60721aa364e403c543e527da873a8fedfe8e0f7c00041191b7b665e76b6a089a-merged.mount: Deactivated successfully. Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed Feb 23 02:51:49 localhost 
puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed Feb 23 02:51:49 localhost podman[52899]: 2026-02-23 07:51:49.860352492 +0000 UTC m=+0.062977029 container cleanup ff8c58555bb547b7c76062b5e01d16b5bbb198a3e8452f62134828965bf5e2b9 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=container-puppet-metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed Feb 23 02:51:49 localhost 
puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed Feb 23 02:51:49 localhost systemd[1]: libpod-conmon-ff8c58555bb547b7c76062b5e01d16b5bbb198a3e8452f62134828965bf5e2b9.scope: Deactivated successfully. Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750' Feb 23 02:51:49 localhost puppet-user[52413]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set. Feb 23 02:51:49 localhost puppet-user[52413]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. 
Use the same parameter in nova::glance Feb 23 02:51:49 localhost python3[52051]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005626465 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::qdr#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as '{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as '{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as '{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: 
/Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}0d4e701b7b2398bbf396579a0713d46d3c496c79edc52f2e260456f359c9a46c' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7' Feb 23 02:51:49 localhost puppet-user[52364]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed Feb 23 02:51:49 localhost puppet-user[52364]: Notice: Applied catalog in 0.49 seconds Feb 23 02:51:49 localhost puppet-user[52364]: Application: Feb 23 02:51:49 localhost puppet-user[52364]: Initial environment: production Feb 23 02:51:49 localhost puppet-user[52364]: Converged environment: production Feb 23 02:51:49 
localhost puppet-user[52364]: Run mode: user Feb 23 02:51:49 localhost puppet-user[52364]: Changes: Feb 23 02:51:49 localhost puppet-user[52364]: Total: 4 Feb 23 02:51:49 localhost puppet-user[52364]: Events: Feb 23 02:51:49 localhost puppet-user[52364]: Success: 4 Feb 23 02:51:49 localhost puppet-user[52364]: Total: 4 Feb 23 02:51:49 localhost puppet-user[52364]: Resources: Feb 23 02:51:49 localhost puppet-user[52364]: Changed: 4 Feb 23 02:51:49 localhost puppet-user[52364]: Out of sync: 4 Feb 23 02:51:49 localhost puppet-user[52364]: Skipped: 8 Feb 23 02:51:49 localhost puppet-user[52364]: Total: 13 Feb 23 02:51:49 localhost puppet-user[52364]: Time: Feb 23 02:51:49 localhost puppet-user[52364]: File: 0.00 Feb 23 02:51:49 localhost puppet-user[52364]: Exec: 0.06 Feb 23 02:51:49 localhost puppet-user[52364]: Config retrieval: 0.13 Feb 23 02:51:49 localhost puppet-user[52364]: Augeas: 0.42 Feb 23 02:51:49 localhost puppet-user[52364]: Transaction evaluation: 0.49 Feb 23 02:51:49 localhost puppet-user[52364]: Catalog application: 0.49 Feb 23 02:51:49 localhost puppet-user[52364]: Last run: 1771833109 Feb 23 02:51:49 localhost puppet-user[52364]: Total: 0.49 Feb 23 02:51:49 localhost puppet-user[52364]: Version: Feb 23 02:51:49 localhost puppet-user[52364]: Config: 1771833109 Feb 23 02:51:49 localhost puppet-user[52364]: Puppet: 7.10.0 Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as 
'{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62' Feb 23 02:51:49 localhost puppet-user[52381]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed Feb 23 02:51:49 localhost puppet-user[52381]: Notice: Applied catalog in 0.24 seconds Feb 23 02:51:49 localhost puppet-user[52381]: Application: Feb 23 02:51:49 localhost puppet-user[52381]: Initial environment: production Feb 23 02:51:49 localhost puppet-user[52381]: Converged environment: production Feb 23 02:51:49 localhost puppet-user[52381]: Run mode: user Feb 23 02:51:49 localhost puppet-user[52381]: Changes: Feb 23 02:51:49 localhost puppet-user[52381]: Total: 43 Feb 23 02:51:49 localhost puppet-user[52381]: Events: Feb 23 02:51:49 localhost puppet-user[52381]: Success: 43 Feb 23 02:51:49 localhost puppet-user[52381]: Total: 43 Feb 23 02:51:49 localhost puppet-user[52381]: Resources: Feb 23 02:51:49 localhost puppet-user[52381]: Skipped: 14 Feb 23 02:51:49 localhost puppet-user[52381]: Changed: 38 Feb 23 02:51:49 localhost puppet-user[52381]: Out of sync: 38 Feb 23 02:51:49 localhost puppet-user[52381]: Total: 82 Feb 23 02:51:49 localhost puppet-user[52381]: Time: Feb 23 02:51:49 localhost puppet-user[52381]: Concat fragment: 0.00 Feb 23 
02:51:49 localhost puppet-user[52381]: Concat file: 0.00 Feb 23 02:51:49 localhost puppet-user[52381]: File: 0.10 Feb 23 02:51:49 localhost puppet-user[52381]: Transaction evaluation: 0.23 Feb 23 02:51:49 localhost puppet-user[52381]: Catalog application: 0.24 Feb 23 02:51:49 localhost puppet-user[52381]: Config retrieval: 0.45 Feb 23 02:51:49 localhost puppet-user[52381]: Last run: 1771833109 Feb 23 02:51:49 localhost puppet-user[52381]: Total: 0.24 Feb 23 02:51:49 localhost puppet-user[52381]: Version: Feb 23 02:51:49 localhost puppet-user[52381]: Config: 1771833109 Feb 23 02:51:49 localhost puppet-user[52381]: Puppet: 7.10.0 Feb 23 02:51:50 localhost puppet-user[52413]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used. Feb 23 02:51:50 localhost podman[52999]: 2026-02-23 07:51:50.159125862 +0000 UTC m=+0.069018751 container create d41272a0059105a5470391f7e86066b75f5d4995d5bd6e7403007a2cce5c7d79 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:09Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, build-date=2026-01-12T22:10:09Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, release=1766032510, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=container-puppet-rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_id=tripleo_puppet_step1, version=17.1.13) Feb 23 02:51:50 localhost systemd[1]: Started libpod-conmon-d41272a0059105a5470391f7e86066b75f5d4995d5bd6e7403007a2cce5c7d79.scope. Feb 23 02:51:50 localhost systemd[1]: Started libcrun container. 
Feb 23 02:51:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd0911b56d0bc006c9d6e08afe1eddab8140e9e2da5f495d096c1e34b92a334/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 23 02:51:50 localhost podman[52999]: 2026-02-23 07:51:50.201655951 +0000 UTC m=+0.111548850 container init d41272a0059105a5470391f7e86066b75f5d4995d5bd6e7403007a2cce5c7d79 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:09Z, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, config_id=tripleo_puppet_step1, container_name=container-puppet-rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', 
'/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13) Feb 23 02:51:50 localhost podman[52999]: 2026-02-23 07:51:50.207951231 +0000 UTC m=+0.117844130 container start d41272a0059105a5470391f7e86066b75f5d4995d5bd6e7403007a2cce5c7d79 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.buildah.version=1.41.5, tcib_managed=true, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.created=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2026-01-12T22:10:09Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=container-puppet-rsyslog, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, name=rhosp-rhel9/openstack-rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, 
release=1766032510, managed_by=tripleo_ansible) Feb 23 02:51:50 localhost podman[52999]: 2026-02-23 07:51:50.208129187 +0000 UTC m=+0.118022106 container attach d41272a0059105a5470391f7e86066b75f5d4995d5bd6e7403007a2cce5c7d79 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, container_name=container-puppet-rsyslog, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:09Z, architecture=x86_64, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-rsyslog, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2026-01-12T22:10:09Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-rsyslog-container, maintainer=OpenStack TripleO Team) Feb 23 02:51:50 localhost podman[52999]: 2026-02-23 07:51:50.120563798 +0000 UTC m=+0.030456717 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Feb 23 02:51:50 localhost systemd[1]: libpod-0ba329af66f3482f9e751bed319efb80b9fa7e2b8d50be0d27a8497b199e4da0.scope: Deactivated successfully. Feb 23 02:51:50 localhost systemd[1]: libpod-0ba329af66f3482f9e751bed319efb80b9fa7e2b8d50be0d27a8497b199e4da0.scope: Consumed 2.630s CPU time. 
Feb 23 02:51:50 localhost podman[52237]: 2026-02-23 07:51:50.263000228 +0000 UTC m=+4.590870017 container died 0ba329af66f3482f9e751bed319efb80b9fa7e2b8d50be0d27a8497b199e4da0 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, release=1766032510, config_id=tripleo_puppet_step1, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, container_name=container-puppet-iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64) Feb 23 02:51:50 localhost podman[53044]: 2026-02-23 07:51:50.279007766 +0000 UTC m=+0.096532424 container create 849ba28d6618af7e7ffcca78f0027564fe4ce7fbd97381159704fb2e7d945f62 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.buildah.version=1.41.5, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, container_name=container-puppet-ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 02:51:50 
localhost podman[53044]: 2026-02-23 07:51:50.217217716 +0000 UTC m=+0.034742384 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 23 02:51:50 localhost systemd[1]: Started libpod-conmon-849ba28d6618af7e7ffcca78f0027564fe4ce7fbd97381159704fb2e7d945f62.scope. Feb 23 02:51:50 localhost systemd[1]: Started libcrun container. Feb 23 02:51:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777f9dc182dfb66820f6f3bb603d28f3bfe9e16068876bcac539a1ae5bbfe63b/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 23 02:51:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777f9dc182dfb66820f6f3bb603d28f3bfe9e16068876bcac539a1ae5bbfe63b/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff) Feb 23 02:51:50 localhost podman[53044]: 2026-02-23 07:51:50.338254256 +0000 UTC m=+0.155778924 container init 849ba28d6618af7e7ffcca78f0027564fe4ce7fbd97381159704fb2e7d945f62 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, container_name=container-puppet-ovn_controller, release=1766032510, tcib_managed=true, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 23 02:51:50 localhost podman[53044]: 2026-02-23 
07:51:50.347705255 +0000 UTC m=+0.165229913 container start 849ba28d6618af7e7ffcca78f0027564fe4ce7fbd97381159704fb2e7d945f62 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, release=1766032510, version=17.1.13, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=container-puppet-ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 23 02:51:50 localhost podman[53044]: 2026-02-23 07:51:50.347960143 +0000 UTC m=+0.165484831 container attach 849ba28d6618af7e7ffcca78f0027564fe4ce7fbd97381159704fb2e7d945f62 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_puppet_step1, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, 
version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2026-01-12T22:36:40Z, container_name=container-puppet-ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red 
Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 02:51:50 localhost podman[53088]: 2026-02-23 07:51:50.354653116 +0000 UTC m=+0.081465245 container cleanup 0ba329af66f3482f9e751bed319efb80b9fa7e2b8d50be0d27a8497b199e4da0 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, vendor=Red Hat, Inc., version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, io.buildah.version=1.41.5, release=1766032510, vcs-type=git, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=container-puppet-iscsid, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=) Feb 23 02:51:50 localhost systemd[1]: libpod-conmon-0ba329af66f3482f9e751bed319efb80b9fa7e2b8d50be0d27a8497b199e4da0.scope: Deactivated successfully. 
Feb 23 02:51:50 localhost python3[52051]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005626465 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::iscsid#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} 
--log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Feb 23 02:51:50 localhost systemd[1]: libpod-a712678446604c3604e6eb135a6be975e1d145b69a91b52cf5e7d8b17d0ccd19.scope: Deactivated successfully. Feb 23 02:51:50 localhost systemd[1]: libpod-a712678446604c3604e6eb135a6be975e1d145b69a91b52cf5e7d8b17d0ccd19.scope: Consumed 2.747s CPU time. 
Feb 23 02:51:50 localhost podman[53158]: 2026-02-23 07:51:50.450144656 +0000 UTC m=+0.044658298 container died a712678446604c3604e6eb135a6be975e1d145b69a91b52cf5e7d8b17d0ccd19 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=container-puppet-collectd, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, 
distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_id=tripleo_puppet_step1, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 23 02:51:50 localhost podman[53158]: 2026-02-23 07:51:50.470021247 +0000 UTC m=+0.064534859 container cleanup a712678446604c3604e6eb135a6be975e1d145b69a91b52cf5e7d8b17d0ccd19 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 
'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_puppet_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=container-puppet-collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.41.5, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z) Feb 23 02:51:50 localhost systemd[1]: libpod-conmon-a712678446604c3604e6eb135a6be975e1d145b69a91b52cf5e7d8b17d0ccd19.scope: Deactivated successfully. 
Feb 23 02:51:50 localhost python3[52051]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005626465 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Feb 23 02:51:50 localhost puppet-user[52413]: Notice: Compiled catalog for np0005626465.localdomain in environment production in 1.30 seconds Feb 23 02:51:50 localhost puppet-user[52501]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 23 02:51:50 localhost puppet-user[52501]: (file: /etc/puppet/hiera.yaml) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Undefined variable '::deploy_config_name'; Feb 23 02:51:50 localhost puppet-user[52501]: (file & line not available) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 23 02:51:50 localhost puppet-user[52501]: (file & line not available) Feb 23 02:51:50 localhost systemd[1]: var-lib-containers-storage-overlay-cb9bf8a2ab62539fad44af6776c5b72ca8a5c1fb07925c54016685be21bc3780-merged.mount: Deactivated successfully. Feb 23 02:51:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a712678446604c3604e6eb135a6be975e1d145b69a91b52cf5e7d8b17d0ccd19-userdata-shm.mount: Deactivated successfully. Feb 23 02:51:50 localhost systemd[1]: var-lib-containers-storage-overlay-1f0e2545ff54887bcdef2b1d9264862e245d3a368b2cc9a5a1b6a1cc4de0a6f7-merged.mount: Deactivated successfully. Feb 23 02:51:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ba329af66f3482f9e751bed319efb80b9fa7e2b8d50be0d27a8497b199e4da0-userdata-shm.mount: Deactivated successfully. Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::cache_backend'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::memcache_servers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. 
(file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39) Feb 23 02:51:50 localhost puppet-user[52413]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}489b6455d50f9ee989125e261ff880fe0ce273a5c46439278b09842d2e1f5116' Feb 23 02:51:50 localhost puppet-user[52413]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created Feb 23 02:51:50 localhost puppet-user[52413]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe' Feb 23 02:51:50 localhost puppet-user[52413]: Warning: Empty environment setting 'TLS_PASSWORD' Feb 23 02:51:50 localhost puppet-user[52413]: (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. 
(file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36) Feb 23 02:51:50 localhost puppet-user[52501]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. 
(file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26) Feb 23 02:51:50 localhost puppet-user[52413]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully Feb 23 02:51:50 localhost puppet-user[52413]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}8656b3c96dc5b23eeff252eb63947bbb521645e181af749f7bc85fd2f92d7747' Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: Compiled catalog for np0005626465.localdomain in environment production in 0.37 seconds Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created Feb 23 02:51:51 localhost 
puppet-user[52413]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created Feb 23 02:51:51 
localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: 
/Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created Feb 23 02:51:51 localhost 
puppet-user[52413]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: 
/Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Notice: Applied catalog in 0.41 seconds Feb 23 02:51:51 localhost puppet-user[52501]: Application: Feb 23 02:51:51 localhost puppet-user[52501]: Initial environment: production Feb 23 02:51:51 localhost puppet-user[52501]: Converged environment: production Feb 23 02:51:51 localhost puppet-user[52501]: Run mode: user Feb 23 02:51:51 localhost puppet-user[52501]: Changes: Feb 23 02:51:51 localhost puppet-user[52501]: Total: 31 Feb 23 02:51:51 localhost puppet-user[52501]: Events: Feb 23 02:51:51 localhost puppet-user[52501]: Success: 31 Feb 23 02:51:51 localhost puppet-user[52501]: Total: 31 Feb 23 02:51:51 localhost puppet-user[52501]: Resources: Feb 23 02:51:51 localhost puppet-user[52501]: Skipped: 22 Feb 23 02:51:51 localhost puppet-user[52501]: Changed: 31 Feb 23 02:51:51 localhost puppet-user[52501]: Out of sync: 31 Feb 23 02:51:51 localhost puppet-user[52501]: Total: 151 Feb 23 02:51:51 localhost puppet-user[52501]: Time: Feb 23 02:51:51 localhost puppet-user[52501]: Package: 0.02 Feb 23 02:51:51 localhost puppet-user[52501]: Ceilometer config: 0.31 Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created Feb 23 02:51:51 localhost puppet-user[52501]: Transaction evaluation: 0.41 Feb 23 02:51:51 localhost puppet-user[52501]: Catalog application: 0.41 Feb 23 02:51:51 localhost puppet-user[52501]: Config retrieval: 0.44 Feb 23 02:51:51 localhost puppet-user[52501]: Last run: 1771833111 Feb 23 02:51:51 localhost puppet-user[52501]: Resources: 0.00 Feb 23 02:51:51 localhost puppet-user[52501]: Total: 0.41 Feb 23 02:51:51 localhost puppet-user[52501]: Version: Feb 23 02:51:51 localhost puppet-user[52501]: Config: 1771833110 Feb 23 02:51:51 localhost puppet-user[52501]: Puppet: 7.10.0 Feb 23 02:51:51 localhost puppet-user[52413]: Notice: 
/Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created Feb 23 
02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created Feb 23 02:51:51 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created Feb 23 02:51:52 localhost systemd[1]: libpod-40156f33c18a3a7b560639fa31b493debeae6a2393dae3dce7053fe620b00ffe.scope: Deactivated successfully. Feb 23 02:51:52 localhost systemd[1]: libpod-40156f33c18a3a7b560639fa31b493debeae6a2393dae3dce7053fe620b00ffe.scope: Consumed 3.013s CPU time. Feb 23 02:51:52 localhost podman[52451]: 2026-02-23 07:51:52.0175974 +0000 UTC m=+4.128412302 container died 40156f33c18a3a7b560639fa31b493debeae6a2393dae3dce7053fe620b00ffe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, batch=17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-central-container, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-central, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:24Z, architecture=x86_64, build-date=2026-01-12T23:07:24Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, config_id=tripleo_puppet_step1, 
konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=container-puppet-ceilometer, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.13, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, release=1766032510, io.openshift.expose-services=) Feb 23 02:51:52 localhost puppet-user[52413]: Notice: 
/Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created Feb 23 02:51:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-40156f33c18a3a7b560639fa31b493debeae6a2393dae3dce7053fe620b00ffe-userdata-shm.mount: Deactivated successfully. Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created Feb 23 02:51:52 localhost systemd[1]: var-lib-containers-storage-overlay-450eedbe86fcc9f6107761683f84ebd064aaee9b19b84656e1d6d114567f2b2e-merged.mount: Deactivated successfully. Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created Feb 23 02:51:52 localhost podman[53366]: 2026-02-23 07:51:52.137721532 +0000 UTC m=+0.107480441 container cleanup 40156f33c18a3a7b560639fa31b493debeae6a2393dae3dce7053fe620b00ffe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, name=rhosp-rhel9/openstack-ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, batch=17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2026-01-12T23:07:24Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:24Z, version=17.1.13, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-central-container, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, container_name=container-puppet-ceilometer, description=Red Hat OpenStack Platform 17.1 ceilometer-central, 
io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 02:51:52 localhost systemd[1]: libpod-conmon-40156f33c18a3a7b560639fa31b493debeae6a2393dae3dce7053fe620b00ffe.scope: Deactivated successfully. Feb 23 02:51:52 localhost python3[52051]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005626465 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::ceilometer::agent::polling#012include tripleo::profile::base::ceilometer::agent::polling#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Feb 23 02:51:52 localhost puppet-user[53090]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5
Feb 23 02:51:52 localhost puppet-user[53090]: (file: /etc/puppet/hiera.yaml)
Feb 23 02:51:52 localhost puppet-user[53090]: Warning: Undefined variable '::deploy_config_name';
Feb 23 02:51:52 localhost puppet-user[53090]: (file & line not available)
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created
Feb 23 02:51:52 localhost puppet-user[53090]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 23 02:51:52 localhost puppet-user[53090]: (file & line not available)
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created
Feb 23 02:51:52 localhost puppet-user[53146]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 23 02:51:52 localhost puppet-user[53146]: (file: /etc/puppet/hiera.yaml)
Feb 23 02:51:52 localhost puppet-user[53146]: Warning: Undefined variable '::deploy_config_name';
Feb 23 02:51:52 localhost puppet-user[53146]: (file & line not available)
Feb 23 02:51:52 localhost puppet-user[53146]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 23 02:51:52 localhost puppet-user[53146]: (file & line not available)
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created
Feb 23 02:51:52 localhost puppet-user[53090]: Notice: Compiled catalog for np0005626465.localdomain in environment production in 0.22 seconds
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created
Feb 23 02:51:52 localhost puppet-user[53090]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2'
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created
Feb 23 02:51:52 localhost puppet-user[53090]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b'
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created
Feb 23 02:51:52 localhost puppet-user[53090]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}2fe0c95cd2f6dab83675b4452ccebb880be50305f7ad649e1cf42fe65fffbfbc'
Feb 23 02:51:52 localhost puppet-user[53146]: Notice: Compiled catalog for np0005626465.localdomain in environment production in 0.28 seconds
Feb 23 02:51:52 localhost puppet-user[53090]: Notice: Applied catalog in 0.10 seconds
Feb 23 02:51:52 localhost puppet-user[53090]: Application:
Feb 23 02:51:52 localhost puppet-user[53090]: Initial environment: production
Feb 23 02:51:52 localhost puppet-user[53090]: Converged environment: production
Feb 23 02:51:52 localhost puppet-user[53090]: Run mode: user
Feb 23 02:51:52 localhost puppet-user[53090]: Changes:
Feb 23 02:51:52 localhost puppet-user[53090]: Total: 3
Feb 23 02:51:52 localhost puppet-user[53090]: Events:
Feb 23 02:51:52 localhost puppet-user[53090]: Success: 3
Feb 23 02:51:52 localhost puppet-user[53090]: Total: 3
Feb 23 02:51:52 localhost puppet-user[53090]: Resources:
Feb 23 02:51:52 localhost puppet-user[53090]: Skipped: 11
Feb 23 02:51:52 localhost puppet-user[53090]: Changed: 3
Feb 23 02:51:52 localhost puppet-user[53090]: Out of sync: 3
Feb 23 02:51:52 localhost puppet-user[53090]: Total: 25
Feb 23 02:51:52 localhost puppet-user[53090]: Time:
Feb 23 02:51:52 localhost puppet-user[53090]: Concat file: 0.00
Feb 23 02:51:52 localhost puppet-user[53090]: Concat fragment: 0.00
Feb 23 02:51:52 localhost puppet-user[53090]: File: 0.01
Feb 23 02:51:52 localhost puppet-user[53090]: Transaction evaluation: 0.10
Feb 23 02:51:52 localhost puppet-user[53090]: Catalog application: 0.10
Feb 23 02:51:52 localhost puppet-user[53090]: Config retrieval: 0.27
Feb 23 02:51:52 localhost puppet-user[53090]: Last run: 1771833112
Feb 23 02:51:52 localhost puppet-user[53090]: Total: 0.10
Feb 23 02:51:52 localhost puppet-user[53090]: Version:
Feb 23 02:51:52 localhost puppet-user[53090]: Config: 1771833112
Feb 23 02:51:52 localhost puppet-user[53090]: Puppet: 7.10.0
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created
Feb 23 02:51:52 localhost ovs-vsctl[53520]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642
Feb 23 02:51:52 localhost puppet-user[53146]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created
Feb 23 02:51:52 localhost ovs-vsctl[53523]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}9109162380f9c461e3d8ec780edb8a48cdd59dabd84e70a5fe7d1088fe416c1b'
Feb 23 02:51:52 localhost puppet-user[53146]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created
Feb 23 02:51:52 localhost ovs-vsctl[53526]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=172.19.0.107
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created
Feb 23 02:51:52 localhost puppet-user[53146]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created
Feb 23 02:51:52 localhost ovs-vsctl[53532]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=np0005626465.localdomain
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created
Feb 23 02:51:52 localhost puppet-user[53146]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005626465.novalocal' to 'np0005626465.localdomain'
Feb 23 02:51:52 localhost ovs-vsctl[53545]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int
Feb 23 02:51:52 localhost puppet-user[53146]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created
Feb 23 02:51:52 localhost ovs-vsctl[53558]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000
Feb 23 02:51:52 localhost puppet-user[53146]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created
Feb 23 02:51:52 localhost ovs-vsctl[53563]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created
Feb 23 02:51:52 localhost puppet-user[53146]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created
Feb 23 02:51:52 localhost ovs-vsctl[53565]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true
Feb 23 02:51:52 localhost puppet-user[53146]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created
Feb 23 02:51:52 localhost ovs-vsctl[53572]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-ofctrl-wait-before-clear=8000
Feb 23 02:51:52 localhost systemd[1]: libpod-d41272a0059105a5470391f7e86066b75f5d4995d5bd6e7403007a2cce5c7d79.scope: Deactivated successfully.
Feb 23 02:51:52 localhost systemd[1]: libpod-d41272a0059105a5470391f7e86066b75f5d4995d5bd6e7403007a2cce5c7d79.scope: Consumed 2.364s CPU time.
Feb 23 02:51:52 localhost puppet-user[53146]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created
Feb 23 02:51:52 localhost podman[52999]: 2026-02-23 07:51:52.808163764 +0000 UTC m=+2.718056683 container died d41272a0059105a5470391f7e86066b75f5d4995d5bd6e7403007a2cce5c7d79 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, managed_by=tripleo_ansible, release=1766032510, vcs-type=git, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:09Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2026-01-12T22:10:09Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=container-puppet-rsyslog, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created
Feb 23 02:51:52 localhost ovs-vsctl[53582]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-tos=0
Feb 23 02:51:52 localhost puppet-user[53146]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created
Feb 23 02:51:52 localhost ovs-vsctl[53590]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:e3:a4:40
Feb 23 02:51:52 localhost puppet-user[53146]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created
Feb 23 02:51:52 localhost ovs-vsctl[53593]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex
Feb 23 02:51:52 localhost puppet-user[53146]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created
Feb 23 02:51:52 localhost ovs-vsctl[53595]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created
Feb 23 02:51:52 localhost puppet-user[53146]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created
Feb 23 02:51:52 localhost ovs-vsctl[53597]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:garp-max-timeout-sec=0
Feb 23 02:51:52 localhost puppet-user[53146]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created
Feb 23 02:51:52 localhost puppet-user[53146]: Notice: Applied catalog in 0.45 seconds
Feb 23 02:51:52 localhost puppet-user[53146]: Application:
Feb 23 02:51:52 localhost puppet-user[53146]: Initial environment: production
Feb 23 02:51:52 localhost puppet-user[53146]: Converged environment: production
Feb 23 02:51:52 localhost puppet-user[53146]: Run mode: user
Feb 23 02:51:52 localhost puppet-user[53146]: Changes:
Feb 23 02:51:52 localhost puppet-user[53146]: Total: 14
Feb 23 02:51:52 localhost puppet-user[53146]: Events:
Feb 23 02:51:52 localhost puppet-user[53146]: Success: 14
Feb 23 02:51:52 localhost puppet-user[53146]: Total: 14
Feb 23 02:51:52 localhost puppet-user[53146]: Resources:
Feb 23 02:51:52 localhost puppet-user[53146]: Skipped: 12
Feb 23 02:51:52 localhost puppet-user[53146]: Changed: 14
Feb 23 02:51:52 localhost puppet-user[53146]: Out of sync: 14
Feb 23 02:51:52 localhost puppet-user[53146]: Total: 29
Feb 23 02:51:52 localhost puppet-user[53146]: Time:
Feb 23 02:51:52 localhost puppet-user[53146]: Exec: 0.01
Feb 23 02:51:52 localhost puppet-user[53146]: Config retrieval: 0.31
Feb 23 02:51:52 localhost puppet-user[53146]: Vs config: 0.34
Feb 23 02:51:52 localhost puppet-user[53146]: Transaction evaluation: 0.43
Feb 23 02:51:52 localhost puppet-user[53146]: Catalog application: 0.45
Feb 23 02:51:52 localhost puppet-user[53146]: Last run: 1771833112
Feb 23 02:51:52 localhost puppet-user[53146]: Total: 0.45
Feb 23 02:51:52 localhost puppet-user[53146]: Version:
Feb 23 02:51:52 localhost puppet-user[53146]: Config: 1771833112
Feb 23 02:51:52 localhost puppet-user[53146]: Puppet: 7.10.0
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created
Feb 23 02:51:52 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created
Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created
Feb 23 02:51:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d41272a0059105a5470391f7e86066b75f5d4995d5bd6e7403007a2cce5c7d79-userdata-shm.mount: Deactivated successfully.
Feb 23 02:51:53 localhost systemd[1]: var-lib-containers-storage-overlay-7cd0911b56d0bc006c9d6e08afe1eddab8140e9e2da5f495d096c1e34b92a334-merged.mount: Deactivated successfully.
Feb 23 02:51:53 localhost systemd[1]: libpod-849ba28d6618af7e7ffcca78f0027564fe4ce7fbd97381159704fb2e7d945f62.scope: Deactivated successfully.
Feb 23 02:51:53 localhost systemd[1]: libpod-849ba28d6618af7e7ffcca78f0027564fe4ce7fbd97381159704fb2e7d945f62.scope: Consumed 2.876s CPU time.
Feb 23 02:51:53 localhost podman[53044]: 2026-02-23 07:51:53.497347632 +0000 UTC m=+3.314872400 container died 849ba28d6618af7e7ffcca78f0027564fe4ce7fbd97381159704fb2e7d945f62 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, url=https://www.redhat.com, batch=17.1_20260112.1, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., container_name=container-puppet-ovn_controller, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5)
Feb 23 02:51:53 localhost podman[53581]: 2026-02-23 07:51:53.630823307 +0000 UTC m=+0.812983366 container cleanup d41272a0059105a5470391f7e86066b75f5d4995d5bd6e7403007a2cce5c7d79 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2026-01-12T22:10:09Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, architecture=x86_64, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:09Z, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_puppet_step1, container_name=container-puppet-rsyslog, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog)
Feb 23 02:51:53 localhost python3[52051]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005626465 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Feb 23 02:51:53 localhost systemd[1]: libpod-conmon-d41272a0059105a5470391f7e86066b75f5d4995d5bd6e7403007a2cce5c7d79.scope: Deactivated successfully.
Feb 23 02:51:53 localhost podman[53215]: 2026-02-23 07:51:50.622107512 +0000 UTC m=+0.034954770 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Feb 23 02:51:53 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully Feb 23 02:51:53 localhost podman[53674]: 2026-02-23 07:51:53.774534217 +0000 UTC m=+0.270266836 container cleanup 849ba28d6618af7e7ffcca78f0027564fe4ce7fbd97381159704fb2e7d945f62 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, managed_by=tripleo_ansible, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, config_id=tripleo_puppet_step1, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, io.openshift.expose-services=, container_name=container-puppet-ovn_controller, build-date=2026-01-12T22:36:40Z) Feb 23 02:51:53 localhost systemd[1]: libpod-conmon-849ba28d6618af7e7ffcca78f0027564fe4ce7fbd97381159704fb2e7d945f62.scope: Deactivated successfully. 
Feb 23 02:51:53 localhost python3[52051]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005626465 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::agents::ovn#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 23 02:51:53 localhost podman[53738]: 2026-02-23 07:51:53.854953229 +0000 UTC m=+0.068766763 container create 692978a0abdadb6859e3ce5fbec434a4c9958589cb3d844b5f9e67081d3fd34f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, config_id=tripleo_puppet_step1, name=rhosp-rhel9/openstack-neutron-server, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, 
tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, container_name=container-puppet-neutron, release=1766032510, batch=17.1_20260112.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-server-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:57:35Z, io.buildah.version=1.41.5, url=https://www.redhat.com, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:57:35Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=) Feb 23 02:51:53 localhost systemd[1]: Started libpod-conmon-692978a0abdadb6859e3ce5fbec434a4c9958589cb3d844b5f9e67081d3fd34f.scope. Feb 23 02:51:53 localhost systemd[1]: Started libcrun container. Feb 23 02:51:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc3bf514cb0d82613183a63c18e99543a6861d40c81ca65529bad1fd0fed9a8/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 23 02:51:53 localhost podman[53738]: 2026-02-23 07:51:53.910365907 +0000 UTC m=+0.124179451 container init 692978a0abdadb6859e3ce5fbec434a4c9958589cb3d844b5f9e67081d3fd34f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, release=1766032510, com.redhat.component=openstack-neutron-server-container, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, managed_by=tripleo_ansible, container_name=container-puppet-neutron, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:57:35Z, maintainer=OpenStack TripleO Team, 
config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.13, url=https://www.redhat.com, batch=17.1_20260112.1, build-date=2026-01-12T22:57:35Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, 
name=rhosp-rhel9/openstack-neutron-server, tcib_managed=true) Feb 23 02:51:53 localhost podman[53738]: 2026-02-23 07:51:53.822819889 +0000 UTC m=+0.036633423 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Feb 23 02:51:53 localhost podman[53738]: 2026-02-23 07:51:53.928776861 +0000 UTC m=+0.142590405 container start 692978a0abdadb6859e3ce5fbec434a4c9958589cb3d844b5f9e67081d3fd34f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-server, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, version=17.1.13, distribution-scope=public, config_id=tripleo_puppet_step1, container_name=container-puppet-neutron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-server-container, tcib_managed=true, build-date=2026-01-12T22:57:35Z, org.opencontainers.image.created=2026-01-12T22:57:35Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 02:51:53 localhost podman[53738]: 2026-02-23 07:51:53.929460483 +0000 UTC m=+0.143274027 container attach 692978a0abdadb6859e3ce5fbec434a4c9958589cb3d844b5f9e67081d3fd34f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, vendor=Red Hat, Inc., build-date=2026-01-12T22:57:35Z, summary=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, io.openshift.expose-services=, container_name=container-puppet-neutron, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat 
OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-server, release=1766032510, 
com.redhat.component=openstack-neutron-server-container, batch=17.1_20260112.1, config_id=tripleo_puppet_step1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:57:35Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server) Feb 23 02:51:54 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully Feb 23 02:51:54 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created Feb 23 02:51:54 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created Feb 23 02:51:54 localhost systemd[1]: tmp-crun.aJCLWN.mount: Deactivated successfully. Feb 23 02:51:54 localhost systemd[1]: var-lib-containers-storage-overlay-777f9dc182dfb66820f6f3bb603d28f3bfe9e16068876bcac539a1ae5bbfe63b-merged.mount: Deactivated successfully. Feb 23 02:51:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-849ba28d6618af7e7ffcca78f0027564fe4ce7fbd97381159704fb2e7d945f62-userdata-shm.mount: Deactivated successfully. 
Feb 23 02:51:54 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created Feb 23 02:51:54 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created Feb 23 02:51:54 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created Feb 23 02:51:54 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created Feb 23 02:51:54 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Feb 23 02:51:54 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Feb 23 02:51:54 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created Feb 23 02:51:55 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created Feb 23 02:51:55 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created Feb 23 02:51:55 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created Feb 23 02:51:55 localhost puppet-user[52413]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created Feb 23 02:51:55 localhost puppet-user[52413]: Notice: 
/Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created Feb 23 02:51:55 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created Feb 23 02:51:55 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created Feb 23 02:51:55 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created Feb 23 02:51:55 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created Feb 23 02:51:55 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created Feb 23 02:51:55 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created Feb 23 02:51:55 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created Feb 23 02:51:55 localhost puppet-user[52413]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created Feb 23 02:51:55 localhost puppet-user[52413]: Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}3a12438802493a75725c4f7704f2af6db1ef72af396369e5de28f6f4d6a7ed98' Feb 23 02:51:55 localhost 
puppet-user[52413]: Notice: Applied catalog in 4.42 seconds Feb 23 02:51:55 localhost puppet-user[52413]: Application: Feb 23 02:51:55 localhost puppet-user[52413]: Initial environment: production Feb 23 02:51:55 localhost puppet-user[52413]: Converged environment: production Feb 23 02:51:55 localhost puppet-user[52413]: Run mode: user Feb 23 02:51:55 localhost puppet-user[52413]: Changes: Feb 23 02:51:55 localhost puppet-user[52413]: Total: 183 Feb 23 02:51:55 localhost puppet-user[52413]: Events: Feb 23 02:51:55 localhost puppet-user[52413]: Success: 183 Feb 23 02:51:55 localhost puppet-user[52413]: Total: 183 Feb 23 02:51:55 localhost puppet-user[52413]: Resources: Feb 23 02:51:55 localhost puppet-user[52413]: Changed: 183 Feb 23 02:51:55 localhost puppet-user[52413]: Out of sync: 183 Feb 23 02:51:55 localhost puppet-user[52413]: Skipped: 57 Feb 23 02:51:55 localhost puppet-user[52413]: Total: 487 Feb 23 02:51:55 localhost puppet-user[52413]: Time: Feb 23 02:51:55 localhost puppet-user[52413]: Concat file: 0.00 Feb 23 02:51:55 localhost puppet-user[52413]: Concat fragment: 0.00 Feb 23 02:51:55 localhost puppet-user[52413]: Anchor: 0.00 Feb 23 02:51:55 localhost puppet-user[52413]: File line: 0.00 Feb 23 02:51:55 localhost puppet-user[52413]: Virtlogd config: 0.00 Feb 23 02:51:55 localhost puppet-user[52413]: Virtstoraged config: 0.01 Feb 23 02:51:55 localhost puppet-user[52413]: Exec: 0.01 Feb 23 02:51:55 localhost puppet-user[52413]: Virtsecretd config: 0.02 Feb 23 02:51:55 localhost puppet-user[52413]: Virtqemud config: 0.02 Feb 23 02:51:55 localhost puppet-user[52413]: Virtnodedevd config: 0.02 Feb 23 02:51:55 localhost puppet-user[52413]: Package: 0.03 Feb 23 02:51:55 localhost puppet-user[52413]: File: 0.03 Feb 23 02:51:55 localhost puppet-user[52413]: Virtproxyd config: 0.04 Feb 23 02:51:55 localhost puppet-user[52413]: Augeas: 1.05 Feb 23 02:51:55 localhost puppet-user[52413]: Config retrieval: 1.53 Feb 23 02:51:55 localhost puppet-user[52413]: Last run: 
1771833115 Feb 23 02:51:55 localhost puppet-user[52413]: Nova config: 2.99 Feb 23 02:51:55 localhost puppet-user[52413]: Transaction evaluation: 4.40 Feb 23 02:51:55 localhost puppet-user[52413]: Catalog application: 4.42 Feb 23 02:51:55 localhost puppet-user[52413]: Resources: 0.00 Feb 23 02:51:55 localhost puppet-user[52413]: Total: 4.42 Feb 23 02:51:55 localhost puppet-user[52413]: Version: Feb 23 02:51:55 localhost puppet-user[52413]: Config: 1771833109 Feb 23 02:51:55 localhost puppet-user[52413]: Puppet: 7.10.0 Feb 23 02:51:55 localhost puppet-user[53805]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass Feb 23 02:51:55 localhost puppet-user[53805]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 23 02:51:55 localhost puppet-user[53805]: (file: /etc/puppet/hiera.yaml) Feb 23 02:51:55 localhost puppet-user[53805]: Warning: Undefined variable '::deploy_config_name'; Feb 23 02:51:55 localhost puppet-user[53805]: (file & line not available) Feb 23 02:51:55 localhost puppet-user[53805]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 23 02:51:55 localhost puppet-user[53805]: (file & line not available) Feb 23 02:51:55 localhost puppet-user[53805]: Warning: Unknown variable: 'dhcp_agents_per_net'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37) Feb 23 02:51:56 localhost systemd[1]: libpod-848f5c92ac3943d7d60442405d4f038f45d61ae3bbf169b7569d9ead5597d116.scope: Deactivated successfully. Feb 23 02:51:56 localhost systemd[1]: libpod-848f5c92ac3943d7d60442405d4f038f45d61ae3bbf169b7569d9ead5597d116.scope: Consumed 8.574s CPU time. 
Feb 23 02:51:56 localhost podman[53942]: 2026-02-23 07:51:56.345257464 +0000 UTC m=+0.043647706 container died 848f5c92ac3943d7d60442405d4f038f45d61ae3bbf169b7569d9ead5597d116 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, release=1766032510, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:31:49Z, config_id=tripleo_puppet_step1, container_name=container-puppet-nova_libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, version=17.1.13) Feb 23 02:51:56 localhost systemd[1]: tmp-crun.dBGQz2.mount: Deactivated successfully. Feb 23 02:51:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-848f5c92ac3943d7d60442405d4f038f45d61ae3bbf169b7569d9ead5597d116-userdata-shm.mount: Deactivated successfully. 
Feb 23 02:51:56 localhost systemd[1]: var-lib-containers-storage-overlay-110642599c4a6478b0a235c15bbf13038f23c44c5a0846f1feed95010fbbddf0-merged.mount: Deactivated successfully. Feb 23 02:51:56 localhost podman[53942]: 2026-02-23 07:51:56.440471496 +0000 UTC m=+0.138861758 container cleanup 848f5c92ac3943d7d60442405d4f038f45d61ae3bbf169b7569d9ead5597d116 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, batch=17.1_20260112.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-type=git, distribution-scope=public, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=container-puppet-nova_libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z) Feb 23 02:51:56 localhost systemd[1]: libpod-conmon-848f5c92ac3943d7d60442405d4f038f45d61ae3bbf169b7569d9ead5597d116.scope: Deactivated successfully. 
Feb 23 02:51:56 localhost python3[52051]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005626465 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages#012# TODO(emilien): figure how to deal with libvirt profile.#012# We'll probably treat it like we do with Neutron plugins.#012# Until then, just include it in the default nova-compute role.#012include tripleo::profile::base::nova::compute::libvirt#012#012include tripleo::profile::base::nova::libvirt#012#012include tripleo::profile::base::nova::compute::libvirt_guests#012#012include tripleo::profile::base::sshd#012include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt 
profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw 
--volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 02:51:56 localhost puppet-user[53805]: Notice: Compiled catalog for np0005626465.localdomain in environment production in 0.67 seconds Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: 
/Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: 
/Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Feb 23 02:51:56 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Feb 23 02:51:57 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created Feb 23 02:51:57 localhost puppet-user[53805]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created Feb 23 02:51:57 localhost puppet-user[53805]: Notice: Applied catalog in 0.45 seconds Feb 23 02:51:57 localhost puppet-user[53805]: Application: Feb 23 02:51:57 localhost puppet-user[53805]: Initial environment: production Feb 23 02:51:57 localhost puppet-user[53805]: Converged environment: 
production Feb 23 02:51:57 localhost puppet-user[53805]: Run mode: user Feb 23 02:51:57 localhost puppet-user[53805]: Changes: Feb 23 02:51:57 localhost puppet-user[53805]: Total: 33 Feb 23 02:51:57 localhost puppet-user[53805]: Events: Feb 23 02:51:57 localhost puppet-user[53805]: Success: 33 Feb 23 02:51:57 localhost puppet-user[53805]: Total: 33 Feb 23 02:51:57 localhost puppet-user[53805]: Resources: Feb 23 02:51:57 localhost puppet-user[53805]: Skipped: 21 Feb 23 02:51:57 localhost puppet-user[53805]: Changed: 33 Feb 23 02:51:57 localhost puppet-user[53805]: Out of sync: 33 Feb 23 02:51:57 localhost puppet-user[53805]: Total: 155 Feb 23 02:51:57 localhost puppet-user[53805]: Time: Feb 23 02:51:57 localhost puppet-user[53805]: Resources: 0.00 Feb 23 02:51:57 localhost puppet-user[53805]: Ovn metadata agent config: 0.01 Feb 23 02:51:57 localhost puppet-user[53805]: Neutron config: 0.38 Feb 23 02:51:57 localhost puppet-user[53805]: Transaction evaluation: 0.44 Feb 23 02:51:57 localhost puppet-user[53805]: Catalog application: 0.45 Feb 23 02:51:57 localhost puppet-user[53805]: Config retrieval: 0.73 Feb 23 02:51:57 localhost puppet-user[53805]: Last run: 1771833117 Feb 23 02:51:57 localhost puppet-user[53805]: Total: 0.45 Feb 23 02:51:57 localhost puppet-user[53805]: Version: Feb 23 02:51:57 localhost puppet-user[53805]: Config: 1771833115 Feb 23 02:51:57 localhost puppet-user[53805]: Puppet: 7.10.0 Feb 23 02:51:57 localhost systemd[1]: libpod-692978a0abdadb6859e3ce5fbec434a4c9958589cb3d844b5f9e67081d3fd34f.scope: Deactivated successfully. Feb 23 02:51:57 localhost systemd[1]: libpod-692978a0abdadb6859e3ce5fbec434a4c9958589cb3d844b5f9e67081d3fd34f.scope: Consumed 3.592s CPU time. 
Feb 23 02:51:57 localhost podman[53738]: 2026-02-23 07:51:57.57553225 +0000 UTC m=+3.789345824 container died 692978a0abdadb6859e3ce5fbec434a4c9958589cb3d844b5f9e67081d3fd34f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, container_name=container-puppet-neutron, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-server, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, com.redhat.component=openstack-neutron-server-container, name=rhosp-rhel9/openstack-neutron-server, io.buildah.version=1.41.5, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:57:35Z, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, url=https://www.redhat.com, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:57:35Z) Feb 23 02:51:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-692978a0abdadb6859e3ce5fbec434a4c9958589cb3d844b5f9e67081d3fd34f-userdata-shm.mount: Deactivated successfully. Feb 23 02:51:57 localhost systemd[1]: var-lib-containers-storage-overlay-fcc3bf514cb0d82613183a63c18e99543a6861d40c81ca65529bad1fd0fed9a8-merged.mount: Deactivated successfully. 
Feb 23 02:51:57 localhost podman[54013]: 2026-02-23 07:51:57.682866337 +0000 UTC m=+0.099571891 container cleanup 692978a0abdadb6859e3ce5fbec434a4c9958589cb3d844b5f9e67081d3fd34f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, name=rhosp-rhel9/openstack-neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-server-container, summary=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2026-01-12T22:57:35Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:57:35Z, io.openshift.expose-services=, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, vcs-type=git, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, container_name=container-puppet-neutron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 
'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Feb 23 02:51:57 localhost systemd[1]: libpod-conmon-692978a0abdadb6859e3ce5fbec434a4c9958589cb3d844b5f9e67081d3fd34f.scope: Deactivated successfully. 
Feb 23 02:51:57 localhost python3[52051]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005626465 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::ovn_metadata#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005626465', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Feb 23 02:51:58 localhost python3[54066]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:51:59 localhost python3[54098]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 02:52:00 localhost python3[54148]: ansible-ansible.legacy.stat 
Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:52:00 localhost python3[54191]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833119.6979399-84597-163254435717110/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:52:00 localhost python3[54253]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:52:01 localhost python3[54296]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833120.556672-84597-139683846368806/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:52:01 localhost python3[54358]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:52:02 localhost python3[54401]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833121.4916382-84638-190174352971887/source 
dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:52:02 localhost python3[54463]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:52:03 localhost python3[54506]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833122.3550966-84659-137277666961080/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:52:03 localhost python3[54536]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 02:52:03 localhost systemd[1]: Reloading. Feb 23 02:52:03 localhost systemd-rc-local-generator[54556]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:52:03 localhost systemd-sysv-generator[54560]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 02:52:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:52:03 localhost systemd[1]: Reloading. Feb 23 02:52:03 localhost systemd-rc-local-generator[54601]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:52:03 localhost systemd-sysv-generator[54605]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:52:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:52:04 localhost systemd[1]: Starting TripleO Container Shutdown... Feb 23 02:52:04 localhost systemd[1]: Finished TripleO Container Shutdown. Feb 23 02:52:04 localhost python3[54660]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:52:05 localhost python3[54703]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833124.307304-84965-203156632103773/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:52:05 localhost python3[54765]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False 
get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:52:05 localhost python3[54808]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833125.2653553-84995-183955050739492/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:52:06 localhost python3[54838]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 02:52:06 localhost systemd[1]: Reloading.
Feb 23 02:52:06 localhost systemd-rc-local-generator[54864]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 02:52:06 localhost systemd-sysv-generator[54867]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 02:52:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 02:52:06 localhost systemd[1]: Reloading.
Feb 23 02:52:07 localhost systemd-rc-local-generator[54900]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 02:52:07 localhost systemd-sysv-generator[54906]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility.
Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 02:52:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 02:52:07 localhost systemd[1]: Starting Create netns directory...
Feb 23 02:52:07 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 23 02:52:07 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 23 02:52:07 localhost systemd[1]: Finished Create netns directory.
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: b2fe9ad44af593cfea29d5504ea414bc
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: 4767aaabc3de112d8791c290aa2b669d
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: a2261a69f76ac41646722c019ecc270e
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: d8e86b11aed37635c57249fefb951044
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: d8e86b11aed37635c57249fefb951044
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: d8e86b11aed37635c57249fefb951044
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change
detected for nova_virtqemud, new hash: d8e86b11aed37635c57249fefb951044
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: d8e86b11aed37635c57249fefb951044
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: d8e86b11aed37635c57249fefb951044
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: eeb65e0a12c94af5b1e666d55df1d6ee
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: 209b2ea170f45545f80720644a8137d3
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: 209b2ea170f45545f80720644a8137d3
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: d8e86b11aed37635c57249fefb951044
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: d8e86b11aed37635c57249fefb951044
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: c586877f5206c4d4c0260095c70d518d
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044
Feb 23 02:52:07 localhost python3[54930]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash:
d8e86b11aed37635c57249fefb951044
Feb 23 02:52:09 localhost python3[54987]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Feb 23 02:52:09 localhost podman[55029]: 2026-02-23 07:52:09.482248804 +0000 UTC m=+0.064679374 container create 0657aeac1b69de75969f8c422da30171d1c2ce5781eebfe9574639f1d8d50287 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, release=1766032510, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr_init_logs, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, distribution-scope=public, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1
qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 23 02:52:09 localhost systemd[1]: Started libpod-conmon-0657aeac1b69de75969f8c422da30171d1c2ce5781eebfe9574639f1d8d50287.scope.
Feb 23 02:52:09 localhost podman[55029]: 2026-02-23 07:52:09.449298899 +0000 UTC m=+0.031729539 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Feb 23 02:52:09 localhost systemd[1]: Started libcrun container.
Feb 23 02:52:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b13d64e66383f1a291151c41e5e70949d2f13f8c208eba9695a66cb2ad90dee0/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff)
Feb 23 02:52:09 localhost podman[55029]: 2026-02-23 07:52:09.573559391 +0000 UTC m=+0.155989961 container init 0657aeac1b69de75969f8c422da30171d1c2ce5781eebfe9574639f1d8d50287 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr_init_logs, vcs-type=git, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd,
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com)
Feb 23 02:52:09 localhost podman[55029]: 2026-02-23 07:52:09.5848709 +0000 UTC m=+0.167301480 container start 0657aeac1b69de75969f8c422da30171d1c2ce5781eebfe9574639f1d8d50287 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, batch=17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1
qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr_init_logs, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, version=17.1.13, vcs-type=git, tcib_managed=true)
Feb 23 02:52:09 localhost podman[55029]: 2026-02-23 07:52:09.585220011 +0000 UTC m=+0.167650601 container attach 0657aeac1b69de75969f8c422da30171d1c2ce5781eebfe9574639f1d8d50287 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, maintainer=OpenStack TripleO Team, version=17.1.13, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.5, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1,
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr_init_logs, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, vcs-type=git, com.redhat.component=openstack-qdrouterd-container)
Feb 23 02:52:09 localhost systemd[1]: libpod-0657aeac1b69de75969f8c422da30171d1c2ce5781eebfe9574639f1d8d50287.scope: Deactivated successfully.
Feb 23 02:52:09 localhost podman[55029]: 2026-02-23 07:52:09.590320023 +0000 UTC m=+0.172750653 container died 0657aeac1b69de75969f8c422da30171d1c2ce5781eebfe9574639f1d8d50287 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vcs-type=git, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd,
config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr_init_logs)
Feb 23 02:52:09 localhost podman[55048]: 2026-02-23 07:52:09.665182068 +0000 UTC m=+0.063309920 container cleanup 0657aeac1b69de75969f8c422da30171d1c2ce5781eebfe9574639f1d8d50287 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, build-date=2026-01-12T22:10:14Z, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1',
'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, release=1766032510, io.openshift.expose-services=, container_name=metrics_qdr_init_logs, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64)
Feb 23 02:52:09 localhost systemd[1]: libpod-conmon-0657aeac1b69de75969f8c422da30171d1c2ce5781eebfe9574639f1d8d50287.scope: Deactivated successfully.
Feb 23 02:52:09 localhost python3[54987]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd
Feb 23 02:52:10 localhost podman[55121]: 2026-02-23 07:52:10.010290748 +0000 UTC m=+0.057569348 container create 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., io.buildah.version=1.41.5, version=17.1.13, url=https://www.redhat.com, container_name=metrics_qdr, release=1766032510,
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, 
batch=17.1_20260112.1, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 23 02:52:10 localhost systemd[1]: Started libpod-conmon-779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.scope.
Feb 23 02:52:10 localhost systemd[1]: Started libcrun container.
Feb 23 02:52:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e3b6cf8a686f25c71602c058fe0b6ad924b3e6f22bfe2d699e90cd91e187aeb/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff)
Feb 23 02:52:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e3b6cf8a686f25c71602c058fe0b6ad924b3e6f22bfe2d699e90cd91e187aeb/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff)
Feb 23 02:52:10 localhost podman[55121]: 2026-02-23 07:52:09.982203397 +0000 UTC m=+0.029482007 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Feb 23 02:52:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.
Feb 23 02:52:10 localhost podman[55121]: 2026-02-23 07:52:10.090218565 +0000 UTC m=+0.137497225 container init 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, release=1766032510, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Feb 23 02:52:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.
Feb 23 02:52:10 localhost podman[55121]: 2026-02-23 07:52:10.125749022 +0000 UTC m=+0.173027652 container start 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro',
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, tcib_managed=true, version=17.1.13, vendor=Red Hat, Inc.) 
Feb 23 02:52:10 localhost python3[54987]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=b2fe9ad44af593cfea29d5504ea414bc --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Feb 23 02:52:10 localhost podman[55143]: 2026-02-23 07:52:10.216853703 +0000 UTC m=+0.083466530 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, version=17.1.13, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro',
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd) Feb 23 02:52:10 localhost podman[55143]: 2026-02-23 07:52:10.457788137 +0000 UTC m=+0.324400964 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1766032510, vendor=Red Hat, Inc., distribution-scope=public) Feb 23 02:52:10 localhost systemd[1]: 
779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 02:52:10 localhost systemd[1]: var-lib-containers-storage-overlay-b13d64e66383f1a291151c41e5e70949d2f13f8c208eba9695a66cb2ad90dee0-merged.mount: Deactivated successfully. Feb 23 02:52:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0657aeac1b69de75969f8c422da30171d1c2ce5781eebfe9574639f1d8d50287-userdata-shm.mount: Deactivated successfully. Feb 23 02:52:10 localhost python3[55216]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:52:10 localhost python3[55232]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 02:52:11 localhost python3[55293]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833131.005309-85156-167245922380538/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:52:11 localhost python3[55309]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 23 02:52:11 localhost systemd[1]: Reloading. 
Feb 23 02:52:11 localhost systemd-rc-local-generator[55334]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:52:12 localhost systemd-sysv-generator[55339]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:52:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:52:12 localhost python3[55361]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 02:52:12 localhost systemd[1]: Reloading. Feb 23 02:52:12 localhost systemd-sysv-generator[55388]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:52:12 localhost systemd-rc-local-generator[55384]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:52:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:52:13 localhost systemd[1]: Starting metrics_qdr container... Feb 23 02:52:13 localhost systemd[1]: Started metrics_qdr container. 
Feb 23 02:52:13 localhost python3[55443]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:52:15 localhost python3[55564]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005626465 step=1 update_config_hash_only=False Feb 23 02:52:15 localhost python3[55580]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:52:15 localhost python3[55596]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Feb 23 02:52:25 localhost sshd[55597]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:52:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 02:52:41 localhost podman[55599]: 2026-02-23 07:52:41.015948656 +0000 UTC m=+0.088471545 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Feb 23 02:52:41 localhost podman[55599]: 2026-02-23 07:52:41.244934979 +0000 UTC m=+0.317457868 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, build-date=2026-01-12T22:10:14Z, release=1766032510, url=https://www.redhat.com, batch=17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, managed_by=tripleo_ansible) Feb 23 02:52:41 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 02:53:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 02:53:12 localhost systemd[1]: tmp-crun.zGvOJm.mount: Deactivated successfully. 
Feb 23 02:53:12 localhost podman[55705]: 2026-02-23 07:53:12.020616434 +0000 UTC m=+0.093204378 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, version=17.1.13, 
maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, container_name=metrics_qdr, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1766032510) Feb 23 02:53:12 localhost podman[55705]: 2026-02-23 07:53:12.235554144 +0000 UTC m=+0.308142118 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., version=17.1.13, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, distribution-scope=public, release=1766032510, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 02:53:12 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 02:53:16 localhost sshd[55734]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:53:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 02:53:43 localhost systemd[1]: tmp-crun.AcS81a.mount: Deactivated successfully. 
Feb 23 02:53:43 localhost podman[55736]: 2026-02-23 07:53:43.014282797 +0000 UTC m=+0.086806862 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, container_name=metrics_qdr, 
konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=) Feb 23 02:53:43 localhost podman[55736]: 2026-02-23 07:53:43.20006664 +0000 UTC m=+0.272590645 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vcs-type=git, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vendor=Red Hat, Inc., batch=17.1_20260112.1) Feb 23 02:53:43 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 02:53:52 localhost sshd[55765]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:53:52 localhost sshd[55766]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:54:02 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:d6:2e:8d MACPROTO=0800 SRC=65.49.1.153 DST=38.102.83.142 LEN=40 TOS=0x00 PREC=0x00 TTL=239 ID=54321 PROTO=TCP SPT=52755 DPT=9090 SEQ=2679373449 ACK=0 WINDOW=65535 RES=0x00 SYN URGP=0 Feb 23 02:54:07 localhost sshd[55843]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:54:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 02:54:14 localhost podman[55845]: 2026-02-23 07:54:14.005891463 +0000 UTC m=+0.081338546 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, version=17.1.13, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public) Feb 23 02:54:14 localhost podman[55845]: 2026-02-23 07:54:14.196843152 +0000 UTC m=+0.272290245 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, 
container_name=metrics_qdr, url=https://www.redhat.com, vcs-type=git, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 
qdrouterd) Feb 23 02:54:14 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 02:54:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 02:54:45 localhost systemd[1]: tmp-crun.qZWwjn.mount: Deactivated successfully. Feb 23 02:54:45 localhost podman[55876]: 2026-02-23 07:54:45.006950597 +0000 UTC m=+0.085016415 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 02:54:45 localhost podman[55876]: 2026-02-23 07:54:45.167698269 +0000 UTC m=+0.245764137 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2026-01-12T22:10:14Z) Feb 23 02:54:45 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 02:54:57 localhost sshd[55905]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:55:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 02:55:15 localhost systemd[1]: tmp-crun.WzVje6.mount: Deactivated successfully. Feb 23 02:55:16 localhost podman[55983]: 2026-02-23 07:55:16.000635809 +0000 UTC m=+0.082529278 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Feb 23 02:55:16 localhost podman[55983]: 2026-02-23 07:55:16.180500905 +0000 UTC m=+0.262394344 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, distribution-scope=public, 
vendor=Red Hat, Inc., config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr) Feb 23 02:55:16 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 02:55:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 02:55:46 localhost systemd[1]: tmp-crun.MEpAp0.mount: Deactivated successfully. Feb 23 02:55:47 localhost podman[56012]: 2026-02-23 07:55:47.003542554 +0000 UTC m=+0.082325631 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.5, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git) Feb 23 02:55:47 localhost podman[56012]: 2026-02-23 07:55:47.186730162 +0000 UTC m=+0.265513169 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc.) Feb 23 02:55:47 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 02:55:47 localhost sshd[56039]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:56:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 02:56:17 localhost podman[56118]: 2026-02-23 07:56:17.994701532 +0000 UTC m=+0.073948985 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, tcib_managed=true, container_name=metrics_qdr, config_id=tripleo_step1, release=1766032510, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, 
io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 02:56:18 localhost podman[56118]: 2026-02-23 07:56:18.182311056 +0000 UTC m=+0.261558479 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, vcs-type=git, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vendor=Red Hat, Inc., container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, managed_by=tripleo_ansible, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 02:56:18 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 02:56:40 localhost sshd[56147]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:56:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 02:56:49 localhost systemd[1]: tmp-crun.v5CC1U.mount: Deactivated successfully. 
Feb 23 02:56:49 localhost podman[56149]: 2026-02-23 07:56:49.236726771 +0000 UTC m=+0.073515491 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.13, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-type=git, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.13 
17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1) Feb 23 02:56:49 localhost podman[56149]: 2026-02-23 07:56:49.465950759 +0000 UTC m=+0.302739519 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, io.openshift.expose-services=, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 02:56:49 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 02:57:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 02:57:19 localhost systemd[1]: tmp-crun.cvJFmc.mount: Deactivated successfully. 
Feb 23 02:57:20 localhost podman[56254]: 2026-02-23 07:57:20.000913244 +0000 UTC m=+0.076321182 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, version=17.1.13, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 02:57:20 localhost podman[56254]: 2026-02-23 07:57:20.179826559 +0000 UTC m=+0.255234517 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, version=17.1.13, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, release=1766032510, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 02:57:20 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 02:57:21 localhost ceph-osd[32652]: osd.3 pg_epoch: 20 pg[2.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [3,5,1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:22 localhost ceph-osd[32652]: osd.3 pg_epoch: 21 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [3,5,1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:24 localhost ceph-osd[32652]: osd.3 pg_epoch: 22 pg[3.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [5,3,1] r=1 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:26 localhost ceph-osd[31709]: osd.0 pg_epoch: 24 pg[4.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [4,0,5] r=1 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 27 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=27 pruub=10.004422188s) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active pruub 1113.520751953s@ mbc={}] start_peering_interval up [3,5,1] -> [3,5,1], acting [3,5,1] -> [3,5,1], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:57:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 27 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=27 pruub=10.004422188s) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown pruub 1113.520751953s@ mbc={}] state: transitioning to Primary Feb 23 02:57:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 26 pg[5.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [2,4,3] r=2 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 27 pg[3.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 
les/c/f=23/23/0 sis=27 pruub=12.105687141s) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 active pruub 1115.626586914s@ mbc={}] start_peering_interval up [5,3,1] -> [5,3,1], acting [5,3,1] -> [5,3,1], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:57:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 27 pg[3.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=27 pruub=12.102222443s) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1115.626586914s@ mbc={}] state: transitioning to Stray Feb 23 02:57:28 localhost sshd[56284]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.19( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.19( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.18( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.17( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.18( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.16( 
empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.17( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.15( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.14( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.14( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.13( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.12( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.15( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.12( empty local-lis/les=20/21 
n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.13( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.11( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.10( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.10( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.11( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.f( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.e( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.f( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 
les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.e( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.c( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.d( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.c( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.b( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.d( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.a( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.a( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 
lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.b( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.3( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.2( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.1( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.1( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.6( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.7( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.16( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 
0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.2( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.3( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.4( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.5( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.5( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.4( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.6( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.7( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: 
transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.9( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.8( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.9( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.1a( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.8( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.1b( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.1a( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.1b( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 
02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.1d( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.1c( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.1c( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.1d( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.1f( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.1e( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.1f( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[3.1e( empty local-lis/les=22/23 n=0 ec=27/22 lis/c=22/22 les/c/f=23/23/0 sis=27) [5,3,1] r=1 lpr=27 pi=[22,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 23 02:57:29 localhost 
ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.0( empty local-lis/les=27/28 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.19( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.17( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.16( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.13( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.18( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.12( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.15( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 
pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.10( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.11( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.e( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.14( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.d( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.c( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.a( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 
pg_epoch: 28 pg[2.f( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.b( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.4( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.1( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.6( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.3( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.2( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.5( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 
active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.8( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.1b( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.1a( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.9( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.1c( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.7( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.1f( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.1e( empty 
local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 28 pg[2.1d( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=27) [3,5,1] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:29 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.0 scrub starts Feb 23 02:57:29 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.0 scrub ok Feb 23 02:57:33 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.19 scrub starts Feb 23 02:57:33 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.19 scrub ok Feb 23 02:57:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[2.16( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [0,2,1] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[2.11( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [0,1,2] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[3.f( empty local-lis/les=0/0 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [0,5,4] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[3.c( empty local-lis/les=0/0 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [0,4,5] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[3.3( empty local-lis/les=0/0 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [0,4,5] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] 
state: transitioning to Primary Feb 23 02:57:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[3.1c( empty local-lis/les=0/0 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [0,4,2] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.1d( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924713135s) [5,0,1] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.577758789s@ mbc={}] start_peering_interval up [5,3,1] -> [5,0,1], acting [5,3,1] -> [5,0,1], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.1d( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924661636s) [5,0,1] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.577758789s@ mbc={}] state: transitioning to Stray Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.1b( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.929737091s) [5,0,4] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.582885742s@ mbc={}] start_peering_interval up [3,5,1] -> [5,0,4], acting [3,5,1] -> [5,0,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.1b( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.929671288s) [5,0,4] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.582885742s@ mbc={}] state: transitioning to Stray Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.1a( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.923300743s) [5,1,0] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 
1120.576660156s@ mbc={}] start_peering_interval up [5,3,1] -> [5,1,0], acting [5,3,1] -> [5,1,0], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.1a( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.923280716s) [5,1,0] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.576660156s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.3( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.919546127s) [0,4,5] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.573242188s@ mbc={}] start_peering_interval up [5,3,1] -> [0,4,5], acting [5,3,1] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.3( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.919496536s) [0,4,5] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.573242188s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.2( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.928712845s) [3,1,2] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.582519531s@ mbc={}] start_peering_interval up [3,5,1] -> [3,1,2], acting [3,5,1] -> [3,1,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[2.8( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [0,2,1] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.2( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.928712845s) [3,1,2] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown pruub 1120.582519531s@ mbc={}] state: transitioning to Primary
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.7( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.933066368s) [3,2,1] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.586914062s@ mbc={}] start_peering_interval up [3,5,1] -> [3,2,1], acting [3,5,1] -> [3,2,1], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.7( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.933066368s) [3,2,1] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown pruub 1120.586914062s@ mbc={}] state: transitioning to Primary
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.3( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.928177834s) [3,4,5] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.582397461s@ mbc={}] start_peering_interval up [3,5,1] -> [3,4,5], acting [3,5,1] -> [3,4,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.3( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.928177834s) [3,4,5] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown pruub 1120.582397461s@ mbc={}] state: transitioning to Primary
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.a( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.918286324s) [3,1,5] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.572631836s@ mbc={}] start_peering_interval up [5,3,1] -> [3,1,5], acting [5,3,1] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.5( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.920247078s) [3,4,5] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.574584961s@ mbc={}] start_peering_interval up [5,3,1] -> [3,4,5], acting [5,3,1] -> [3,4,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.a( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.918286324s) [3,1,5] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown pruub 1120.572631836s@ mbc={}] state: transitioning to Primary
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.5( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.920247078s) [3,4,5] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown pruub 1120.574584961s@ mbc={}] state: transitioning to Primary
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.b( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.927489281s) [5,3,4] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.582031250s@ mbc={}] start_peering_interval up [3,5,1] -> [5,3,4], acting [3,5,1] -> [5,3,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.b( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.927466393s) [5,3,4] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.582031250s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.d( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917458534s) [3,2,4] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.572143555s@ mbc={}] start_peering_interval up [5,3,1] -> [3,2,4], acting [5,3,1] -> [3,2,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.d( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917458534s) [3,2,4] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown pruub 1120.572143555s@ mbc={}] state: transitioning to Primary
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.d( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.926769257s) [5,0,1] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.581542969s@ mbc={}] start_peering_interval up [3,5,1] -> [5,0,1], acting [3,5,1] -> [5,0,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.d( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.926731110s) [5,0,1] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.581542969s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.c( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917297363s) [0,4,5] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.572021484s@ mbc={}] start_peering_interval up [5,3,1] -> [0,4,5], acting [5,3,1] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.c( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917165756s) [0,4,5] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.572021484s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.11( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.916308403s) [5,0,4] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.571289062s@ mbc={}] start_peering_interval up [5,3,1] -> [5,0,4], acting [5,3,1] -> [5,0,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.11( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.916289330s) [5,0,4] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.571289062s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.f( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917590141s) [0,5,4] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.572631836s@ mbc={}] start_peering_interval up [5,3,1] -> [0,5,4], acting [5,3,1] -> [0,5,4], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.10( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.916083336s) [3,5,4] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.571289062s@ mbc={}] start_peering_interval up [5,3,1] -> [3,5,4], acting [5,3,1] -> [3,5,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.13( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.915623665s) [3,4,2] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.570800781s@ mbc={}] start_peering_interval up [5,3,1] -> [3,4,2], acting [5,3,1] -> [3,4,2], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.f( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917514801s) [0,5,4] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.572631836s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.10( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.916083336s) [3,5,4] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown pruub 1120.571289062s@ mbc={}] state: transitioning to Primary
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.13( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.915623665s) [3,4,2] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown pruub 1120.570800781s@ mbc={}] state: transitioning to Primary
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.11( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.925813675s) [0,1,2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.581176758s@ mbc={}] start_peering_interval up [3,5,1] -> [0,1,2], acting [3,5,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.12( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.925561905s) [5,1,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.580932617s@ mbc={}] start_peering_interval up [3,5,1] -> [5,1,3], acting [3,5,1] -> [5,1,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.14( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.925952911s) [3,2,1] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.581298828s@ mbc={}] start_peering_interval up [3,5,1] -> [3,2,1], acting [3,5,1] -> [3,2,1], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.14( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.914832115s) [3,2,4] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.570312500s@ mbc={}] start_peering_interval up [5,3,1] -> [3,2,4], acting [5,3,1] -> [3,2,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.11( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.925767899s) [0,1,2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.581176758s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.14( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.925952911s) [3,2,1] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown pruub 1120.581298828s@ mbc={}] state: transitioning to Primary
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.12( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.925525665s) [5,1,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.580932617s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.14( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.914832115s) [3,2,4] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown pruub 1120.570312500s@ mbc={}] state: transitioning to Primary
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.16( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924961090s) [0,2,1] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.580688477s@ mbc={}] start_peering_interval up [3,5,1] -> [0,2,1], acting [3,5,1] -> [0,2,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.16( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924926758s) [0,2,1] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.580688477s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.15( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.925275803s) [5,1,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.580932617s@ mbc={}] start_peering_interval up [3,5,1] -> [5,1,3], acting [3,5,1] -> [5,1,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.15( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.925223351s) [5,1,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.580932617s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.17( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924767494s) [3,5,4] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.580566406s@ mbc={}] start_peering_interval up [3,5,1] -> [3,5,4], acting [3,5,1] -> [3,5,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.16( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917411804s) [3,1,5] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.573364258s@ mbc={}] start_peering_interval up [5,3,1] -> [3,1,5], acting [5,3,1] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.19( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.921234131s) [4,0,2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.577148438s@ mbc={}] start_peering_interval up [3,5,1] -> [4,0,2], acting [3,5,1] -> [4,0,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.17( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924767494s) [3,5,4] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown pruub 1120.580566406s@ mbc={}] state: transitioning to Primary
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.18( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924818039s) [5,3,4] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.580810547s@ mbc={}] start_peering_interval up [3,5,1] -> [5,3,4], acting [3,5,1] -> [5,3,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.16( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917411804s) [3,1,5] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown pruub 1120.573364258s@ mbc={}] state: transitioning to Primary
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.19( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.921191216s) [4,0,2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.577148438s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.18( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924781799s) [5,3,4] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.580810547s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.18( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.914155960s) [4,2,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.570312500s@ mbc={}] start_peering_interval up [5,3,1] -> [4,2,3], acting [5,3,1] -> [4,2,3], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.18( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.914131165s) [4,2,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.570312500s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.19( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.913958549s) [1,0,2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.570190430s@ mbc={}] start_peering_interval up [5,3,1] -> [1,0,2], acting [5,3,1] -> [1,0,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.19( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.913926125s) [1,0,2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.570190430s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.15( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.914136887s) [2,3,4] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.570434570s@ mbc={}] start_peering_interval up [5,3,1] -> [2,3,4], acting [5,3,1] -> [2,3,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.15( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.914100647s) [2,3,4] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.570434570s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.13( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924345970s) [2,0,4] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.580688477s@ mbc={}] start_peering_interval up [3,5,1] -> [2,0,4], acting [3,5,1] -> [2,0,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.13( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924324036s) [2,0,4] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.580688477s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.12( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.914756775s) [1,3,5] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.571289062s@ mbc={}] start_peering_interval up [5,3,1] -> [1,3,5], acting [5,3,1] -> [1,3,5], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.12( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.914725304s) [1,3,5] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.571289062s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.10( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924576759s) [2,1,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.581054688s@ mbc={}] start_peering_interval up [3,5,1] -> [2,1,3], acting [3,5,1] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.10( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924538612s) [2,1,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.581054688s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.e( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.914921761s) [2,3,1] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.571655273s@ mbc={}] start_peering_interval up [5,3,1] -> [2,3,1], acting [5,3,1] -> [2,3,1], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.f( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.925112724s) [2,0,4] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.581665039s@ mbc={}] start_peering_interval up [3,5,1] -> [2,0,4], acting [3,5,1] -> [2,0,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.e( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.914889336s) [2,3,1] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.571655273s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.f( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.925021172s) [2,0,4] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.581665039s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.e( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924571037s) [4,2,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.581298828s@ mbc={}] start_peering_interval up [3,5,1] -> [4,2,3], acting [3,5,1] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.e( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924536705s) [4,2,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.581298828s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.c( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924591064s) [2,1,0] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.581542969s@ mbc={}] start_peering_interval up [3,5,1] -> [2,1,0], acting [3,5,1] -> [2,1,0], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.17( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.913134575s) [1,3,5] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.570190430s@ mbc={}] start_peering_interval up [5,3,1] -> [1,3,5], acting [5,3,1] -> [1,3,5], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.a( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924489975s) [2,4,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.581665039s@ mbc={}] start_peering_interval up [3,5,1] -> [2,4,3], acting [3,5,1] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.17( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.913082123s) [1,3,5] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.570190430s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.c( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924377441s) [2,1,0] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.581542969s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.a( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924435616s) [2,4,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.581665039s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.b( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.915564537s) [4,0,5] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.572875977s@ mbc={}] start_peering_interval up [5,3,1] -> [4,0,5], acting [5,3,1] -> [4,0,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.b( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.915468216s) [4,0,5] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.572875977s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.2( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.915370941s) [4,5,0] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.573120117s@ mbc={}] start_peering_interval up [5,3,1] -> [4,5,0], acting [5,3,1] -> [4,5,0], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.2( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.915327072s) [4,5,0] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.573120117s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.1( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.915454865s) [4,3,2] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.573242188s@ mbc={}] start_peering_interval up [5,3,1] -> [4,3,2], acting [5,3,1] -> [4,3,2], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.1( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.915406227s) [4,3,2] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.573242188s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.1( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924228668s) [4,5,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.582153320s@ mbc={}] start_peering_interval up [3,5,1] -> [4,5,3], acting [3,5,1] -> [4,5,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.5( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924571991s) [2,4,0] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.582519531s@ mbc={}] start_peering_interval up [3,5,1] -> [2,4,0], acting [3,5,1] -> [2,4,0], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.5( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924550056s) [2,4,0] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.582519531s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.1( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924188614s) [4,5,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.582153320s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.6( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.915220261s) [4,0,5] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.573242188s@ mbc={}] start_peering_interval up [5,3,1] -> [4,0,5], acting [5,3,1] -> [4,0,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.4( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924048424s) [1,2,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.582153320s@ mbc={}] start_peering_interval up [3,5,1] -> [1,2,3], acting [3,5,1] -> [1,2,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.6( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.915149689s) [4,0,5] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.573242188s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.4( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.923961639s) [1,2,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.582153320s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.4( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917167664s) [1,2,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.575317383s@ mbc={}] start_peering_interval up [5,3,1] -> [1,2,3], acting [5,3,1] -> [1,2,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.4( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917145729s) [1,2,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.575317383s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.6( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.923826218s) [1,2,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.582153320s@ mbc={}] start_peering_interval up [3,5,1] -> [1,2,3], acting [3,5,1] -> [1,2,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.7( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.916831017s) [4,5,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.575195312s@ mbc={}] start_peering_interval up [5,3,1] -> [4,5,3], acting [5,3,1] -> [4,5,3], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.6( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.923783302s) [1,2,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.582153320s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.7( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.916786194s) [4,5,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.575195312s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.8( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924258232s) [0,2,1] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.582763672s@ mbc={}] start_peering_interval up [3,5,1] -> [0,2,1], acting [3,5,1] -> [0,2,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.9( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924408913s) [4,0,5] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.582885742s@ mbc={}] start_peering_interval up [3,5,1] -> [4,0,5], acting [3,5,1] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917653084s) [2,1,0] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.576171875s@ mbc={}] start_peering_interval up [5,3,1] -> [2,1,0], acting [5,3,1] -> [2,1,0], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917633057s) [2,1,0] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.576171875s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917386055s) [5,0,1] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.575927734s@ mbc={}] start_peering_interval up [5,3,1] -> [5,0,1], acting [5,3,1] -> [5,0,1], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.9( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924045563s) [4,0,5] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.582885742s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917039871s) [5,0,1] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.575927734s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.8( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.924195290s) [0,2,1] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.582763672s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.1d( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.927953720s) [2,3,4] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.587158203s@ mbc={}] start_peering_interval up [3,5,1] -> [2,3,4], acting [3,5,1] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.1d( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.927920341s) [2,3,4] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.587158203s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.1c( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917822838s) [0,4,2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.577148438s@ mbc={}] start_peering_interval up [5,3,1] -> [0,4,2], acting [5,3,1] -> [0,4,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.1f( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917993546s) [1,3,5] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.577270508s@ mbc={}] start_peering_interval up [5,3,1] -> [1,3,5], acting [5,3,1] -> [1,3,5], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.1f( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917782784s) [1,3,5] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.577270508s@ mbc={}] state: transitioning to Stray
Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.1e( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.927807808s) [4,5,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.587158203s@ mbc={}] start_peering_interval up [3,5,1] -> [4,5,3], acting [3,5,1] -> [4,5,3],
acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.1c( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.917753220s) [0,4,2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.577148438s@ mbc={}] state: transitioning to Stray Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.1e( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.918269157s) [4,2,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.577758789s@ mbc={}] start_peering_interval up [5,3,1] -> [4,2,3], acting [5,3,1] -> [4,2,3], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[3.1e( empty local-lis/les=27/28 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.918248177s) [4,2,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.577758789s@ mbc={}] state: transitioning to Stray Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.1f( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.927553177s) [4,3,2] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.587158203s@ mbc={}] start_peering_interval up [3,5,1] -> [4,3,2], acting [3,5,1] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.1f( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.927523613s) [4,3,2] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.587158203s@ mbc={}] state: transitioning to Stray Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.1c( empty local-lis/les=27/28 n=0 
ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.927299500s) [2,3,1] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.586914062s@ mbc={}] start_peering_interval up [3,5,1] -> [2,3,1], acting [3,5,1] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.1e( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.927578926s) [4,5,3] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.587158203s@ mbc={}] state: transitioning to Stray Feb 23 02:57:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 31 pg[2.1c( empty local-lis/les=27/28 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.927240372s) [2,3,1] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.586914062s@ mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[2.13( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [2,0,4] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[3.8( empty local-lis/les=0/0 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [2,1,0] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[2.c( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [2,1,0] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[2.f( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [2,0,4] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[2.5( empty local-lis/les=0/0 n=0 ec=27/20 
lis/c=27/27 les/c/f=28/28/0 sis=31) [2,4,0] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[3.11( empty local-lis/les=0/0 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [5,0,4] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[2.d( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [5,0,1] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[3.1b( empty local-lis/les=0/0 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [5,0,1] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[3.1a( empty local-lis/les=0/0 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [5,1,0] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[3.b( empty local-lis/les=0/0 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [4,0,5] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[2.1b( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [5,0,4] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[3.6( empty local-lis/les=0/0 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [4,0,5] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[3.1d( empty local-lis/les=0/0 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [5,0,1] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 
0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[2.19( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [4,0,2] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[3.2( empty local-lis/les=0/0 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [4,5,0] r=2 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[2.9( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [4,0,5] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 31 pg[3.19( empty local-lis/les=0/0 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [1,0,2] r=1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 32 pg[2.8( empty local-lis/les=31/32 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [0,2,1] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 32 pg[3.3( empty local-lis/les=31/32 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [0,4,5] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 32 pg[3.1c( empty local-lis/les=31/32 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [0,4,2] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 32 pg[3.c( empty local-lis/les=31/32 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [0,4,5] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 
active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 32 pg[3.f( empty local-lis/les=31/32 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [0,5,4] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 32 pg[3.a( empty local-lis/les=31/32 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [3,1,5] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 32 pg[2.11( empty local-lis/les=31/32 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [0,1,2] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 32 pg[2.16( empty local-lis/les=31/32 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [0,2,1] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 32 pg[3.16( empty local-lis/les=31/32 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [3,1,5] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 32 pg[3.5( empty local-lis/les=31/32 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [3,4,5] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 32 pg[2.3( empty local-lis/les=31/32 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [3,4,5] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 32 pg[3.10( empty 
local-lis/les=31/32 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [3,5,4] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 32 pg[2.17( empty local-lis/les=31/32 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [3,5,4] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 32 pg[2.2( empty local-lis/les=31/32 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [3,1,2] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 32 pg[3.d( empty local-lis/les=31/32 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [3,2,4] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 32 pg[3.14( empty local-lis/les=31/32 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [3,2,4] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 32 pg[3.13( empty local-lis/les=31/32 n=0 ec=27/22 lis/c=27/27 les/c/f=28/28/0 sis=31) [3,4,2] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 32 pg[2.14( empty local-lis/les=31/32 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [3,2,1] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 32 pg[2.7( empty local-lis/les=31/32 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=31) [3,2,1] r=0 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react 
AllReplicasActivated Activating complete Feb 23 02:57:36 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 3.f scrub starts Feb 23 02:57:43 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.1a scrub starts Feb 23 02:57:43 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.1a scrub ok Feb 23 02:57:44 localhost ceph-osd[31709]: osd.0 pg_epoch: 34 pg[7.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [0,5,4] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:57:44 localhost ceph-osd[31709]: osd.0 pg_epoch: 33 pg[6.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [4,0,2] r=1 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:57:45 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.17 scrub starts Feb 23 02:57:45 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.17 scrub ok Feb 23 02:57:45 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 3.3 scrub starts Feb 23 02:57:45 localhost ceph-osd[31709]: osd.0 pg_epoch: 35 pg[7.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [0,5,4] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:57:46 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 3.14 scrub starts Feb 23 02:57:46 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 3.14 scrub ok Feb 23 02:57:48 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 3.c scrub starts Feb 23 02:57:48 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 3.c scrub ok Feb 23 02:57:50 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 2.16 scrub starts Feb 23 02:57:50 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 2.16 scrub ok Feb 23 02:57:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 02:57:51 localhost podman[56303]: 2026-02-23 07:57:51.005295603 +0000 UTC m=+0.078051725 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, architecture=x86_64, release=1766032510, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_id=tripleo_step1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 02:57:51 localhost podman[56303]: 2026-02-23 07:57:51.200775451 +0000 UTC m=+0.273531503 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, release=1766032510, vcs-type=git, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, version=17.1.13, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 02:57:51 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 02:57:58 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 2.11 scrub starts Feb 23 02:57:58 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.14 scrub starts Feb 23 02:57:58 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 2.11 scrub ok Feb 23 02:58:00 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 3.16 scrub starts Feb 23 02:58:00 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 3.16 scrub ok Feb 23 02:58:02 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 3.13 deep-scrub starts Feb 23 02:58:02 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 3.13 deep-scrub ok Feb 23 02:58:02 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 2.8 deep-scrub starts Feb 23 02:58:02 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 2.8 deep-scrub ok Feb 23 02:58:03 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 3.1c scrub starts Feb 23 02:58:03 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 3.1c scrub ok Feb 23 02:58:04 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 3.10 scrub starts Feb 23 02:58:04 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 3.10 scrub ok Feb 23 02:58:06 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 3.f scrub starts Feb 23 02:58:06 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 3.f scrub ok Feb 23 02:58:06 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 3.d scrub starts Feb 23 02:58:06 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 3.d scrub ok Feb 23 02:58:16 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.3 scrub starts Feb 23 02:58:16 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.3 scrub ok Feb 23 02:58:17 localhost sshd[56363]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:58:18 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.2 scrub starts Feb 23 02:58:18 localhost ceph-osd[31709]: log_channel(cluster) 
log [DBG] : 3.3 scrub starts Feb 23 02:58:18 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.2 scrub ok Feb 23 02:58:18 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 3.3 scrub ok Feb 23 02:58:19 localhost python3[56380]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:58:20 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 3.5 scrub starts Feb 23 02:58:20 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 3.5 scrub ok Feb 23 02:58:21 localhost python3[56396]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:58:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 02:58:22 localhost systemd[1]: tmp-crun.RWx4C2.mount: Deactivated successfully. 
Feb 23 02:58:22 localhost podman[56397]: 2026-02-23 07:58:22.003397635 +0000 UTC m=+0.081649797 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat 
OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.5, version=17.1.13, build-date=2026-01-12T22:10:14Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container) Feb 23 02:58:22 localhost podman[56397]: 2026-02-23 07:58:22.202483494 +0000 UTC m=+0.280735626 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-type=git, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 23 02:58:22 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully.
Feb 23 02:58:23 localhost python3[56441]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:58:26 localhost python3[56489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:58:26 localhost python3[56532]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833505.7896569-92678-240208838104732/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=bb97f2335ebfccbfb2bd8d50bbb589ce7e034c5d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:58:27 localhost ceph-osd[31709]: osd.0 pg_epoch: 39 pg[4.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=10.258841515s) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active pruub 1177.578125000s@ mbc={}] start_peering_interval up [4,0,5] -> [4,0,5], acting [4,0,5] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:27 localhost ceph-osd[31709]: osd.0 pg_epoch: 39 pg[4.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=10.256491661s) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1177.578125000s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:27 localhost ceph-osd[32652]: osd.3 pg_epoch: 39 pg[5.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=39 pruub=12.547172546s) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 active pruub 1175.786987305s@ mbc={}] start_peering_interval up [2,4,3] -> [2,4,3], acting [2,4,3] -> [2,4,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:27 localhost ceph-osd[32652]: osd.3 pg_epoch: 39 pg[5.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=39 pruub=12.544041634s) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1175.786987305s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.19( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.6( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.17( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.15( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.3( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.16( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.1d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.1f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.1e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [4,0,5] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.18( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.1a( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.19( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.1b( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.1d( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.f( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.1( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.2( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.3( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.5( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.7( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.6( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.4( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.c( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.b( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.9( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.8( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.16( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.14( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.17( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.13( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.12( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.1f( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.15( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.11( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.10( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.1e( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.a( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.e( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.1c( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 40 pg[5.d( empty local-lis/les=26/27 n=0 ec=39/26 lis/c=26/26 les/c/f=27/27/0 sis=39) [2,4,3] r=2 lpr=39 pi=[26,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:29 localhost ceph-osd[31709]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.741350174s) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1180.102416992s@ mbc={}] start_peering_interval up [4,0,2] -> [4,0,2], acting [4,0,2] -> [4,0,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:29 localhost ceph-osd[31709]: osd.0 pg_epoch: 41 pg[7.0( v 36'39 (0'0,36'39] local-lis/les=34/35 n=22 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=41 pruub=11.574381828s) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 36'38 mlcod 36'38 active pruub 1180.936157227s@ mbc={}] start_peering_interval up [0,5,4] -> [0,5,4], acting [0,5,4] -> [0,5,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:29 localhost ceph-osd[31709]: osd.0 pg_epoch: 41 pg[7.0( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=34/35 n=1 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=41 pruub=11.574381828s) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 36'38 mlcod 0'0 unknown pruub 1180.936157227s@ mbc={}] state: transitioning to Primary
Feb 23 02:58:29 localhost ceph-osd[31709]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.738380432s) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.102416992s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:30 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.6( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=34/35 n=2 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:30 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:30 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.1( v 36'39 (0'0,36'39] local-lis/les=34/35 n=2 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.18( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.2( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=34/35 n=2 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.f( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=34/35 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.a( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=34/35 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.9( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=34/35 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.c( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=34/35 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.12( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.d( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=34/35 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.b( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=34/35 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.8( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=34/35 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.19( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.7( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=34/35 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.3( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=34/35 n=2 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.4( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=34/35 n=2 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.e( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=34/35 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.10( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.16( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.1a( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.5( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=34/35 n=2 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[6.1b( empty local-lis/les=33/34 n=0 ec=41/33 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.0( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 36'38 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.7( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.3( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.e( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.8( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.1( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.b( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.c( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.2( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.a( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.d( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.9( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.6( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.4( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 23 02:58:31 localhost ceph-osd[31709]: osd.0 pg_epoch: 42 pg[7.5( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=34/34 les/c/f=35/35/0 sis=41) [0,5,4] r=0 lpr=41 pi=[34,41)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 23 02:58:31 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Feb 23 02:58:31 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Feb 23 02:58:31 localhost python3[56594]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:58:31 localhost python3[56637]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833511.0469344-92678-155994558418252/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=04bfb06bbb9d2445e353d8ca8467b47fb8316e81 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:58:32 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Feb 23 02:58:32 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Feb 23 02:58:33 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 3.a scrub starts
Feb 23 02:58:33 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 3.a scrub ok
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[7.1( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.838010788s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1186.549194336s@ mbc={}] start_peering_interval up [0,5,4] -> [0,2,4], acting [0,5,4] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost
ceph-osd[31709]: osd.0 pg_epoch: 43 pg[7.1( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.838010788s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown pruub 1186.549194336s@ mbc={}] state: transitioning to Primary Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.648598671s) [2,3,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.359741211s@ mbc={}] start_peering_interval up [4,0,5] -> [2,3,4], acting [4,0,5] -> [2,3,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.648550987s) [2,3,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.359741211s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.846628189s) [5,1,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.557983398s@ mbc={}] start_peering_interval up [4,0,2] -> [5,1,3], acting [4,0,2] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.846557617s) [5,1,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.557983398s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.659172058s) [1,0,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 
1184.370605469s@ mbc={}] start_peering_interval up [4,0,5] -> [1,0,5], acting [4,0,5] -> [1,0,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.659118652s) [1,0,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.370605469s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.846977234s) [4,5,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.558471680s@ mbc={}] start_peering_interval up [4,0,2] -> [4,5,3], acting [4,0,2] -> [4,5,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.7( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.846440315s) [3,1,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.558105469s@ mbc={}] start_peering_interval up [4,0,2] -> [3,1,2], acting [4,0,2] -> [3,1,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.658990860s) [2,3,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.370605469s@ mbc={}] start_peering_interval up [4,0,5] -> [2,3,1], acting [4,0,5] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.7( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 
pruub=12.846421242s) [3,1,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.558105469s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.846925735s) [4,5,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.558471680s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.658959389s) [2,3,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.370605469s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.651615143s) [3,5,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.363403320s@ mbc={}] start_peering_interval up [4,0,5] -> [3,5,1], acting [4,0,5] -> [3,5,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.651562691s) [3,5,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.363403320s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.18( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.847553253s) [4,3,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.559570312s@ mbc={}] start_peering_interval up [4,0,2] -> [4,3,2], acting [4,0,2] -> [4,3,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.1a( empty 
local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.647165298s) [3,4,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.359130859s@ mbc={}] start_peering_interval up [4,0,5] -> [3,4,5], acting [4,0,5] -> [3,4,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.18( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.847507477s) [4,3,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.559570312s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.647115707s) [3,4,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.359130859s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.3( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.847575188s) [0,5,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.559692383s@ mbc={}] start_peering_interval up [4,0,2] -> [0,5,1], acting [4,0,2] -> [0,5,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.647706032s) [2,0,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.359863281s@ mbc={}] start_peering_interval up [4,0,5] -> [2,0,1], acting [4,0,5] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 
les/c/f=42/42/0 sis=43 pruub=12.838550568s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1186.550903320s@ mbc={}] start_peering_interval up [0,5,4] -> [0,2,4], acting [0,5,4] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.846534729s) [3,1,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.558959961s@ mbc={}] start_peering_interval up [4,0,2] -> [3,1,2], acting [4,0,2] -> [3,1,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.3( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.847575188s) [0,5,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.559692383s@ mbc={}] state: transitioning to Primary Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.846499443s) [3,1,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.558959961s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.838550568s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown pruub 1186.550903320s@ mbc={}] state: transitioning to Primary Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.647419930s) [2,0,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.359863281s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost 
ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.847258568s) [4,5,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.559814453s@ mbc={}] start_peering_interval up [4,0,2] -> [4,5,0], acting [4,0,2] -> [4,5,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.847170830s) [4,5,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.559814453s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.651185036s) [0,4,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.364013672s@ mbc={}] start_peering_interval up [4,0,5] -> [0,4,2], acting [4,0,5] -> [0,4,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.646513939s) [2,3,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.359252930s@ mbc={}] start_peering_interval up [4,0,5] -> [2,3,4], acting [4,0,5] -> [2,3,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.847219467s) [5,0,4] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.560058594s@ mbc={}] start_peering_interval up [4,0,2] -> [5,0,4], acting [4,0,2] -> [5,0,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> 1, features 
acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.651185036s) [0,4,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1184.364013672s@ mbc={}] state: transitioning to Primary Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.847189903s) [5,0,4] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.560058594s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.646431923s) [2,3,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.359252930s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.846647263s) [4,3,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.559692383s@ mbc={}] start_peering_interval up [4,0,2] -> [4,3,5], acting [4,0,2] -> [4,3,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.846619606s) [4,3,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.559692383s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.650881767s) [2,1,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.364013672s@ mbc={}] start_peering_interval up [4,0,5] -> [2,1,3], acting [4,0,5] -> [2,1,3], 
acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.653088570s) [2,3,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.366210938s@ mbc={}] start_peering_interval up [4,0,5] -> [2,3,1], acting [4,0,5] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.650789261s) [2,1,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.364013672s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.653050423s) [2,3,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.366210938s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.845644951s) [1,3,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.558959961s@ mbc={}] start_peering_interval up [4,0,2] -> [1,3,2], acting [4,0,2] -> [1,3,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.846367836s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.559570312s@ mbc={}] start_peering_interval up [4,0,2] -> [2,0,4], acting [4,0,2] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> 1, features acting 
4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.846307755s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.559570312s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.652761459s) [5,1,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.366210938s@ mbc={}] start_peering_interval up [4,0,5] -> [5,1,0], acting [4,0,5] -> [5,1,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[7.9( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.836463928s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1186.550048828s@ mbc={}] start_peering_interval up [0,5,4] -> [0,2,4], acting [0,5,4] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.652709961s) [5,1,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.366210938s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[7.9( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.836463928s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown pruub 1186.550048828s@ mbc={}] state: transitioning to Primary Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 
sis=43 pruub=12.845927238s) [3,2,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.559448242s@ mbc={}] start_peering_interval up [4,0,2] -> [3,2,4], acting [4,0,2] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.845860481s) [3,2,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.559448242s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.845618248s) [1,3,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.558959961s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.846031189s) [0,4,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.559814453s@ mbc={}] start_peering_interval up [4,0,2] -> [0,4,2], acting [4,0,2] -> [0,4,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.652588844s) [0,1,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.366333008s@ mbc={}] start_peering_interval up [4,0,5] -> [0,1,2], acting [4,0,5] -> [0,1,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.652588844s) [0,1,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 
0'0 unknown pruub 1184.366333008s@ mbc={}] state: transitioning to Primary Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.846031189s) [0,4,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.559814453s@ mbc={}] state: transitioning to Primary Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.845048904s) [5,4,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.558959961s@ mbc={}] start_peering_interval up [4,0,2] -> [5,4,0], acting [4,0,2] -> [5,4,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.645887375s) [1,2,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.359741211s@ mbc={}] start_peering_interval up [4,0,5] -> [1,2,3], acting [4,0,5] -> [1,2,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.844981194s) [5,4,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.558959961s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.645842552s) [1,2,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.359741211s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[7.d( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43 
pruub=12.835710526s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1186.549804688s@ mbc={}] start_peering_interval up [0,5,4] -> [0,2,4], acting [0,5,4] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.844686508s) [1,0,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.558715820s@ mbc={}] start_peering_interval up [4,0,2] -> [1,0,5], acting [4,0,2] -> [1,0,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.651823997s) [5,4,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.366088867s@ mbc={}] start_peering_interval up [4,0,5] -> [5,4,0], acting [4,0,5] -> [5,4,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.645532608s) [3,5,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.359741211s@ mbc={}] start_peering_interval up [4,0,5] -> [3,5,1], acting [4,0,5] -> [3,5,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[7.3( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.834731102s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1186.548950195s@ mbc={}] start_peering_interval up [0,5,4] -> [0,2,4], acting [0,5,4] -> 
[0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.651780128s) [5,4,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.366088867s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.645501137s) [3,5,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.359741211s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[7.3( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.834731102s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown pruub 1186.548950195s@ mbc={}] state: transitioning to Primary Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.845410347s) [3,4,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.559814453s@ mbc={}] start_peering_interval up [4,0,2] -> [3,4,2], acting [4,0,2] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.12( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.844346046s) [5,0,1] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.558715820s@ mbc={}] start_peering_interval up [4,0,2] -> [5,0,1], acting [4,0,2] -> [5,0,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.c( empty 
local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.844398499s) [1,0,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.558715820s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.12( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.844315529s) [5,0,1] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.558715820s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.845323563s) [3,4,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.559814453s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.844021797s) [3,5,1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.558715820s@ mbc={}] start_peering_interval up [4,0,2] -> [3,5,1], acting [4,0,2] -> [3,5,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.17( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.649027824s) [1,3,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.363647461s@ mbc={}] start_peering_interval up [4,0,5] -> [1,3,5], acting [4,0,5] -> [1,3,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.843991280s) [3,5,1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.558715820s@ mbc={}] state: transitioning to Stray Feb 23 
02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[7.b( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.834438324s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1186.549316406s@ mbc={}] start_peering_interval up [0,5,4] -> [0,2,4], acting [0,5,4] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.a( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.843797684s) [0,4,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.558715820s@ mbc={}] start_peering_interval up [4,0,2] -> [0,4,2], acting [4,0,2] -> [0,4,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[7.b( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.834438324s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown pruub 1186.549316406s@ mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.17( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.648998260s) [1,3,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.363647461s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.a( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.843797684s) [0,4,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.558715820s@ mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[7.d( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.835710526s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown pruub 1186.549804688s@ mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.651525497s) [5,3,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.366821289s@ mbc={}] start_peering_interval up [4,0,5] -> [5,3,4], acting [4,0,5] -> [5,3,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.651486397s) [5,3,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.366821289s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.9( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.843883514s) [4,2,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.559448242s@ mbc={}] start_peering_interval up [4,0,2] -> [4,2,0], acting [4,0,2] -> [4,2,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.9( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.843838692s) [4,2,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.559448242s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.650512695s) [1,0,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.366088867s@ mbc={}] start_peering_interval up [4,0,5] -> [1,0,5], acting [4,0,5] -> [1,0,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.650478363s) [1,0,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.366088867s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.19( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.844038963s) [0,1,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.559692383s@ mbc={}] start_peering_interval up [4,0,2] -> [0,1,2], acting [4,0,2] -> [0,1,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.19( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.844038963s) [0,1,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.559692383s@ mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.650250435s) [1,2,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.366088867s@ mbc={}] start_peering_interval up [4,0,5] -> [1,2,3], acting [4,0,5] -> [1,2,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[7.7( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.833057404s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1186.548950195s@ mbc={}] start_peering_interval up [0,5,4] -> [0,2,4], acting [0,5,4] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.650209427s) [1,2,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.366088867s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[7.7( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.833057404s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown pruub 1186.548950195s@ mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.654646873s) [3,1,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.370483398s@ mbc={}] start_peering_interval up [4,0,5] -> [3,1,5], acting [4,0,5] -> [3,1,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.654610634s) [3,1,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.370483398s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.842796326s) [1,3,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.558715820s@ mbc={}] start_peering_interval up [4,0,2] -> [1,3,5], acting [4,0,2] -> [1,3,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.842743874s) [1,3,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.558715820s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[7.5( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.835083008s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1186.551147461s@ mbc={}] start_peering_interval up [0,5,4] -> [0,2,4], acting [0,5,4] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.647255898s) [1,5,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.363281250s@ mbc={}] start_peering_interval up [4,0,5] -> [1,5,3], acting [4,0,5] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.843581200s) [4,0,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.559692383s@ mbc={}] start_peering_interval up [4,0,2] -> [4,0,5], acting [4,0,2] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.843551636s) [4,0,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.559692383s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.647197723s) [1,5,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.363281250s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[7.5( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.835083008s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown pruub 1186.551147461s@ mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.647040367s) [5,1,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.363281250s@ mbc={}] start_peering_interval up [4,0,5] -> [5,1,0], acting [4,0,5] -> [5,1,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.5( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.841534615s) [0,2,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.557983398s@ mbc={}] start_peering_interval up [4,0,2] -> [0,2,1], acting [4,0,2] -> [0,2,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.646988869s) [5,1,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.363281250s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.5( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.841534615s) [0,2,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.557983398s@ mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.647053719s) [3,2,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.363525391s@ mbc={}] start_peering_interval up [4,0,5] -> [3,2,1], acting [4,0,5] -> [3,2,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.647051811s) [1,5,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.363403320s@ mbc={}] start_peering_interval up [4,0,5] -> [1,5,3], acting [4,0,5] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.842098236s) [4,5,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.558593750s@ mbc={}] start_peering_interval up [4,0,2] -> [4,5,0], acting [4,0,2] -> [4,5,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.647017479s) [3,2,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.363525391s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.646895409s) [1,5,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.363403320s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.842041969s) [4,5,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.558593750s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.841650009s) [1,2,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.558227539s@ mbc={}] start_peering_interval up [4,0,2] -> [1,2,3], acting [4,0,2] -> [1,2,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.649505615s) [4,0,2] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.366210938s@ mbc={}] start_peering_interval up [4,0,5] -> [4,0,2], acting [4,0,5] -> [4,0,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.649475098s) [4,0,2] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.366210938s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.841603279s) [1,2,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.558227539s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.653763771s) [1,5,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.370483398s@ mbc={}] start_peering_interval up [4,0,5] -> [1,5,3], acting [4,0,5] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.10( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.842826843s) [4,2,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.559570312s@ mbc={}] start_peering_interval up [4,0,2] -> [4,2,3], acting [4,0,2] -> [4,2,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.10( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.842771530s) [4,2,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.559570312s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.841308594s) [4,5,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.558227539s@ mbc={}] start_peering_interval up [4,0,2] -> [4,5,3], acting [4,0,2] -> [4,5,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.653726578s) [1,5,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.370483398s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.841280937s) [4,5,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.558227539s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.653594971s) [5,1,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.370605469s@ mbc={}] start_peering_interval up [4,0,5] -> [5,1,0], acting [4,0,5] -> [5,1,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.648882866s) [0,2,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.365966797s@ mbc={}] start_peering_interval up [4,0,5] -> [0,2,1], acting [4,0,5] -> [0,2,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.1a( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.840800285s) [3,2,1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.557861328s@ mbc={}] start_peering_interval up [4,0,2] -> [3,2,1], acting [4,0,2] -> [3,2,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.653569221s) [5,1,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.370605469s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.648882866s) [0,2,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1184.365966797s@ mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.1a( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.840762138s) [3,2,1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.557861328s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.16( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.840879440s) [4,0,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.558105469s@ mbc={}] start_peering_interval up [4,0,2] -> [4,0,5], acting [4,0,2] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.649009705s) [3,1,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.366333008s@ mbc={}] start_peering_interval up [4,0,5] -> [3,1,5], acting [4,0,5] -> [3,1,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.16( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.840841293s) [4,0,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.558105469s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.648981094s) [3,1,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.366333008s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.1b( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.841220856s) [5,3,1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.558471680s@ mbc={}] start_peering_interval up [4,0,2] -> [5,3,1], acting [4,0,2] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[6.1b( empty local-lis/les=41/42 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=12.841188431s) [5,3,1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.558471680s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.649457932s) [2,1,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1184.366821289s@ mbc={}] start_peering_interval up [4,0,5] -> [2,1,0], acting [4,0,5] -> [2,1,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.649378777s) [2,1,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.366821289s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.1b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,1,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,4,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,5,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,4,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,5,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[6.7( empty local-lis/les=0/0 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,1,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,1,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.d( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,2,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[6.8( empty local-lis/les=0/0 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,2,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[6.15( empty local-lis/les=0/0 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,5,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.649989128s) [1,2,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.251220703s@ mbc={}] start_peering_interval up [2,4,3] -> [1,2,3], acting [2,4,3] -> [1,2,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.649897575s) [1,2,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.251220703s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.647703171s) [1,3,2] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.249267578s@ mbc={}] start_peering_interval up [2,4,3] -> [1,3,2], acting [2,4,3] -> [1,3,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.647645950s) [1,3,2] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.249267578s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.646782875s) [3,5,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.249023438s@ mbc={}] start_peering_interval up [2,4,3] -> [3,5,1], acting [2,4,3] -> [3,5,1], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.646782875s) [3,5,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1180.249023438s@ mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[6.1a( empty local-lis/les=0/0 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,2,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,1,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.647071838s) [0,2,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.250854492s@ mbc={}] start_peering_interval up [2,4,3] -> [0,2,4], acting [2,4,3] -> [0,2,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.646944046s) [0,2,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.250854492s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.645194054s) [0,5,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.249389648s@ mbc={}] start_peering_interval up [2,4,3] -> [0,5,1], acting [2,4,3] -> [0,5,1], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.645139694s) [0,5,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.249389648s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.646492958s) [3,2,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.250976562s@ mbc={}] start_peering_interval up [2,4,3] -> [3,2,4], acting [2,4,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.644516945s) [5,3,4] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.249145508s@ mbc={}] start_peering_interval up [2,4,3] -> [5,3,4], acting [2,4,3] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.644471169s) [5,3,4] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.249145508s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.643373489s) [5,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.248168945s@ mbc={}] start_peering_interval up [2,4,3] -> [5,1,3], acting [2,4,3] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.644457817s) [3,4,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.249511719s@ mbc={}] start_peering_interval up [2,4,3] -> [3,4,5], acting [2,4,3] -> [3,4,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.643321037s) [5,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.248168945s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.642961502s) [4,2,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.248046875s@ mbc={}] start_peering_interval up [2,4,3] -> [4,2,0], acting [2,4,3] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.642922401s) [4,2,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.248046875s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.646492958s) [3,2,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1180.250976562s@ mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.643013954s) [3,1,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.248291016s@ mbc={}] start_peering_interval up [2,4,3] -> [3,1,2], acting [2,4,3] -> [3,1,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.642915726s) [4,5,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.248168945s@ mbc={}] start_peering_interval up [2,4,3] -> [4,5,0], acting [2,4,3] -> [4,5,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.643013954s) [3,1,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1180.248291016s@ mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[5.1c( empty local-lis/les=0/0 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,2,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.642874718s) [4,5,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.248168945s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.644457817s) [3,4,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1180.249511719s@ mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.642201424s) [5,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.247802734s@ mbc={}] start_peering_interval up [2,4,3] -> [5,1,3], acting [2,4,3] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.642156601s) [5,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.247802734s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.644207954s) [1,0,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.250000000s@ mbc={}] start_peering_interval up [2,4,3] -> [1,0,2], acting [2,4,3] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.642060280s) [0,5,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.247924805s@ mbc={}] start_peering_interval up [2,4,3] -> [0,5,1], acting [2,4,3] -> [0,5,1], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.644152641s) [1,0,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.250000000s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.641631126s) [5,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.247680664s@ mbc={}] start_peering_interval up [2,4,3] -> [5,1,3], acting [2,4,3] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.642027855s) [0,5,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.247924805s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.641597748s) [5,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.247680664s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[5.10( empty local-lis/les=0/0 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,5,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.641022682s) [3,1,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.247314453s@ mbc={}] start_peering_interval up [2,4,3] -> [3,1,5], acting [2,4,3] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.641022682s) [3,1,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1180.247314453s@ mbc={}] state: transitioning to Primary
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.640755653s) [4,3,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.247314453s@ mbc={}] start_peering_interval up [2,4,3] -> [4,3,5], acting [2,4,3] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.640701294s) [4,3,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.247314453s@ mbc={}] state: transitioning to Stray
Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.644686699s) [2,0,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.251342773s@ mbc={}] start_peering_interval up [2,4,3] -> [2,0,4], acting [2,4,3]
-> [2,0,4], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.644645691s) [2,0,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.251342773s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,5,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.640695572s) [0,2,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.247558594s@ mbc={}] start_peering_interval up [2,4,3] -> [0,2,4], acting [2,4,3] -> [0,2,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.640517235s) [0,1,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.247314453s@ mbc={}] start_peering_interval up [2,4,3] -> [0,1,2], acting [2,4,3] -> [0,1,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.640666962s) [0,2,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.247558594s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 
pruub=10.640383720s) [0,1,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.247314453s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.640479088s) [4,0,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.247558594s@ mbc={}] start_peering_interval up [2,4,3] -> [4,0,2], acting [2,4,3] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.639785767s) [1,0,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.246826172s@ mbc={}] start_peering_interval up [2,4,3] -> [1,0,5], acting [2,4,3] -> [1,0,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.640354156s) [4,0,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.247558594s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.641736031s) [4,3,2] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.249023438s@ mbc={}] start_peering_interval up [2,4,3] -> [4,3,2], acting [2,4,3] -> [4,3,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.639745712s) [1,0,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 
unknown NOTIFY pruub 1180.246826172s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.641702652s) [4,3,2] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.249023438s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.640276909s) [3,4,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.247436523s@ mbc={}] start_peering_interval up [2,4,3] -> [3,4,5], acting [2,4,3] -> [3,4,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,2,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.639653206s) [2,0,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.247070312s@ mbc={}] start_peering_interval up [2,4,3] -> [2,0,4], acting [2,4,3] -> [2,0,4], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.640276909s) [3,4,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1180.247436523s@ mbc={}] state: transitioning to Primary Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.639316559s) [0,4,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 
mlcod 0'0 active pruub 1180.246704102s@ mbc={}] start_peering_interval up [2,4,3] -> [0,4,2], acting [2,4,3] -> [0,4,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.639865875s) [1,3,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.247802734s@ mbc={}] start_peering_interval up [2,4,3] -> [1,3,5], acting [2,4,3] -> [1,3,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.639208794s) [0,4,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.246704102s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.639819145s) [1,3,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.247802734s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.639628410s) [2,0,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.247070312s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,1,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.633072853s) [0,2,4] r=-1 lpr=43 
pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1180.241333008s@ mbc={}] start_peering_interval up [2,4,3] -> [0,2,4], acting [2,4,3] -> [0,2,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:34 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=10.633018494s) [0,2,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.241333008s@ mbc={}] state: transitioning to Stray Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[5.1b( empty local-lis/les=0/0 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,4,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:58:34 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,2,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,3,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,0,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.7( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,5,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[6.b( empty local-lis/les=0/0 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,3,2] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray 
Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.17( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,3,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,0,2] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.12( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,5,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.10( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,2,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[6.13( empty local-lis/les=0/0 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,2,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,2,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,5,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[6.18( empty local-lis/les=0/0 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,3,2] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[6.1d( empty 
local-lis/les=0/0 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,5,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [2,3,4] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[6.14( empty local-lis/les=0/0 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,3,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[6.1c( empty local-lis/les=0/0 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [5,1,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[6.11( empty local-lis/les=0/0 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,5,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[5.17( empty local-lis/les=0/0 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,5,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[6.1b( empty local-lis/les=0/0 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [5,3,1] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[6.10( empty local-lis/les=0/0 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,2,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,3,4] r=1 
lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[5.6( empty local-lis/les=0/0 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,0,2] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,2,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [2,3,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [2,3,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.1d( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [2,3,4] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 43 pg[4.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [2,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [2,0,4] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[4.1a( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,4,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react 
AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 43 pg[5.d( empty local-lis/les=0/0 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [2,0,4] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[5.1( empty local-lis/les=43/44 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,4,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[5.15( empty local-lis/les=43/44 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,4,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[4.e( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,5,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[6.15( empty local-lis/les=43/44 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,5,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[4.1b( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,1,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[6.2( empty local-lis/les=43/44 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,4,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[6.8( empty local-lis/les=43/44 n=0 ec=41/33 lis/c=41/41 
les/c/f=42/42/0 sis=43) [3,2,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[6.7( empty local-lis/les=43/44 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,1,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[6.e( empty local-lis/les=43/44 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,1,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[7.1( v 36'39 (0'0,36'39] local-lis/les=43/44 n=2 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[7.3( v 36'39 (0'0,36'39] local-lis/les=43/44 n=2 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(2+1)=2}}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=43/44 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,4,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[5.f( empty local-lis/les=43/44 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,2,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[5.1b( empty local-lis/les=43/44 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,4,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 
active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,4,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[5.1c( empty local-lis/les=43/44 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,2,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=43/44 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,4,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=43/44 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,5,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=43/44 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,1,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,1,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[5.9( empty local-lis/les=43/44 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,5,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[6.19( empty 
local-lis/les=43/44 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,1,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[5.10( empty local-lis/les=43/44 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,5,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[4.5( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,5,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[5.1f( empty local-lis/les=43/44 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,5,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[5.7( empty local-lis/les=43/44 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,1,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[4.18( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,1,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[5.11( empty local-lis/les=43/44 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,2,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[4.d( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,2,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react 
AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[5.16( empty local-lis/les=43/44 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,1,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[32652]: osd.3 pg_epoch: 44 pg[6.1a( empty local-lis/les=43/44 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,2,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[7.d( v 36'39 (0'0,36'39] local-lis/les=43/44 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(2+1)=2}}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=43/44 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(2+1)=3}}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[7.9( v 36'39 (0'0,36'39] local-lis/les=43/44 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=43/44 n=0 ec=41/33 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[7.b( v 36'39 (0'0,36'39] local-lis/les=43/44 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded 
mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[5.18( empty local-lis/les=43/44 n=0 ec=39/26 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,2,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,2,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[7.7( v 36'39 (0'0,36'39] local-lis/les=43/44 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Feb 23 02:58:35 localhost ceph-osd[31709]: osd.0 pg_epoch: 44 pg[7.5( v 36'39 (0'0,36'39] local-lis/les=43/44 n=2 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(2+1)=2}}] state: react AllReplicasActivated Activating complete Feb 23 02:58:36 localhost ceph-osd[31709]: osd.0 pg_epoch: 45 pg[7.6( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.786873817s) [4,5,0] r=2 lpr=45 pi=[41,45)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1186.551025391s@ mbc={}] start_peering_interval up [0,5,4] -> [4,5,0], acting [0,5,4] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:36 localhost ceph-osd[31709]: osd.0 pg_epoch: 45 pg[7.6( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.786811829s) [4,5,0] r=2 lpr=45 pi=[41,45)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1186.551025391s@ mbc={}] state: 
transitioning to Stray Feb 23 02:58:36 localhost ceph-osd[31709]: osd.0 pg_epoch: 45 pg[7.2( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.785044670s) [4,5,0] r=2 lpr=45 pi=[41,45)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1186.549438477s@ mbc={}] start_peering_interval up [0,5,4] -> [4,5,0], acting [0,5,4] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:36 localhost ceph-osd[31709]: osd.0 pg_epoch: 45 pg[7.2( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.784903526s) [4,5,0] r=2 lpr=45 pi=[41,45)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1186.549438477s@ mbc={}] state: transitioning to Stray Feb 23 02:58:36 localhost ceph-osd[31709]: osd.0 pg_epoch: 45 pg[7.a( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.784640312s) [4,5,0] r=2 lpr=45 pi=[41,45)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1186.549682617s@ mbc={}] start_peering_interval up [0,5,4] -> [4,5,0], acting [0,5,4] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:36 localhost ceph-osd[31709]: osd.0 pg_epoch: 45 pg[7.a( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.784605980s) [4,5,0] r=2 lpr=45 pi=[41,45)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1186.549682617s@ mbc={}] state: transitioning to Stray Feb 23 02:58:36 localhost ceph-osd[31709]: osd.0 pg_epoch: 45 pg[7.e( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.783286095s) [4,5,0] r=2 lpr=45 pi=[41,45)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1186.549072266s@ mbc={}] start_peering_interval up [0,5,4] -> [4,5,0], acting [0,5,4] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 0 -> 
2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:36 localhost ceph-osd[31709]: osd.0 pg_epoch: 45 pg[7.e( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.783201218s) [4,5,0] r=2 lpr=45 pi=[41,45)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1186.549072266s@ mbc={}] state: transitioning to Stray Feb 23 02:58:36 localhost python3[56699]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:58:36 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts Feb 23 02:58:36 localhost python3[56742]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833516.215597-92678-82626224685538/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False checksum=b30b176c5dadfc33fbdfb5fdc77f69e2337fe39c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:58:37 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 4.1b scrub starts Feb 23 02:58:37 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 4.1b scrub ok Feb 23 02:58:42 localhost python3[56804]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:58:42 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 6.2 scrub starts Feb 23 02:58:42 localhost python3[56849]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833521.8797946-92994-107122214640140/source _original_basename=tmpz08hkjxj follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:58:43 localhost ceph-osd[31709]: osd.0 pg_epoch: 47 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=43/44 n=1 ec=41/34 lis/c=43/43 les/c/f=44/46/0 sis=47 pruub=15.891448021s) [1,3,2] r=-1 lpr=47 pi=[43,47)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1198.778198242s@ mbc={255={}}] start_peering_interval up [0,2,4] -> [1,3,2], acting [0,2,4] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:43 localhost ceph-osd[31709]: osd.0 pg_epoch: 47 pg[7.3( v 36'39 (0'0,36'39] local-lis/les=43/44 n=2 ec=41/34 lis/c=43/43 les/c/f=44/46/0 sis=47 pruub=15.881295204s) [1,3,2] r=-1 lpr=47 pi=[43,47)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1198.768554688s@ mbc={255={}}] start_peering_interval up [0,2,4] -> [1,3,2], acting [0,2,4] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:43 localhost ceph-osd[31709]: osd.0 pg_epoch: 47 pg[7.3( v 36'39 (0'0,36'39] local-lis/les=43/44 n=2 ec=41/34 lis/c=43/43 les/c/f=44/46/0 sis=47 pruub=15.881224632s) [1,3,2] r=-1 lpr=47 pi=[43,47)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1198.768554688s@ mbc={}] state: transitioning to Stray Feb 23 02:58:43 localhost ceph-osd[31709]: osd.0 pg_epoch: 47 pg[7.b( v 36'39 (0'0,36'39] local-lis/les=43/44 n=1 ec=41/34 lis/c=43/43 les/c/f=44/46/0 sis=47 pruub=15.891579628s) [1,3,2] r=-1 lpr=47 pi=[43,47)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1198.779052734s@ mbc={255={}}] start_peering_interval up 
[0,2,4] -> [1,3,2], acting [0,2,4] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:43 localhost ceph-osd[31709]: osd.0 pg_epoch: 47 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=43/44 n=1 ec=41/34 lis/c=43/43 les/c/f=44/46/0 sis=47 pruub=15.891349792s) [1,3,2] r=-1 lpr=47 pi=[43,47)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1198.778198242s@ mbc={}] state: transitioning to Stray Feb 23 02:58:43 localhost ceph-osd[31709]: osd.0 pg_epoch: 47 pg[7.b( v 36'39 (0'0,36'39] local-lis/les=43/44 n=1 ec=41/34 lis/c=43/43 les/c/f=44/46/0 sis=47 pruub=15.891434669s) [1,3,2] r=-1 lpr=47 pi=[43,47)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1198.779052734s@ mbc={}] state: transitioning to Stray Feb 23 02:58:43 localhost ceph-osd[31709]: osd.0 pg_epoch: 47 pg[7.7( v 36'39 (0'0,36'39] local-lis/les=43/44 n=1 ec=41/34 lis/c=43/43 les/c/f=44/46/0 sis=47 pruub=15.899033546s) [1,3,2] r=-1 lpr=47 pi=[43,47)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1198.786987305s@ mbc={255={}}] start_peering_interval up [0,2,4] -> [1,3,2], acting [0,2,4] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:43 localhost ceph-osd[31709]: osd.0 pg_epoch: 47 pg[7.7( v 36'39 (0'0,36'39] local-lis/les=43/44 n=1 ec=41/34 lis/c=43/43 les/c/f=44/46/0 sis=47 pruub=15.898790359s) [1,3,2] r=-1 lpr=47 pi=[43,47)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1198.786987305s@ mbc={}] state: transitioning to Stray Feb 23 02:58:43 localhost python3[56911]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:58:44 localhost python3[56954]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833523.4737499-93081-206210233592716/source _original_basename=tmpzam5xp9z follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:58:44 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 5.11 deep-scrub starts Feb 23 02:58:44 localhost ceph-osd[32652]: osd.3 pg_epoch: 47 pg[7.3( empty local-lis/les=0/0 n=0 ec=41/34 lis/c=43/43 les/c/f=44/46/0 sis=47) [1,3,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:44 localhost ceph-osd[32652]: osd.3 pg_epoch: 47 pg[7.7( empty local-lis/les=0/0 n=0 ec=41/34 lis/c=43/43 les/c/f=44/46/0 sis=47) [1,3,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:44 localhost ceph-osd[32652]: osd.3 pg_epoch: 47 pg[7.f( empty local-lis/les=0/0 n=0 ec=41/34 lis/c=43/43 les/c/f=44/46/0 sis=47) [1,3,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:44 localhost ceph-osd[32652]: osd.3 pg_epoch: 47 pg[7.b( empty local-lis/les=0/0 n=0 ec=41/34 lis/c=43/43 les/c/f=44/46/0 sis=47) [1,3,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:44 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 5.11 deep-scrub ok Feb 23 02:58:44 localhost python3[56984]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None Feb 23 02:58:45 localhost python3[57002]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False 
get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 02:58:45 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 4.e scrub starts Feb 23 02:58:45 localhost ceph-osd[31709]: osd.0 pg_epoch: 49 pg[7.c( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=49 pruub=9.530529976s) [1,3,2] r=-1 lpr=49 pi=[41,49)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1194.549682617s@ mbc={}] start_peering_interval up [0,5,4] -> [1,3,2], acting [0,5,4] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:45 localhost ceph-osd[31709]: osd.0 pg_epoch: 49 pg[7.c( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=49 pruub=9.530458450s) [1,3,2] r=-1 lpr=49 pi=[41,49)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1194.549682617s@ mbc={}] state: transitioning to Stray Feb 23 02:58:45 localhost ceph-osd[31709]: osd.0 pg_epoch: 49 pg[7.4( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=49 pruub=9.531682968s) [1,3,2] r=-1 lpr=49 pi=[41,49)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1194.551391602s@ mbc={}] start_peering_interval up [0,5,4] -> [1,3,2], acting [0,5,4] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:45 localhost ceph-osd[31709]: osd.0 pg_epoch: 49 pg[7.4( v 36'39 (0'0,36'39] local-lis/les=41/42 n=2 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=49 pruub=9.531559944s) [1,3,2] r=-1 lpr=49 pi=[41,49)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1194.551391602s@ mbc={}] state: transitioning to Stray Feb 23 02:58:46 localhost ceph-osd[32652]: osd.3 pg_epoch: 49 pg[7.4( empty local-lis/les=0/0 n=0 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=49) [1,3,2] r=1 lpr=49 pi=[41,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray 
Feb 23 02:58:46 localhost ceph-osd[32652]: osd.3 pg_epoch: 49 pg[7.c( empty local-lis/les=0/0 n=0 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=49) [1,3,2] r=1 lpr=49 pi=[41,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:58:46 localhost ansible-async_wrapper.py[57174]: Invoked with 722195998789 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833526.3982675-93443-245939309192711/AnsiballZ_command.py _ Feb 23 02:58:46 localhost ansible-async_wrapper.py[57177]: Starting module and watcher Feb 23 02:58:46 localhost ansible-async_wrapper.py[57177]: Start watching 57178 (3600) Feb 23 02:58:46 localhost ansible-async_wrapper.py[57178]: Start module (57178) Feb 23 02:58:46 localhost ansible-async_wrapper.py[57174]: Return async_wrapper task started. Feb 23 02:58:47 localhost python3[57198]: ansible-ansible.legacy.async_status Invoked with jid=722195998789.57174 mode=status _async_dir=/tmp/.ansible_async Feb 23 02:58:48 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 6.7 deep-scrub starts Feb 23 02:58:48 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 6.7 deep-scrub ok Feb 23 02:58:49 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 7.8 scrub starts Feb 23 02:58:50 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 4.d scrub starts Feb 23 02:58:50 localhost puppet-user[57196]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 23 02:58:50 localhost puppet-user[57196]: (file: /etc/puppet/hiera.yaml) Feb 23 02:58:50 localhost puppet-user[57196]: Warning: Undefined variable '::deploy_config_name'; Feb 23 02:58:50 localhost puppet-user[57196]: (file & line not available) Feb 23 02:58:50 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 4.d scrub ok Feb 23 02:58:50 localhost puppet-user[57196]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 23 02:58:50 localhost puppet-user[57196]: (file & line not available) Feb 23 02:58:50 localhost puppet-user[57196]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Feb 23 02:58:50 localhost puppet-user[57196]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Feb 23 02:58:50 localhost puppet-user[57196]: Notice: Compiled catalog for np0005626465.localdomain in environment production in 0.12 seconds Feb 23 02:58:50 localhost puppet-user[57196]: Notice: Applied catalog in 0.04 seconds Feb 23 02:58:50 localhost puppet-user[57196]: Application: Feb 23 02:58:50 localhost puppet-user[57196]: Initial environment: production Feb 23 02:58:50 localhost puppet-user[57196]: Converged environment: production Feb 23 02:58:50 localhost puppet-user[57196]: Run mode: user Feb 23 02:58:50 localhost puppet-user[57196]: Changes: Feb 23 02:58:50 localhost puppet-user[57196]: Events: Feb 23 02:58:50 localhost puppet-user[57196]: Resources: Feb 23 02:58:50 localhost puppet-user[57196]: Total: 10 Feb 23 02:58:50 localhost puppet-user[57196]: Time: Feb 23 02:58:50 localhost puppet-user[57196]: Schedule: 0.00 Feb 23 02:58:50 localhost puppet-user[57196]: File: 0.00 Feb 23 02:58:50 localhost puppet-user[57196]: Exec: 0.01 Feb 23 02:58:50 localhost puppet-user[57196]: Augeas: 0.01 Feb 23 02:58:50 localhost puppet-user[57196]: Transaction evaluation: 0.03 Feb 23 02:58:50 localhost puppet-user[57196]: Catalog application: 0.04 Feb 23 02:58:50 localhost puppet-user[57196]: Config retrieval: 0.15 Feb 23 02:58:50 localhost puppet-user[57196]: Last run: 1771833530 Feb 23 02:58:50 localhost puppet-user[57196]: Filebucket: 0.00 Feb 23 02:58:50 localhost puppet-user[57196]: Total: 0.04 Feb 23 02:58:50 localhost puppet-user[57196]: Version: Feb 23 
02:58:50 localhost puppet-user[57196]: Config: 1771833530 Feb 23 02:58:50 localhost puppet-user[57196]: Puppet: 7.10.0 Feb 23 02:58:50 localhost ansible-async_wrapper.py[57178]: Module complete (57178) Feb 23 02:58:51 localhost ansible-async_wrapper.py[57177]: Done in kid B. Feb 23 02:58:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 02:58:53 localhost podman[57309]: 2026-02-23 07:58:53.006789741 +0000 UTC m=+0.079296675 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=) Feb 23 02:58:53 localhost podman[57309]: 2026-02-23 07:58:53.196819489 +0000 UTC m=+0.269326363 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 02:58:53 localhost systemd[1]: 
779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 02:58:53 localhost ceph-osd[31709]: osd.0 pg_epoch: 51 pg[7.d( v 36'39 (0'0,36'39] local-lis/les=43/44 n=1 ec=41/34 lis/c=43/43 les/c/f=44/46/0 sis=51 pruub=13.619361877s) [2,4,0] r=2 lpr=51 pi=[43,51)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1206.778320312s@ mbc={255={}}] start_peering_interval up [0,2,4] -> [2,4,0], acting [0,2,4] -> [2,4,0], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:53 localhost ceph-osd[31709]: osd.0 pg_epoch: 51 pg[7.d( v 36'39 (0'0,36'39] local-lis/les=43/44 n=1 ec=41/34 lis/c=43/43 les/c/f=44/46/0 sis=51 pruub=13.619229317s) [2,4,0] r=2 lpr=51 pi=[43,51)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1206.778320312s@ mbc={}] state: transitioning to Stray Feb 23 02:58:53 localhost ceph-osd[31709]: osd.0 pg_epoch: 51 pg[7.5( v 36'39 (0'0,36'39] local-lis/les=43/44 n=2 ec=41/34 lis/c=43/43 les/c/f=44/46/0 sis=51 pruub=13.628190994s) [2,4,0] r=2 lpr=51 pi=[43,51)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1206.787353516s@ mbc={255={}}] start_peering_interval up [0,2,4] -> [2,4,0], acting [0,2,4] -> [2,4,0], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:53 localhost ceph-osd[31709]: osd.0 pg_epoch: 51 pg[7.5( v 36'39 (0'0,36'39] local-lis/les=43/44 n=2 ec=41/34 lis/c=43/43 les/c/f=44/46/0 sis=51 pruub=13.628157616s) [2,4,0] r=2 lpr=51 pi=[43,51)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1206.787353516s@ mbc={}] state: transitioning to Stray Feb 23 02:58:54 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 4.5 deep-scrub starts Feb 23 02:58:54 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 4.5 deep-scrub ok Feb 23 02:58:55 localhost ceph-osd[31709]: osd.0 pg_epoch: 53 pg[7.6( v 36'39 (0'0,36'39] local-lis/les=45/46 n=2 
ec=41/34 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.600506783s) [1,0,5] r=1 lpr=53 pi=[45,53)/1 luod=0'0 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1208.828002930s@ mbc={}] start_peering_interval up [4,5,0] -> [1,0,5], acting [4,5,0] -> [1,0,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:55 localhost ceph-osd[31709]: osd.0 pg_epoch: 53 pg[7.6( v 36'39 (0'0,36'39] local-lis/les=45/46 n=2 ec=41/34 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.600414276s) [1,0,5] r=1 lpr=53 pi=[45,53)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1208.828002930s@ mbc={}] state: transitioning to Stray Feb 23 02:58:55 localhost ceph-osd[31709]: osd.0 pg_epoch: 53 pg[7.e( v 36'39 (0'0,36'39] local-lis/les=45/46 n=1 ec=41/34 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.595285416s) [1,0,5] r=1 lpr=53 pi=[45,53)/1 luod=0'0 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1208.823852539s@ mbc={}] start_peering_interval up [4,5,0] -> [1,0,5], acting [4,5,0] -> [1,0,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:58:55 localhost ceph-osd[31709]: osd.0 pg_epoch: 53 pg[7.e( v 36'39 (0'0,36'39] local-lis/les=45/46 n=1 ec=41/34 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.595223427s) [1,0,5] r=1 lpr=53 pi=[45,53)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1208.823852539s@ mbc={}] state: transitioning to Stray Feb 23 02:58:57 localhost python3[57482]: ansible-ansible.legacy.async_status Invoked with jid=722195998789.57174 mode=status _async_dir=/tmp/.ansible_async Feb 23 02:58:58 localhost python3[57498]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 23 02:58:58 localhost python3[57514]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 02:58:59 localhost python3[57564]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:58:59 localhost python3[57582]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp_f_t_gh6 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 23 02:58:59 localhost python3[57612]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:59:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 02:59:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4481 writes, 20K keys, 4481 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4481 writes, 383 syncs, 11.70 writes per sync, written: 0.02 
GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1221 writes, 4345 keys, 1221 commit groups, 1.0 writes per commit group, ingest: 1.79 MB, 0.00 MB/s#012Interval WAL: 1221 writes, 237 syncs, 5.15 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.010 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.010 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.010 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55907f80e2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 
0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55907f80e2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 m Feb 23 02:59:00 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 6.e scrub starts Feb 23 02:59:00 localhost python3[57716]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Feb 23 02:59:01 localhost python3[57735]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:59:02 localhost python3[57767]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 02:59:03 localhost python3[57817]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:59:03 localhost python3[57835]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:59:03 localhost ceph-osd[32652]: osd.3 pg_epoch: 55 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=47/48 n=1 ec=41/34 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.903107643s) [3,5,1] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=36'39 mlcod 0'0 active pruub 1211.869140625s@ mbc={}] start_peering_interval up [1,3,2] -> [3,5,1], acting [1,3,2] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:59:03 localhost ceph-osd[32652]: osd.3 pg_epoch: 55 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=47/48 n=1 ec=41/34 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.903107643s) [3,5,1] r=0 lpr=55 pi=[47,55)/1 crt=36'39 mlcod 0'0 unknown pruub 1211.869140625s@ mbc={}] state: transitioning to Primary Feb 23 02:59:03 localhost ceph-osd[32652]: osd.3 pg_epoch: 55 pg[7.7( v 36'39 (0'0,36'39] 
local-lis/les=47/48 n=1 ec=41/34 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.902780533s) [3,5,1] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=36'39 mlcod 0'0 active pruub 1211.869140625s@ mbc={}] start_peering_interval up [1,3,2] -> [3,5,1], acting [1,3,2] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 23 02:59:03 localhost ceph-osd[32652]: osd.3 pg_epoch: 55 pg[7.7( v 36'39 (0'0,36'39] local-lis/les=47/48 n=1 ec=41/34 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.902780533s) [3,5,1] r=0 lpr=55 pi=[47,55)/1 crt=36'39 mlcod 0'0 unknown pruub 1211.869140625s@ mbc={}] state: transitioning to Primary
Feb 23 02:59:03 localhost python3[57897]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:59:04 localhost python3[57915]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:59:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 23 02:59:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 4705 writes, 21K keys, 4705 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4704 writes, 450 syncs, 10.45 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1313 writes, 4726 keys, 1313 commit groups, 1.0 writes per commit group, ingest: 1.93 MB, 0.00 MB/s
Interval WAL: 1312 writes, 250 syncs, 5.25 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0
 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562d239862d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-0] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562d239862d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-1] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 m
Feb 23 02:59:04 localhost python3[57977]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 02:59:04 localhost sshd[57980]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 02:59:04 localhost ceph-osd[32652]: osd.3 pg_epoch: 56 pg[7.7( v 36'39 (0'0,36'39] local-lis/les=55/56 n=1 ec=41/34 lis/c=47/47 les/c/f=48/48/0 sis=55) [3,5,1] r=0 lpr=55 pi=[47,55)/1 crt=36'39 mlcod 0'0 active+degraded mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete
Feb 23 02:59:04 localhost ceph-osd[32652]: osd.3 pg_epoch: 56 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=55/56 n=1 ec=41/34 lis/c=47/47 les/c/f=48/48/0 sis=55) [3,5,1] r=0 lpr=55 pi=[47,55)/1 crt=36'39 mlcod 0'0 active+degraded mbc={255={(2+1)=3}}] state: react AllReplicasActivated Activating complete
Feb 23 02:59:04 localhost python3[57997]: 
ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:59:05 localhost python3[58059]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:59:05 localhost ceph-osd[31709]: osd.0 pg_epoch: 57 pg[7.8( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=57 pruub=13.224900246s) [1,0,5] r=1 lpr=57 pi=[41,57)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1218.550048828s@ mbc={}] start_peering_interval up [0,5,4] -> [1,0,5], acting [0,5,4] -> [1,0,5], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:59:05 localhost ceph-osd[31709]: osd.0 pg_epoch: 57 pg[7.8( v 36'39 (0'0,36'39] local-lis/les=41/42 n=1 ec=41/34 lis/c=41/41 les/c/f=42/42/0 sis=57 pruub=13.224705696s) [1,0,5] r=1 lpr=57 pi=[41,57)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1218.550048828s@ mbc={}] state: transitioning to Stray Feb 23 02:59:05 localhost python3[58077]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:59:06 localhost python3[58107]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 02:59:06 localhost systemd[1]: Reloading. Feb 23 02:59:06 localhost systemd-sysv-generator[58132]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:59:06 localhost systemd-rc-local-generator[58129]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:59:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 02:59:06 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 7.1 scrub starts Feb 23 02:59:07 localhost python3[58193]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:59:07 localhost python3[58211]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:59:07 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 6.8 scrub starts Feb 23 02:59:07 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 6.8 scrub ok Feb 23 02:59:07 localhost python3[58273]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 02:59:08 localhost python3[58291]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:59:08 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 6.15 deep-scrub starts Feb 23 02:59:08 localhost 
ceph-osd[32652]: log_channel(cluster) log [DBG] : 6.15 deep-scrub ok Feb 23 02:59:08 localhost python3[58321]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 02:59:08 localhost systemd[1]: Reloading. Feb 23 02:59:08 localhost systemd-rc-local-generator[58348]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 02:59:08 localhost systemd-sysv-generator[58353]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 02:59:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 02:59:09 localhost systemd[1]: Starting Create netns directory... Feb 23 02:59:09 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 23 02:59:09 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 23 02:59:09 localhost systemd[1]: Finished Create netns directory. 
Feb 23 02:59:09 localhost python3[58381]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Feb 23 02:59:10 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 6.5 scrub starts Feb 23 02:59:10 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 6.5 scrub ok Feb 23 02:59:10 localhost python3[58439]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Feb 23 02:59:11 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 6.a scrub starts Feb 23 02:59:11 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 6.a scrub ok Feb 23 02:59:11 localhost podman[58517]: 2026-02-23 07:59:11.265188036 +0000 UTC m=+0.071056130 container create a16906844e791c4dbf0894d185d2ca1e9522d16c49c14ce355ec56537a0bd34a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, container_name=nova_compute_init_log, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, version=17.1.13, 
com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z) Feb 23 02:59:11 localhost podman[58511]: 2026-02-23 07:59:11.282598518 +0000 UTC m=+0.100712207 container create d2df035f922765f48cfb47a3749ce6ba2095902be5e1eebc2a1834739d460250 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_virtqemud_init_logs, io.openshift.expose-services=, build-date=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step2, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 02:59:11 localhost systemd[1]: Started libpod-conmon-a16906844e791c4dbf0894d185d2ca1e9522d16c49c14ce355ec56537a0bd34a.scope. Feb 23 02:59:11 localhost systemd[1]: Started libpod-conmon-d2df035f922765f48cfb47a3749ce6ba2095902be5e1eebc2a1834739d460250.scope. Feb 23 02:59:11 localhost podman[58517]: 2026-02-23 07:59:11.224174344 +0000 UTC m=+0.030042438 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 23 02:59:11 localhost podman[58511]: 2026-02-23 07:59:11.226309069 +0000 UTC m=+0.044422818 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 02:59:11 localhost systemd[1]: Started libcrun container. Feb 23 02:59:11 localhost systemd[1]: Started libcrun container. 
Feb 23 02:59:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b1624cdf69c40b3be5ac9103e8da1e64c78a26cb80bd614a05a7389840fddb5/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Feb 23 02:59:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9b3ccf102020d83d96e7293cd5bee2fb82fa9fff75afdeaef37d9e3f6dc0e2/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Feb 23 02:59:11 localhost podman[58511]: 2026-02-23 07:59:11.360818677 +0000 UTC m=+0.178932376 container init d2df035f922765f48cfb47a3749ce6ba2095902be5e1eebc2a1834739d460250 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, container_name=nova_virtqemud_init_logs, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step2, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-libvirt, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, vcs-type=git) Feb 23 02:59:11 localhost podman[58511]: 2026-02-23 07:59:11.370811582 +0000 UTC m=+0.188925281 container start d2df035f922765f48cfb47a3749ce6ba2095902be5e1eebc2a1834739d460250 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, version=17.1.13, architecture=x86_64, container_name=nova_virtqemud_init_logs, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.5, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:31:49Z, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 23 02:59:11 localhost python3[58439]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1771832380 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm Feb 23 02:59:11 localhost systemd[1]: libpod-d2df035f922765f48cfb47a3749ce6ba2095902be5e1eebc2a1834739d460250.scope: Deactivated successfully. 
Feb 23 02:59:11 localhost podman[58517]: 2026-02-23 07:59:11.413271649 +0000 UTC m=+0.219139733 container init a16906844e791c4dbf0894d185d2ca1e9522d16c49c14ce355ec56537a0bd34a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step2, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, container_name=nova_compute_init_log, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 02:59:11 localhost podman[58517]: 2026-02-23 07:59:11.42477172 +0000 UTC m=+0.230639784 
container start a16906844e791c4dbf0894d185d2ca1e9522d16c49c14ce355ec56537a0bd34a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, container_name=nova_compute_init_log, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step2, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Feb 23 02:59:11 localhost python3[58439]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid 
--detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1771832380 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova Feb 23 02:59:11 localhost systemd[1]: libpod-a16906844e791c4dbf0894d185d2ca1e9522d16c49c14ce355ec56537a0bd34a.scope: Deactivated successfully. Feb 23 02:59:11 localhost podman[58554]: 2026-02-23 07:59:11.473048664 +0000 UTC m=+0.066812571 container died d2df035f922765f48cfb47a3749ce6ba2095902be5e1eebc2a1834739d460250 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, container_name=nova_virtqemud_init_logs, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:31:49Z, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510, version=17.1.13, vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2) Feb 23 02:59:11 localhost podman[58571]: 2026-02-23 07:59:11.495652614 +0000 UTC m=+0.051135312 container died a16906844e791c4dbf0894d185d2ca1e9522d16c49c14ce355ec56537a0bd34a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': 
['/var/log/containers/nova:/var/log/nova:z']}, container_name=nova_compute_init_log, batch=17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step2) Feb 23 02:59:11 localhost podman[58572]: 2026-02-23 07:59:11.554736759 +0000 UTC m=+0.110853077 container cleanup a16906844e791c4dbf0894d185d2ca1e9522d16c49c14ce355ec56537a0bd34a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step2, container_name=nova_compute_init_log, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 
17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, release=1766032510, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 23 02:59:11 localhost systemd[1]: libpod-conmon-a16906844e791c4dbf0894d185d2ca1e9522d16c49c14ce355ec56537a0bd34a.scope: Deactivated successfully. Feb 23 02:59:11 localhost podman[58554]: 2026-02-23 07:59:11.602925991 +0000 UTC m=+0.196689878 container cleanup d2df035f922765f48cfb47a3749ce6ba2095902be5e1eebc2a1834739d460250 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, build-date=2026-01-12T23:31:49Z, release=1766032510, vcs-type=git, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud_init_logs, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible) Feb 23 02:59:11 localhost systemd[1]: libpod-conmon-d2df035f922765f48cfb47a3749ce6ba2095902be5e1eebc2a1834739d460250.scope: Deactivated successfully. 
Feb 23 02:59:11 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 5.16 scrub starts Feb 23 02:59:11 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 5.16 scrub ok Feb 23 02:59:11 localhost podman[58695]: 2026-02-23 07:59:11.981834952 +0000 UTC m=+0.081704146 container create 12d45af35ecc4b445904bb9b45c985d7fa1102531ad607e3fa8690783d46132f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, 
distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step2, container_name=create_haproxy_wrapper, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.openshift.expose-services=, release=1766032510, version=17.1.13, managed_by=tripleo_ansible) Feb 23 02:59:12 localhost systemd[1]: Started libpod-conmon-12d45af35ecc4b445904bb9b45c985d7fa1102531ad607e3fa8690783d46132f.scope. 
Feb 23 02:59:12 localhost podman[58706]: 2026-02-23 07:59:12.020912636 +0000 UTC m=+0.090601058 container create 19fffbab9dbf3da87e5f88142cd4a502f7f249b4db21a0298d6db856300eee8e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step2, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, name=rhosp-rhel9/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.buildah.version=1.41.5, container_name=create_virtlogd_wrapper, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 02:59:12 localhost systemd[1]: Started libcrun container. Feb 23 02:59:12 localhost podman[58695]: 2026-02-23 07:59:11.936697574 +0000 UTC m=+0.036566798 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Feb 23 02:59:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8245c0303b83c6a08a9deade7a7562e9b402739ffa793ad370ef0b005a93182b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 02:59:12 localhost podman[58695]: 2026-02-23 07:59:12.050287082 +0000 UTC m=+0.150156276 container init 12d45af35ecc4b445904bb9b45c985d7fa1102531ad607e3fa8690783d46132f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step2, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=create_haproxy_wrapper, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 02:59:12 localhost podman[58695]: 2026-02-23 07:59:12.058245146 +0000 UTC 
m=+0.158114370 container start 12d45af35ecc4b445904bb9b45c985d7fa1102531ad607e3fa8690783d46132f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step2, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=create_haproxy_wrapper, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 23 02:59:12 localhost podman[58695]: 2026-02-23 07:59:12.058577586 +0000 UTC m=+0.158446800 container attach 12d45af35ecc4b445904bb9b45c985d7fa1102531ad607e3fa8690783d46132f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step2, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=create_haproxy_wrapper, version=17.1.13, build-date=2026-01-12T22:56:19Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 23 02:59:12 localhost systemd[1]: Started libpod-conmon-19fffbab9dbf3da87e5f88142cd4a502f7f249b4db21a0298d6db856300eee8e.scope. Feb 23 02:59:12 localhost systemd[1]: Started libcrun container. 
Feb 23 02:59:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1242a6f8cc9ca3bf76c1418c726d16e7e8dd37d29e45a6b8f00ae61143c1c4/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff) Feb 23 02:59:12 localhost podman[58706]: 2026-02-23 07:59:11.975175869 +0000 UTC m=+0.044864291 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 02:59:12 localhost podman[58706]: 2026-02-23 07:59:12.083738764 +0000 UTC m=+0.153427196 container init 19fffbab9dbf3da87e5f88142cd4a502f7f249b4db21a0298d6db856300eee8e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, config_id=tripleo_step2, io.buildah.version=1.41.5, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, build-date=2026-01-12T23:31:49Z, distribution-scope=public, version=17.1.13, architecture=x86_64, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt) Feb 23 02:59:12 localhost podman[58706]: 2026-02-23 07:59:12.093854344 +0000 UTC m=+0.163542796 container start 19fffbab9dbf3da87e5f88142cd4a502f7f249b4db21a0298d6db856300eee8e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, build-date=2026-01-12T23:31:49Z, container_name=create_virtlogd_wrapper, org.opencontainers.image.created=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, name=rhosp-rhel9/openstack-nova-libvirt, managed_by=tripleo_ansible, io.buildah.version=1.41.5, config_id=tripleo_step2, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 02:59:12 localhost podman[58706]: 2026-02-23 07:59:12.09439229 +0000 UTC m=+0.164080722 container attach 
19fffbab9dbf3da87e5f88142cd4a502f7f249b4db21a0298d6db856300eee8e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, distribution-scope=public, release=1766032510, container_name=create_virtlogd_wrapper, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 
name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, build-date=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Feb 23 02:59:12 localhost systemd[1]: var-lib-containers-storage-overlay-2b1624cdf69c40b3be5ac9103e8da1e64c78a26cb80bd614a05a7389840fddb5-merged.mount: Deactivated successfully. Feb 23 02:59:12 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d2df035f922765f48cfb47a3749ce6ba2095902be5e1eebc2a1834739d460250-userdata-shm.mount: Deactivated successfully. Feb 23 02:59:12 localhost systemd[1]: var-lib-containers-storage-overlay-af9b3ccf102020d83d96e7293cd5bee2fb82fa9fff75afdeaef37d9e3f6dc0e2-merged.mount: Deactivated successfully. Feb 23 02:59:12 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a16906844e791c4dbf0894d185d2ca1e9522d16c49c14ce355ec56537a0bd34a-userdata-shm.mount: Deactivated successfully. 
Feb 23 02:59:13 localhost ovs-vsctl[58827]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Feb 23 02:59:13 localhost ceph-osd[31709]: osd.0 pg_epoch: 59 pg[7.9( v 36'39 (0'0,36'39] local-lis/les=43/44 n=1 ec=41/34 lis/c=43/43 les/c/f=44/44/0 sis=59 pruub=9.286026955s) [4,2,3] r=-1 lpr=59 pi=[43,59)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1222.778930664s@ mbc={}] start_peering_interval up [0,2,4] -> [4,2,3], acting [0,2,4] -> [4,2,3], acting_primary 0 -> 4, up_primary 0 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:59:13 localhost ceph-osd[31709]: osd.0 pg_epoch: 59 pg[7.9( v 36'39 (0'0,36'39] local-lis/les=43/44 n=1 ec=41/34 lis/c=43/43 les/c/f=44/44/0 sis=59 pruub=9.284759521s) [4,2,3] r=-1 lpr=59 pi=[43,59)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1222.778930664s@ mbc={}] state: transitioning to Stray Feb 23 02:59:14 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 6.d deep-scrub starts Feb 23 02:59:14 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 6.d deep-scrub ok Feb 23 02:59:14 localhost systemd[1]: libpod-19fffbab9dbf3da87e5f88142cd4a502f7f249b4db21a0298d6db856300eee8e.scope: Deactivated successfully. Feb 23 02:59:14 localhost systemd[1]: libpod-19fffbab9dbf3da87e5f88142cd4a502f7f249b4db21a0298d6db856300eee8e.scope: Consumed 2.147s CPU time. 
Feb 23 02:59:14 localhost podman[58706]: 2026-02-23 07:59:14.250096084 +0000 UTC m=+2.319784516 container died 19fffbab9dbf3da87e5f88142cd4a502f7f249b4db21a0298d6db856300eee8e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, release=1766032510, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, version=17.1.13, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=create_virtlogd_wrapper, architecture=x86_64, config_id=tripleo_step2, managed_by=tripleo_ansible, build-date=2026-01-12T23:31:49Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 23 02:59:14 localhost systemd[1]: tmp-crun.NN8w3p.mount: Deactivated successfully. Feb 23 02:59:14 localhost podman[58949]: 2026-02-23 07:59:14.316271694 +0000 UTC m=+0.057869018 container cleanup 19fffbab9dbf3da87e5f88142cd4a502f7f249b4db21a0298d6db856300eee8e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, org.opencontainers.image.created=2026-01-12T23:31:49Z, build-date=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20260112.1, container_name=create_virtlogd_wrapper, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, 
url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=) Feb 23 02:59:14 localhost systemd[1]: libpod-conmon-19fffbab9dbf3da87e5f88142cd4a502f7f249b4db21a0298d6db856300eee8e.scope: Deactivated successfully. 
Feb 23 02:59:14 localhost python3[58439]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1771832380 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro 
--volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper Feb 23 02:59:14 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 5.15 scrub starts Feb 23 02:59:14 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 5.15 scrub ok Feb 23 02:59:14 localhost systemd[1]: libpod-12d45af35ecc4b445904bb9b45c985d7fa1102531ad607e3fa8690783d46132f.scope: Deactivated successfully. Feb 23 02:59:14 localhost systemd[1]: libpod-12d45af35ecc4b445904bb9b45c985d7fa1102531ad607e3fa8690783d46132f.scope: Consumed 2.041s CPU time. 
Feb 23 02:59:14 localhost podman[58989]: 2026-02-23 07:59:14.950572075 +0000 UTC m=+0.050457552 container died 12d45af35ecc4b445904bb9b45c985d7fa1102531ad607e3fa8690783d46132f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=create_haproxy_wrapper, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step2) Feb 23 02:59:14 localhost podman[58989]: 2026-02-23 07:59:14.980406876 +0000 UTC m=+0.080292313 container cleanup 12d45af35ecc4b445904bb9b45c985d7fa1102531ad607e3fa8690783d46132f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step2, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, version=17.1.13, container_name=create_haproxy_wrapper, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': ['/container_puppet_apply.sh', 
'4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 02:59:14 localhost systemd[1]: libpod-conmon-12d45af35ecc4b445904bb9b45c985d7fa1102531ad607e3fa8690783d46132f.scope: Deactivated successfully. 
Feb 23 02:59:14 localhost python3[58439]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume 
/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers
Feb 23 02:59:15 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 4.c scrub starts
Feb 23 02:59:15 localhost ceph-osd[32652]: osd.3 pg_epoch: 59 pg[7.9( empty local-lis/les=0/0 n=0 ec=41/34 lis/c=43/43 les/c/f=44/44/0 sis=59) [4,2,3] r=2 lpr=59 pi=[43,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 23 02:59:15 localhost systemd[1]: var-lib-containers-storage-overlay-da1242a6f8cc9ca3bf76c1418c726d16e7e8dd37d29e45a6b8f00ae61143c1c4-merged.mount: Deactivated successfully.
Feb 23 02:59:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-19fffbab9dbf3da87e5f88142cd4a502f7f249b4db21a0298d6db856300eee8e-userdata-shm.mount: Deactivated successfully.
Feb 23 02:59:15 localhost systemd[1]: var-lib-containers-storage-overlay-8245c0303b83c6a08a9deade7a7562e9b402739ffa793ad370ef0b005a93182b-merged.mount: Deactivated successfully.
Feb 23 02:59:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-12d45af35ecc4b445904bb9b45c985d7fa1102531ad607e3fa8690783d46132f-userdata-shm.mount: Deactivated successfully.
Feb 23 02:59:15 localhost python3[59041]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 02:59:16 localhost ceph-osd[31709]: osd.0 pg_epoch: 61 pg[7.a( v 36'39 (0'0,36'39] local-lis/les=45/46 n=1 ec=41/34 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=9.284921646s) [2,4,3] r=-1 lpr=61 pi=[45,61)/1 luod=0'0 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1224.828735352s@ mbc={}] start_peering_interval up [4,5,0] -> [2,4,3], acting [4,5,0] -> [2,4,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:59:16 localhost ceph-osd[31709]: osd.0 pg_epoch: 61 pg[7.a( v 36'39 (0'0,36'39] local-lis/les=45/46 n=1 ec=41/34 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=9.284873009s) [2,4,3] r=-1 lpr=61 pi=[45,61)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1224.828735352s@ mbc={}] state: transitioning to Stray Feb 23 02:59:17 localhost python3[59162]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005626465 step=2 update_config_hash_only=False Feb 23 02:59:17 localhost ceph-osd[32652]: osd.3 pg_epoch: 61 pg[7.a( empty local-lis/les=0/0 n=0 ec=41/34 lis/c=45/45 les/c/f=46/46/0 sis=61) [2,4,3] r=2 lpr=61 pi=[45,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:59:17 localhost python3[59178]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 02:59:17 localhost python3[59194]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True
Feb 23 02:59:18 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 4.a scrub starts
Feb 23 02:59:18 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 4.a scrub ok
Feb 23 02:59:20 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 6.3 deep-scrub starts
Feb 23 02:59:20 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 6.3 deep-scrub ok
Feb 23 02:59:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.
Feb 23 02:59:24 localhost systemd[1]: tmp-crun.4aRKBX.mount: Deactivated successfully.
Feb 23 02:59:24 localhost podman[59195]: 2026-02-23 07:59:24.021546807 +0000 UTC m=+0.095372124 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, container_name=metrics_qdr, vcs-type=git, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 02:59:24 localhost ceph-osd[31709]: osd.0 64 crush map has features 432629239337189376, adjusting msgr requires for clients Feb 23 02:59:24 localhost ceph-osd[31709]: osd.0 64 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons Feb 23 02:59:24 localhost ceph-osd[31709]: osd.0 64 crush map has features 3314933000854323200, adjusting msgr requires for osds Feb 23 02:59:24 localhost ceph-osd[31709]: osd.0 pg_epoch: 64 pg[4.1( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=43/43 les/c/f=44/44/0 sis=64 pruub=15.050092697s) [2,0,4] r=1 lpr=64 pi=[43,64)/1 crt=0'0 mlcod 0'0 active pruub 1238.774047852s@ mbc={}] start_peering_interval up [2,0,1] -> [2,0,4], acting [2,0,1] -> [2,0,4], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:59:24 localhost ceph-osd[31709]: osd.0 pg_epoch: 64 pg[4.1( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=43/43 les/c/f=44/44/0 sis=64 pruub=15.049968719s) [2,0,4] r=1 lpr=64 pi=[43,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1238.774047852s@ mbc={}] state: transitioning to Stray Feb 23 02:59:24 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 6.19 scrub starts Feb 23 02:59:24 localhost ceph-osd[31709]: osd.0 pg_epoch: 64 pg[7.c( empty 
local-lis/les=0/0 n=0 ec=41/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [0,1,2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:59:24 localhost ceph-osd[32652]: osd.3 64 crush map has features 432629239337189376, adjusting msgr requires for clients Feb 23 02:59:24 localhost ceph-osd[32652]: osd.3 64 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons Feb 23 02:59:24 localhost ceph-osd[32652]: osd.3 64 crush map has features 3314933000854323200, adjusting msgr requires for osds Feb 23 02:59:24 localhost ceph-osd[32652]: osd.3 pg_epoch: 64 pg[7.c( v 36'39 (0'0,36'39] local-lis/les=49/50 n=1 ec=41/34 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=10.365221977s) [0,1,2] r=-1 lpr=64 pi=[49,64)/1 luod=0'0 crt=36'39 mlcod 0'0 active pruub 1229.978393555s@ mbc={}] start_peering_interval up [1,3,2] -> [0,1,2], acting [1,3,2] -> [0,1,2], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:59:24 localhost ceph-osd[32652]: osd.3 pg_epoch: 64 pg[7.c( v 36'39 (0'0,36'39] local-lis/les=49/50 n=1 ec=41/34 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=10.365137100s) [0,1,2] r=-1 lpr=64 pi=[49,64)/1 crt=36'39 mlcod 0'0 unknown NOTIFY pruub 1229.978393555s@ mbc={}] state: transitioning to Stray Feb 23 02:59:24 localhost ceph-osd[32652]: osd.3 pg_epoch: 64 pg[4.1b( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=43/43 les/c/f=44/44/0 sis=64 pruub=15.031676292s) [3,4,5] r=0 lpr=64 pi=[43,64)/1 crt=0'0 mlcod 0'0 active pruub 1234.646850586s@ mbc={}] start_peering_interval up [3,1,5] -> [3,4,5], acting [3,1,5] -> [3,4,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:59:24 localhost ceph-osd[32652]: osd.3 pg_epoch: 64 pg[4.1b( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=43/43 les/c/f=44/44/0 sis=64 pruub=15.031676292s) [3,4,5] r=0 lpr=64 
pi=[43,64)/1 crt=0'0 mlcod 0'0 unknown pruub 1234.646850586s@ mbc={}] state: transitioning to Primary Feb 23 02:59:24 localhost podman[59195]: 2026-02-23 07:59:24.244072452 +0000 UTC m=+0.317897769 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, config_id=tripleo_step1) Feb 23 02:59:24 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 02:59:24 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 6.19 scrub ok Feb 23 02:59:25 localhost ceph-osd[31709]: osd.0 pg_epoch: 65 pg[7.c( v 36'39 lc 36'17 (0'0,36'39] local-lis/les=64/65 n=1 ec=41/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [0,1,2] r=0 lpr=64 pi=[49,64)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Feb 23 02:59:25 localhost ceph-osd[32652]: osd.3 pg_epoch: 65 pg[4.1b( empty local-lis/les=64/65 n=0 ec=39/24 lis/c=43/43 les/c/f=44/44/0 sis=64) [3,4,5] r=0 lpr=64 pi=[43,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 23 02:59:26 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 4.13 scrub starts Feb 23 02:59:26 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 4.13 scrub ok Feb 23 02:59:26 localhost ceph-osd[31709]: osd.0 pg_epoch: 66 pg[7.d( v 36'39 (0'0,36'39] local-lis/les=51/52 n=1 ec=41/34 lis/c=51/51 les/c/f=52/52/0 sis=66 pruub=8.439955711s) [3,4,5] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 
crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1234.219360352s@ mbc={}] start_peering_interval up [2,4,0] -> [3,4,5], acting [2,4,0] -> [3,4,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:59:26 localhost ceph-osd[31709]: osd.0 pg_epoch: 66 pg[7.d( v 36'39 (0'0,36'39] local-lis/les=51/52 n=1 ec=41/34 lis/c=51/51 les/c/f=52/52/0 sis=66 pruub=8.439832687s) [3,4,5] r=-1 lpr=66 pi=[51,66)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1234.219360352s@ mbc={}] state: transitioning to Stray Feb 23 02:59:26 localhost ceph-osd[32652]: osd.3 pg_epoch: 66 pg[7.d( empty local-lis/les=0/0 n=0 ec=41/34 lis/c=51/51 les/c/f=52/52/0 sis=66) [3,4,5] r=0 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 23 02:59:28 localhost ceph-osd[32652]: osd.3 pg_epoch: 67 pg[7.d( v 36'39 lc 36'13 (0'0,36'39] local-lis/les=66/67 n=1 ec=41/34 lis/c=51/51 les/c/f=52/52/0 sis=66) [3,4,5] r=0 lpr=66 pi=[51,66)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state: react AllReplicasActivated Activating complete Feb 23 02:59:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 68 pg[7.e( v 36'39 (0'0,36'39] local-lis/les=53/54 n=1 ec=41/34 lis/c=53/53 les/c/f=54/54/0 sis=68 pruub=8.695178986s) [1,5,3] r=-1 lpr=68 pi=[53,68)/1 luod=0'0 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1236.491821289s@ mbc={}] start_peering_interval up [1,0,5] -> [1,5,3], acting [1,0,5] -> [1,5,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:59:28 localhost ceph-osd[31709]: osd.0 pg_epoch: 68 pg[7.e( v 36'39 (0'0,36'39] local-lis/les=53/54 n=1 ec=41/34 lis/c=53/53 les/c/f=54/54/0 sis=68 pruub=8.695054054s) [1,5,3] r=-1 lpr=68 pi=[53,68)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1236.491821289s@ mbc={}] state: transitioning to Stray Feb 23 02:59:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 
68 pg[7.e( empty local-lis/les=0/0 n=0 ec=41/34 lis/c=53/53 les/c/f=54/54/0 sis=68) [1,5,3] r=2 lpr=68 pi=[53,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 23 02:59:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 69 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=55/56 n=1 ec=41/34 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=15.562392235s) [1,5,3] r=2 lpr=69 pi=[55,69)/1 crt=36'39 mlcod 0'0 active pruub 1240.289428711s@ mbc={255={}}] start_peering_interval up [3,5,1] -> [1,5,3], acting [3,5,1] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 23 02:59:29 localhost ceph-osd[32652]: osd.3 pg_epoch: 69 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=55/56 n=1 ec=41/34 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=15.562210083s) [1,5,3] r=2 lpr=69 pi=[55,69)/1 crt=36'39 mlcod 0'0 unknown NOTIFY pruub 1240.289428711s@ mbc={}] state: transitioning to Stray Feb 23 02:59:29 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts Feb 23 02:59:29 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok Feb 23 02:59:30 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 5.10 scrub starts Feb 23 02:59:30 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 5.10 scrub ok Feb 23 02:59:31 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 5.f scrub starts Feb 23 02:59:31 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 5.f scrub ok Feb 23 02:59:34 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 5.1c scrub starts Feb 23 02:59:34 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 5.1c scrub ok Feb 23 02:59:37 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 5.9 scrub starts Feb 23 02:59:37 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 5.9 scrub ok Feb 23 02:59:38 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 5.18 scrub starts Feb 23 02:59:38 
localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 5.18 scrub ok Feb 23 02:59:39 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 5.2 scrub starts Feb 23 02:59:39 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 5.2 scrub ok Feb 23 02:59:39 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 5.1 scrub starts Feb 23 02:59:39 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 5.1 scrub ok Feb 23 02:59:41 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 5.1b scrub starts Feb 23 02:59:41 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 5.1b scrub ok Feb 23 02:59:42 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 5.7 scrub starts Feb 23 02:59:42 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 5.7 scrub ok Feb 23 02:59:45 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 4.18 scrub starts Feb 23 02:59:45 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 4.18 scrub ok Feb 23 02:59:46 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 6.1a deep-scrub starts Feb 23 02:59:46 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 6.1a deep-scrub ok Feb 23 02:59:47 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 7.c deep-scrub starts Feb 23 02:59:47 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 7.c deep-scrub ok Feb 23 02:59:47 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 7.1 scrub starts Feb 23 02:59:48 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 7.1 scrub ok Feb 23 02:59:48 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 7.7 scrub starts Feb 23 02:59:48 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 7.7 scrub ok Feb 23 02:59:50 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 7.d scrub starts Feb 23 02:59:50 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 7.d scrub ok Feb 23 02:59:50 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 4.c 
scrub starts Feb 23 02:59:51 localhost ceph-osd[31709]: log_channel(cluster) log [DBG] : 4.c scrub ok Feb 23 02:59:52 localhost sshd[59224]: main: sshd: ssh-rsa algorithm is disabled Feb 23 02:59:52 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.14 scrub starts Feb 23 02:59:52 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 2.14 scrub ok Feb 23 02:59:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 02:59:54 localhost systemd[1]: tmp-crun.9tCXDf.mount: Deactivated successfully. Feb 23 02:59:54 localhost podman[59226]: 2026-02-23 07:59:54.993489775 +0000 UTC m=+0.063809550 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vendor=Red Hat, Inc.) 
Feb 23 02:59:55 localhost podman[59226]: 2026-02-23 07:59:55.162493256 +0000 UTC m=+0.232812981 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, config_id=tripleo_step1, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, 
distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5) Feb 23 02:59:55 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 02:59:57 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 4.1a scrub starts Feb 23 02:59:57 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 4.1a scrub ok Feb 23 02:59:58 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 6.2 scrub starts Feb 23 02:59:58 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 6.2 scrub ok Feb 23 02:59:59 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts Feb 23 02:59:59 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok Feb 23 03:00:00 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 6.e scrub starts Feb 23 03:00:00 localhost ceph-osd[32652]: log_channel(cluster) log [DBG] : 6.e scrub ok Feb 23 03:00:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:00:26 localhost podman[59331]: 2026-02-23 08:00:26.006418463 +0000 UTC m=+0.079477148 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 23 03:00:26 localhost podman[59331]: 2026-02-23 08:00:26.181908662 +0000 UTC m=+0.254967327 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:00:26 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:00:42 localhost sshd[59358]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:00:48 localhost sshd[59360]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:00:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:00:56 localhost podman[59362]: 2026-02-23 08:00:56.994350153 +0000 UTC m=+0.072978710 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, io.buildah.version=1.41.5, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public) Feb 23 03:00:57 localhost podman[59362]: 2026-02-23 08:00:57.180749015 +0000 UTC m=+0.259377542 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2026-01-12T22:10:14Z, release=1766032510, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:00:57 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 03:00:58 localhost podman[59494]: 2026-02-23 08:00:58.369751937 +0000 UTC m=+0.081754958 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, version=7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 03:00:58 localhost podman[59494]: 2026-02-23 08:00:58.467387369 +0000 UTC m=+0.179390380 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, ceph=True, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, vcs-type=git, maintainer=Guillaume Abrioux , RELEASE=main) Feb 23 03:01:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:01:28 localhost podman[59651]: 2026-02-23 08:01:28.014182642 +0000 UTC m=+0.084318273 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:01:28 localhost podman[59651]: 2026-02-23 08:01:28.200875273 +0000 UTC m=+0.271010894 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., config_id=tripleo_step1, container_name=metrics_qdr, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, io.openshift.expose-services=, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-type=git) Feb 23 03:01:28 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:01:32 localhost sshd[59679]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:01:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:01:59 localhost systemd[1]: tmp-crun.pJSART.mount: Deactivated successfully. 
Feb 23 03:01:59 localhost podman[59681]: 2026-02-23 08:01:59.024716965 +0000 UTC m=+0.102704540 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.5) Feb 23 03:01:59 localhost podman[59681]: 2026-02-23 08:01:59.206790514 +0000 UTC m=+0.284778029 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, release=1766032510, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20260112.1, config_id=tripleo_step1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, io.k8s.description=Red Hat 
OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:01:59 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:02:23 localhost sshd[59790]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:02:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:02:32 localhost systemd[1]: tmp-crun.6zGoeh.mount: Deactivated successfully. 
Feb 23 03:02:32 localhost podman[59792]: 2026-02-23 08:02:32.008179101 +0000 UTC m=+2.091994048 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Feb 23 03:02:32 localhost podman[59792]: 2026-02-23 08:02:32.167636322 +0000 UTC m=+2.251451269 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.buildah.version=1.41.5, vcs-type=git, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr) Feb 23 03:02:32 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:03:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:03:03 localhost systemd[1]: tmp-crun.qS0HA2.mount: Deactivated successfully. 
Feb 23 03:03:03 localhost podman[59882]: 2026-02-23 08:03:03.007727944 +0000 UTC m=+0.075562372 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:03:03 localhost podman[59882]: 2026-02-23 08:03:03.228006823 +0000 UTC m=+0.295841261 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:03:03 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:03:11 localhost sshd[59926]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:03:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:03:33 localhost podman[59928]: 2026-02-23 08:03:33.987497003 +0000 UTC m=+0.066045483 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, container_name=metrics_qdr, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Feb 23 03:03:34 localhost podman[59928]: 2026-02-23 08:03:34.182922716 +0000 UTC m=+0.261471236 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 
'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, version=17.1.13, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 23 03:03:34 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 03:03:56 localhost python3[60004]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:03:57 localhost python3[60049]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833836.6021166-99600-220406904231244/source _original_basename=tmphiw8j2my follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:03:58 localhost python3[60079]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 03:03:58 localhost sshd[60082]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:04:00 localhost ansible-async_wrapper.py[60253]: Invoked with 444241109524 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833839.5135696-99754-140958038790023/AnsiballZ_command.py _ Feb 23 03:04:00 localhost ansible-async_wrapper.py[60256]: Starting module and watcher Feb 23 03:04:00 localhost ansible-async_wrapper.py[60256]: Start watching 60257 (3600) Feb 23 03:04:00 localhost ansible-async_wrapper.py[60257]: Start module (60257) Feb 23 03:04:00 localhost ansible-async_wrapper.py[60253]: Return async_wrapper task started. Feb 23 03:04:00 localhost python3[60277]: ansible-ansible.legacy.async_status Invoked with jid=444241109524.60253 mode=status _async_dir=/tmp/.ansible_async Feb 23 03:04:03 localhost puppet-user[60269]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Feb 23 03:04:03 localhost puppet-user[60269]: (file: /etc/puppet/hiera.yaml) Feb 23 03:04:03 localhost puppet-user[60269]: Warning: Undefined variable '::deploy_config_name'; Feb 23 03:04:03 localhost puppet-user[60269]: (file & line not available) Feb 23 03:04:03 localhost puppet-user[60269]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 23 03:04:03 localhost puppet-user[60269]: (file & line not available) Feb 23 03:04:03 localhost puppet-user[60269]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Feb 23 03:04:03 localhost puppet-user[60269]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Feb 23 03:04:03 localhost puppet-user[60269]: Notice: Compiled catalog for np0005626465.localdomain in environment production in 0.11 seconds Feb 23 03:04:03 localhost puppet-user[60269]: Notice: Applied catalog in 0.03 seconds Feb 23 03:04:03 localhost puppet-user[60269]: Application: Feb 23 03:04:03 localhost puppet-user[60269]: Initial environment: production Feb 23 03:04:03 localhost puppet-user[60269]: Converged environment: production Feb 23 03:04:03 localhost puppet-user[60269]: Run mode: user Feb 23 03:04:03 localhost puppet-user[60269]: Changes: Feb 23 03:04:03 localhost puppet-user[60269]: Events: Feb 23 03:04:03 localhost puppet-user[60269]: Resources: Feb 23 03:04:03 localhost puppet-user[60269]: Total: 10 Feb 23 03:04:03 localhost puppet-user[60269]: Time: Feb 23 03:04:03 localhost puppet-user[60269]: Schedule: 0.00 Feb 23 03:04:03 localhost puppet-user[60269]: File: 0.00 Feb 23 03:04:03 localhost puppet-user[60269]: Exec: 0.01 Feb 23 03:04:03 localhost puppet-user[60269]: Augeas: 0.01 Feb 23 03:04:03 localhost puppet-user[60269]: Transaction evaluation: 
0.03 Feb 23 03:04:03 localhost puppet-user[60269]: Catalog application: 0.03 Feb 23 03:04:03 localhost puppet-user[60269]: Config retrieval: 0.14 Feb 23 03:04:03 localhost puppet-user[60269]: Last run: 1771833843 Feb 23 03:04:03 localhost puppet-user[60269]: Filebucket: 0.00 Feb 23 03:04:03 localhost puppet-user[60269]: Total: 0.04 Feb 23 03:04:03 localhost puppet-user[60269]: Version: Feb 23 03:04:03 localhost puppet-user[60269]: Config: 1771833843 Feb 23 03:04:03 localhost puppet-user[60269]: Puppet: 7.10.0 Feb 23 03:04:03 localhost ansible-async_wrapper.py[60257]: Module complete (60257) Feb 23 03:04:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:04:04 localhost systemd[1]: tmp-crun.YcMSPK.mount: Deactivated successfully. Feb 23 03:04:04 localhost podman[60464]: 2026-02-23 08:04:04.620758581 +0000 UTC m=+0.067840577 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, architecture=x86_64, release=1766032510, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:04:04 localhost podman[60464]: 2026-02-23 08:04:04.808922053 +0000 UTC m=+0.256004059 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, summary=Red 
Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13) Feb 23 03:04:04 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:04:05 localhost ansible-async_wrapper.py[60256]: Done in kid B. Feb 23 03:04:10 localhost python3[60509]: ansible-ansible.legacy.async_status Invoked with jid=444241109524.60253 mode=status _async_dir=/tmp/.ansible_async Feb 23 03:04:11 localhost python3[60525]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 23 03:04:11 localhost python3[60541]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 03:04:12 localhost python3[60591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:04:12 localhost python3[60609]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpaprphr50 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 23 
03:04:13 localhost python3[60639]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:04:14 localhost python3[60742]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Feb 23 03:04:15 localhost python3[60761]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:04:17 localhost python3[60793]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 03:04:17 localhost python3[60843]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:04:17 localhost python3[60861]: ansible-ansible.legacy.file Invoked with 
mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:04:18 localhost python3[60923]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:04:18 localhost python3[60941]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:04:19 localhost python3[61003]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:04:19 localhost python3[61021]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None 
setype=None attributes=None Feb 23 03:04:20 localhost python3[61083]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:04:20 localhost python3[61101]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:04:20 localhost python3[61131]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 03:04:20 localhost systemd[1]: Reloading. Feb 23 03:04:21 localhost systemd-rc-local-generator[61158]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:04:21 localhost systemd-sysv-generator[61161]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:04:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 03:04:21 localhost python3[61216]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:04:22 localhost python3[61234]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:04:22 localhost python3[61296]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:04:22 localhost python3[61314]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:04:23 localhost python3[61344]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 03:04:23 localhost systemd[1]: Reloading. Feb 23 03:04:23 localhost systemd-rc-local-generator[61366]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 23 03:04:23 localhost systemd-sysv-generator[61370]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:04:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:04:23 localhost systemd[1]: Starting Create netns directory... Feb 23 03:04:23 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 23 03:04:23 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 23 03:04:23 localhost systemd[1]: Finished Create netns directory. Feb 23 03:04:24 localhost python3[61402]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Feb 23 03:04:27 localhost python3[61460]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Feb 23 03:04:27 localhost podman[61603]: 2026-02-23 08:04:27.26852478 +0000 UTC m=+0.072842090 container create b9ff61d4de113cca1d2d08aa52599247bf501ac3655f037bace36a29bd599eb7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, config_id=tripleo_step3, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_init_log, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:04:27 localhost podman[61611]: 2026-02-23 08:04:27.272810181 +0000 UTC m=+0.067485437 container create 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_id=tripleo_step3, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:09Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, version=17.1.13, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp-rhel9/openstack-rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:09Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, release=1766032510, batch=17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog) Feb 23 03:04:27 localhost podman[61612]: 2026-02-23 08:04:27.295858342 +0000 
UTC m=+0.088512117 container create 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, url=https://www.redhat.com, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, distribution-scope=public, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:04:27 localhost podman[61634]: 2026-02-23 08:04:27.308010652 +0000 UTC m=+0.084336650 container create c2adc26eff5b9c8a26fccd39d4f074c9536ce1fe2a4c09d9fb2faacfe924a94e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 
'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, release=1766032510, container_name=nova_virtlogd_wrapper, vendor=Red Hat, Inc., config_id=tripleo_step3, name=rhosp-rhel9/openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13) Feb 23 03:04:27 localhost systemd[1]: Started libpod-conmon-37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18.scope. Feb 23 03:04:27 localhost podman[61659]: 2026-02-23 08:04:27.326364952 +0000 UTC m=+0.066097145 container create d06dbff4b1be2facc48bbb74d84fd7caeb69ff0da94acb5f9fa218076adbb168 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, container_name=nova_statedir_owner, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 
'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, version=17.1.13) Feb 23 03:04:27 localhost podman[61603]: 2026-02-23 08:04:27.227708197 +0000 UTC m=+0.032025527 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Feb 23 03:04:27 localhost podman[61611]: 2026-02-23 08:04:27.231980536 +0000 UTC m=+0.026655802 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Feb 23 03:04:27 localhost podman[61612]: 2026-02-23 08:04:27.234034729 +0000 UTC m=+0.026688534 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Feb 23 03:04:27 localhost systemd[1]: Started libcrun container. Feb 23 03:04:27 localhost systemd[1]: Started libpod-conmon-c2adc26eff5b9c8a26fccd39d4f074c9536ce1fe2a4c09d9fb2faacfe924a94e.scope. Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c79b6830874cb444625739070b7b53888237db094dedda599c30d49ec5bc247/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c79b6830874cb444625739070b7b53888237db094dedda599c30d49ec5bc247/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost systemd[1]: Started libcrun container. Feb 23 03:04:27 localhost systemd[1]: Started libpod-conmon-d06dbff4b1be2facc48bbb74d84fd7caeb69ff0da94acb5f9fa218076adbb168.scope. 
Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f517692be756bbbec8b52ba00fac8538d0b4cc258170a641ad09cd15a7f1f00b/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f517692be756bbbec8b52ba00fac8538d0b4cc258170a641ad09cd15a7f1f00b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f517692be756bbbec8b52ba00fac8538d0b4cc258170a641ad09cd15a7f1f00b/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f517692be756bbbec8b52ba00fac8538d0b4cc258170a641ad09cd15a7f1f00b/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f517692be756bbbec8b52ba00fac8538d0b4cc258170a641ad09cd15a7f1f00b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f517692be756bbbec8b52ba00fac8538d0b4cc258170a641ad09cd15a7f1f00b/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f517692be756bbbec8b52ba00fac8538d0b4cc258170a641ad09cd15a7f1f00b/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost podman[61611]: 2026-02-23 08:04:27.350200297 +0000 UTC m=+0.144875553 container init 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 
rsyslog, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, container_name=rsyslog, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, release=1766032510, tcib_managed=true, name=rhosp-rhel9/openstack-rsyslog, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:09Z, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public) Feb 23 03:04:27 localhost podman[61634]: 2026-02-23 08:04:27.352954511 +0000 UTC m=+0.129280499 container init c2adc26eff5b9c8a26fccd39d4f074c9536ce1fe2a4c09d9fb2faacfe924a94e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, version=17.1.13, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=nova_virtlogd_wrapper, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:04:27 localhost systemd[1]: Started libcrun container. 
Feb 23 03:04:27 localhost podman[61634]: 2026-02-23 08:04:27.255724 +0000 UTC m=+0.032049998 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b64c7c11c525f0f08b1c3a64b5cc69dc38aba5ef31b5a322497f8cdde7fe737/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost podman[61611]: 2026-02-23 08:04:27.358351646 +0000 UTC m=+0.153026932 container start 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., container_name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:09Z, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:09Z, name=rhosp-rhel9/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, tcib_managed=true, release=1766032510, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, url=https://www.redhat.com, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b64c7c11c525f0f08b1c3a64b5cc69dc38aba5ef31b5a322497f8cdde7fe737/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b64c7c11c525f0f08b1c3a64b5cc69dc38aba5ef31b5a322497f8cdde7fe737/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost python3[61460]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=eeb65e0a12c94af5b1e666d55df1d6ee --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume 
/var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Feb 23 03:04:27 localhost podman[61634]: 2026-02-23 08:04:27.364327998 +0000 UTC m=+0.140653986 container start c2adc26eff5b9c8a26fccd39d4f074c9536ce1fe2a4c09d9fb2faacfe924a94e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-libvirt, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, architecture=x86_64, build-date=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper, config_id=tripleo_step3, release=1766032510, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, tcib_managed=true) Feb 23 03:04:27 localhost python3[61460]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d8e86b11aed37635c57249fefb951044 --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label 
managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 
--security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 03:04:27 localhost systemd[1]: Started libpod-conmon-1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.scope. 
Feb 23 03:04:27 localhost podman[61659]: 2026-02-23 08:04:27.370474096 +0000 UTC m=+0.110206299 container init d06dbff4b1be2facc48bbb74d84fd7caeb69ff0da94acb5f9fa218076adbb168 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_statedir_owner, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, description=Red Hat 
OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, release=1766032510, config_id=tripleo_step3) Feb 23 03:04:27 localhost systemd[1]: Started libcrun container. Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c78c509d85cf7de25ab03c4f1696d08f61fbbe2c31ef788a4d3ccaab70010a/merged/scripts supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c78c509d85cf7de25ab03c4f1696d08f61fbbe2c31ef788a4d3ccaab70010a/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost systemd[1]: Started libpod-conmon-b9ff61d4de113cca1d2d08aa52599247bf501ac3655f037bace36a29bd599eb7.scope. Feb 23 03:04:27 localhost systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring. Feb 23 03:04:27 localhost podman[61659]: 2026-02-23 08:04:27.299364969 +0000 UTC m=+0.039097172 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 23 03:04:27 localhost systemd[1]: Started libcrun container. Feb 23 03:04:27 localhost systemd[1]: Created slice User Slice of UID 0. Feb 23 03:04:27 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131bb1452fdde8cff761e538586587f172ee767b4fb401cc162049cabeab656b/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost systemd[1]: libpod-d06dbff4b1be2facc48bbb74d84fd7caeb69ff0da94acb5f9fa218076adbb168.scope: Deactivated successfully. 
Feb 23 03:04:27 localhost podman[61659]: 2026-02-23 08:04:27.428675578 +0000 UTC m=+0.168407771 container start d06dbff4b1be2facc48bbb74d84fd7caeb69ff0da94acb5f9fa218076adbb168 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_statedir_owner, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=) Feb 23 03:04:27 localhost podman[61659]: 2026-02-23 08:04:27.428916766 +0000 UTC m=+0.168648979 container attach d06dbff4b1be2facc48bbb74d84fd7caeb69ff0da94acb5f9fa218076adbb168 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, architecture=x86_64, container_name=nova_statedir_owner, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, 
version=17.1.13, batch=17.1_20260112.1, config_id=tripleo_step3, com.redhat.component=openstack-nova-compute-container, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:04:27 localhost podman[61659]: 2026-02-23 08:04:27.431560966 +0000 UTC m=+0.171293189 container died d06dbff4b1be2facc48bbb74d84fd7caeb69ff0da94acb5f9fa218076adbb168 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_statedir_owner, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.5, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, maintainer=OpenStack TripleO Team, 
konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, release=1766032510, architecture=x86_64) Feb 23 03:04:27 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Feb 23 03:04:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:04:27 localhost podman[61612]: 2026-02-23 08:04:27.435941209 +0000 UTC m=+0.228594984 container init 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, version=17.1.13, managed_by=tripleo_ansible, container_name=collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Feb 23 03:04:27 localhost systemd[1]: Starting User Manager for UID 0... Feb 23 03:04:27 localhost systemd[1]: libpod-37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18.scope: Deactivated successfully. Feb 23 03:04:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. 
Feb 23 03:04:27 localhost systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring. Feb 23 03:04:27 localhost podman[61612]: 2026-02-23 08:04:27.464363025 +0000 UTC m=+0.257016800 container start 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vendor=Red Hat, Inc.) 
Feb 23 03:04:27 localhost python3[61460]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=4767aaabc3de112d8791c290aa2b669d --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume /var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume /var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Feb 23 03:04:27 localhost podman[61603]: 2026-02-23 08:04:27.482863799 +0000 UTC m=+0.287181109 container init b9ff61d4de113cca1d2d08aa52599247bf501ac3655f037bace36a29bd599eb7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_init_log, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step3, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'command': 
['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, io.openshift.expose-services=, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.13, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, distribution-scope=public) Feb 23 03:04:27 localhost podman[61603]: 2026-02-23 08:04:27.493412851 +0000 UTC m=+0.297730161 container start b9ff61d4de113cca1d2d08aa52599247bf501ac3655f037bace36a29bd599eb7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_init_log, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 
'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2026-01-12T23:07:30Z, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., batch=17.1_20260112.1, managed_by=tripleo_ansible) Feb 23 03:04:27 localhost systemd[1]: libpod-b9ff61d4de113cca1d2d08aa52599247bf501ac3655f037bace36a29bd599eb7.scope: Deactivated successfully. Feb 23 03:04:27 localhost python3[61460]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer Feb 23 03:04:27 localhost podman[61760]: 2026-02-23 08:04:27.522522177 +0000 UTC m=+0.062927337 container health_status 
1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.buildah.version=1.41.5, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, container_name=collectd, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:04:27 localhost podman[61739]: 2026-02-23 08:04:27.563422443 +0000 UTC m=+0.126679860 container cleanup d06dbff4b1be2facc48bbb74d84fd7caeb69ff0da94acb5f9fa218076adbb168 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, vendor=Red Hat, Inc., config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, config_id=tripleo_step3, distribution-scope=public, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, 
url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, container_name=nova_statedir_owner, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:04:27 localhost systemd[1]: libpod-conmon-d06dbff4b1be2facc48bbb74d84fd7caeb69ff0da94acb5f9fa218076adbb168.scope: Deactivated successfully. 
Feb 23 03:04:27 localhost python3[61460]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1771832380 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py Feb 23 03:04:27 localhost systemd[61741]: Queued start job for default target Main User Target. Feb 23 03:04:27 localhost systemd[61741]: Created slice User Application Slice. Feb 23 03:04:27 localhost systemd[61741]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Feb 23 03:04:27 localhost systemd[61741]: Started Daily Cleanup of User's Temporary Directories. 
Feb 23 03:04:27 localhost systemd[61741]: Reached target Paths. Feb 23 03:04:27 localhost systemd[61741]: Reached target Timers. Feb 23 03:04:27 localhost podman[61760]: 2026-02-23 08:04:27.58368968 +0000 UTC m=+0.124094840 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, vcs-type=git, build-date=2026-01-12T22:10:15Z, container_name=collectd, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1766032510, com.redhat.component=openstack-collectd-container) Feb 23 03:04:27 localhost systemd[61741]: Starting D-Bus User Message Bus Socket... Feb 23 03:04:27 localhost systemd[61741]: Starting Create User's Volatile Files and Directories... Feb 23 03:04:27 localhost podman[61760]: unhealthy Feb 23 03:04:27 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:04:27 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Failed with result 'exit-code'. Feb 23 03:04:27 localhost systemd[61741]: Finished Create User's Volatile Files and Directories. 
Feb 23 03:04:27 localhost podman[61801]: 2026-02-23 08:04:27.598353507 +0000 UTC m=+0.090822557 container died b9ff61d4de113cca1d2d08aa52599247bf501ac3655f037bace36a29bd599eb7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, build-date=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team) Feb 23 03:04:27 localhost systemd[61741]: Listening on D-Bus User Message Bus Socket. 
Feb 23 03:04:27 localhost systemd[61741]: Reached target Sockets. Feb 23 03:04:27 localhost systemd[61741]: Reached target Basic System. Feb 23 03:04:27 localhost systemd[61741]: Reached target Main User Target. Feb 23 03:04:27 localhost systemd[61741]: Startup finished in 144ms. Feb 23 03:04:27 localhost systemd[1]: Started User Manager for UID 0. Feb 23 03:04:27 localhost systemd[1]: Started Session c1 of User root. Feb 23 03:04:27 localhost systemd[1]: Started Session c2 of User root. Feb 23 03:04:27 localhost podman[61761]: 2026-02-23 08:04:27.664730539 +0000 UTC m=+0.202821350 container died 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, 
build-date=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vendor=Red Hat, Inc., batch=17.1_20260112.1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-rsyslog, version=17.1.13, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, tcib_managed=true) Feb 23 03:04:27 localhost systemd[1]: session-c2.scope: Deactivated successfully. Feb 23 03:04:27 localhost systemd[1]: session-c1.scope: Deactivated successfully. 
Feb 23 03:04:27 localhost podman[61806]: 2026-02-23 08:04:27.694389362 +0000 UTC m=+0.176800557 container cleanup b9ff61d4de113cca1d2d08aa52599247bf501ac3655f037bace36a29bd599eb7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, version=17.1.13, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ceilometer_init_log, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:04:27 localhost systemd[1]: 
libpod-conmon-b9ff61d4de113cca1d2d08aa52599247bf501ac3655f037bace36a29bd599eb7.scope: Deactivated successfully. Feb 23 03:04:27 localhost podman[61761]: 2026-02-23 08:04:27.83975919 +0000 UTC m=+0.377849991 container cleanup 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, release=1766032510, name=rhosp-rhel9/openstack-rsyslog, config_id=tripleo_step3, batch=17.1_20260112.1, build-date=2026-01-12T22:10:09Z, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:09Z, com.redhat.component=openstack-rsyslog-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog) Feb 23 03:04:27 localhost systemd[1]: libpod-conmon-37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18.scope: Deactivated successfully. Feb 23 03:04:27 localhost podman[61968]: 2026-02-23 08:04:27.921400467 +0000 UTC m=+0.062023570 container create 6e90a2d81f48f74065f9647e39f4f6a4d49a769228de6b6f7b0d1b32943c83f9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-libvirt, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, 
konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, architecture=x86_64, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:31:49Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 23 03:04:27 localhost systemd[1]: Started libpod-conmon-6e90a2d81f48f74065f9647e39f4f6a4d49a769228de6b6f7b0d1b32943c83f9.scope. Feb 23 03:04:27 localhost systemd[1]: Started libcrun container. Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf88b2818e1f9a11fb9fd8baf45bbb8189e63ee5a93830a36886c8def5f28b71/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf88b2818e1f9a11fb9fd8baf45bbb8189e63ee5a93830a36886c8def5f28b71/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf88b2818e1f9a11fb9fd8baf45bbb8189e63ee5a93830a36886c8def5f28b71/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf88b2818e1f9a11fb9fd8baf45bbb8189e63ee5a93830a36886c8def5f28b71/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:27 localhost podman[61968]: 2026-02-23 08:04:27.886756242 +0000 UTC m=+0.027379425 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 03:04:27 localhost podman[61968]: 2026-02-23 08:04:27.993361449 +0000 UTC m=+0.133984542 container init 6e90a2d81f48f74065f9647e39f4f6a4d49a769228de6b6f7b0d1b32943c83f9 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.buildah.version=1.41.5, batch=17.1_20260112.1, build-date=2026-01-12T23:31:49Z, version=17.1.13, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:04:28 localhost podman[61968]: 2026-02-23 08:04:28.004323063 +0000 UTC m=+0.144946196 container start 6e90a2d81f48f74065f9647e39f4f6a4d49a769228de6b6f7b0d1b32943c83f9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:31:49Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, version=17.1.13, 
batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, name=rhosp-rhel9/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 23 03:04:28 localhost podman[62024]: 2026-02-23 08:04:28.199476008 +0000 UTC m=+0.096804560 container create 324f8df060000db9c956f3e95ef75d74566bd4eb3c7012f06aca4d0fa9131e26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:31:49Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'depends_on': 
['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_virtsecretd, release=1766032510, build-date=2026-01-12T23:31:49Z, architecture=x86_64, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true) Feb 23 03:04:28 localhost systemd[1]: Started libpod-conmon-324f8df060000db9c956f3e95ef75d74566bd4eb3c7012f06aca4d0fa9131e26.scope. Feb 23 03:04:28 localhost podman[62024]: 2026-02-23 08:04:28.14735159 +0000 UTC m=+0.044680182 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 03:04:28 localhost systemd[1]: Started libcrun container. Feb 23 03:04:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd7ed36ffe690aa6d37e7d89968f7ddc4fb4fc144d90fb2ee3732e0e8b1e009/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd7ed36ffe690aa6d37e7d89968f7ddc4fb4fc144d90fb2ee3732e0e8b1e009/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd7ed36ffe690aa6d37e7d89968f7ddc4fb4fc144d90fb2ee3732e0e8b1e009/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd7ed36ffe690aa6d37e7d89968f7ddc4fb4fc144d90fb2ee3732e0e8b1e009/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd7ed36ffe690aa6d37e7d89968f7ddc4fb4fc144d90fb2ee3732e0e8b1e009/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd7ed36ffe690aa6d37e7d89968f7ddc4fb4fc144d90fb2ee3732e0e8b1e009/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:28 localhost 
kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd7ed36ffe690aa6d37e7d89968f7ddc4fb4fc144d90fb2ee3732e0e8b1e009/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:28 localhost podman[62024]: 2026-02-23 08:04:28.269737328 +0000 UTC m=+0.167065880 container init 324f8df060000db9c956f3e95ef75d74566bd4eb3c7012f06aca4d0fa9131e26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, tcib_managed=true, managed_by=tripleo_ansible, release=1766032510, io.openshift.expose-services=, config_id=tripleo_step3, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, container_name=nova_virtsecretd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:31:49Z, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:04:28 localhost podman[62024]: 2026-02-23 08:04:28.280844137 +0000 UTC m=+0.178172679 container start 324f8df060000db9c956f3e95ef75d74566bd4eb3c7012f06aca4d0fa9131e26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step3, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, version=17.1.13, batch=17.1_20260112.1, container_name=nova_virtsecretd, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:31:49Z, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git) Feb 23 03:04:28 localhost python3[61460]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d8e86b11aed37635c57249fefb951044 --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': 
['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 03:04:28 localhost systemd[1]: var-lib-containers-storage-overlay-2c79b6830874cb444625739070b7b53888237db094dedda599c30d49ec5bc247-merged.mount: Deactivated successfully. Feb 23 03:04:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18-userdata-shm.mount: Deactivated successfully. Feb 23 03:04:28 localhost systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring. Feb 23 03:04:28 localhost systemd[1]: Started Session c3 of User root. Feb 23 03:04:28 localhost systemd[1]: session-c3.scope: Deactivated successfully. 
Feb 23 03:04:28 localhost podman[62160]: 2026-02-23 08:04:28.707245005 +0000 UTC m=+0.067538768 container create 50a7d2ed093fa42d73c6c19f205cf9a2203ca7a7045a875f2001640aa6517496 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, version=17.1.13, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step3, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:31:49Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtnodedevd, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:04:28 localhost podman[62161]: 2026-02-23 08:04:28.729497473 +0000 UTC m=+0.085698092 container create 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1) Feb 23 03:04:28 localhost systemd[1]: Started libpod-conmon-50a7d2ed093fa42d73c6c19f205cf9a2203ca7a7045a875f2001640aa6517496.scope. Feb 23 03:04:28 localhost systemd[1]: Started libpod-conmon-828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.scope. Feb 23 03:04:28 localhost systemd[1]: Started libcrun container. Feb 23 03:04:28 localhost systemd[1]: Started libcrun container. Feb 23 03:04:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7ff2e7cc809310771ac3fc05b2a8f4a3ab1e20a385f107124aff4afabfa04a/merged/etc/target supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7ff2e7cc809310771ac3fc05b2a8f4a3ab1e20a385f107124aff4afabfa04a/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2aa44f25d5aa8bbb54bb02b77fdbdfb05a4397b4efc62b47a02ba2f31ca967/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2aa44f25d5aa8bbb54bb02b77fdbdfb05a4397b4efc62b47a02ba2f31ca967/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2aa44f25d5aa8bbb54bb02b77fdbdfb05a4397b4efc62b47a02ba2f31ca967/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2aa44f25d5aa8bbb54bb02b77fdbdfb05a4397b4efc62b47a02ba2f31ca967/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:28 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/fd2aa44f25d5aa8bbb54bb02b77fdbdfb05a4397b4efc62b47a02ba2f31ca967/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2aa44f25d5aa8bbb54bb02b77fdbdfb05a4397b4efc62b47a02ba2f31ca967/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:28 localhost podman[62160]: 2026-02-23 08:04:28.671577118 +0000 UTC m=+0.031870921 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 03:04:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2aa44f25d5aa8bbb54bb02b77fdbdfb05a4397b4efc62b47a02ba2f31ca967/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:28 localhost podman[62161]: 2026-02-23 08:04:28.676840859 +0000 UTC m=+0.033041528 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Feb 23 03:04:28 localhost podman[62160]: 2026-02-23 08:04:28.777040091 +0000 UTC m=+0.137333874 container init 50a7d2ed093fa42d73c6c19f205cf9a2203ca7a7045a875f2001640aa6517496 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, release=1766032510, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, container_name=nova_virtnodedevd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:31:49Z, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, config_id=tripleo_step3, 
io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., build-date=2026-01-12T23:31:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=) Feb 23 03:04:28 localhost podman[62160]: 2026-02-23 08:04:28.783798407 +0000 UTC m=+0.144092150 container start 50a7d2ed093fa42d73c6c19f205cf9a2203ca7a7045a875f2001640aa6517496 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, config_id=tripleo_step3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtnodedevd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, batch=17.1_20260112.1, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13) Feb 23 03:04:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. 
Feb 23 03:04:28 localhost python3[61460]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d8e86b11aed37635c57249fefb951044 --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 03:04:28 localhost podman[62161]: 2026-02-23 08:04:28.78948714 +0000 UTC m=+0.145687759 container init 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, batch=17.1_20260112.1, 
cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z) Feb 23 03:04:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:04:28 localhost podman[62161]: 2026-02-23 08:04:28.813007997 +0000 UTC m=+0.169208626 container start 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5) Feb 23 03:04:28 localhost systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring. 
Feb 23 03:04:28 localhost python3[61460]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=a2261a69f76ac41646722c019ecc270e --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Feb 23 03:04:28 localhost systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring. Feb 23 03:04:28 localhost systemd[1]: Started Session c4 of User root. Feb 23 03:04:28 localhost systemd[1]: Started Session c5 of User root. Feb 23 03:04:28 localhost systemd[1]: session-c5.scope: Deactivated successfully. Feb 23 03:04:28 localhost kernel: Loading iSCSI transport class v2.0-870. Feb 23 03:04:28 localhost systemd[1]: session-c4.scope: Deactivated successfully. 
Feb 23 03:04:28 localhost podman[62210]: 2026-02-23 08:04:28.893601291 +0000 UTC m=+0.072114087 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.5, version=17.1.13, architecture=x86_64, release=1766032510, 
url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git) Feb 23 03:04:28 localhost podman[62210]: 2026-02-23 08:04:28.976885689 +0000 UTC m=+0.155398485 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=iscsid, version=17.1.13, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git) Feb 23 03:04:28 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:04:29 localhost podman[62337]: 2026-02-23 08:04:29.260704334 +0000 UTC m=+0.088130335 container create 2c504d86220256a9811f4c623f80bcbefa1d44d0424e1cd65152e2dcc898c4cc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, build-date=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, config_id=tripleo_step3, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-libvirt, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, managed_by=tripleo_ansible, release=1766032510, container_name=nova_virtstoraged, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:04:29 localhost systemd[1]: Started libpod-conmon-2c504d86220256a9811f4c623f80bcbefa1d44d0424e1cd65152e2dcc898c4cc.scope. Feb 23 03:04:29 localhost podman[62337]: 2026-02-23 08:04:29.205672388 +0000 UTC m=+0.033098419 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 03:04:29 localhost systemd[1]: Started libcrun container. 
Feb 23 03:04:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3971ccf5a3d6277b5f480b29d3cf6f87cd8292b24c0c7a3df165e34155d06d3/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3971ccf5a3d6277b5f480b29d3cf6f87cd8292b24c0c7a3df165e34155d06d3/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3971ccf5a3d6277b5f480b29d3cf6f87cd8292b24c0c7a3df165e34155d06d3/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3971ccf5a3d6277b5f480b29d3cf6f87cd8292b24c0c7a3df165e34155d06d3/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3971ccf5a3d6277b5f480b29d3cf6f87cd8292b24c0c7a3df165e34155d06d3/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3971ccf5a3d6277b5f480b29d3cf6f87cd8292b24c0c7a3df165e34155d06d3/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3971ccf5a3d6277b5f480b29d3cf6f87cd8292b24c0c7a3df165e34155d06d3/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:29 localhost podman[62337]: 2026-02-23 08:04:29.322027732 +0000 UTC m=+0.149453743 container init 2c504d86220256a9811f4c623f80bcbefa1d44d0424e1cd65152e2dcc898c4cc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, distribution-scope=public, batch=17.1_20260112.1, 
summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-libvirt, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, version=17.1.13, vcs-type=git, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:31:49Z, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_virtstoraged) Feb 23 03:04:29 localhost podman[62337]: 2026-02-23 08:04:29.327996083 +0000 UTC m=+0.155422094 container start 2c504d86220256a9811f4c623f80bcbefa1d44d0424e1cd65152e2dcc898c4cc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, container_name=nova_virtstoraged, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, 
com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, 
release=1766032510, tcib_managed=true, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:31:49Z, build-date=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, url=https://www.redhat.com, io.openshift.expose-services=) Feb 23 03:04:29 localhost python3[61460]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d8e86b11aed37635c57249fefb951044 --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', 
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume 
/var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 03:04:29 localhost systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring. Feb 23 03:04:29 localhost systemd[1]: Started Session c6 of User root. Feb 23 03:04:29 localhost systemd[1]: session-c6.scope: Deactivated successfully. Feb 23 03:04:29 localhost podman[62439]: 2026-02-23 08:04:29.680734438 +0000 UTC m=+0.073369575 container create d6722d98893ec366bf5c47c8cc52a74ae4caee33e3af6210be5df446077f6b04 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, config_id=tripleo_step3, io.openshift.expose-services=, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 
'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, vcs-type=git, name=rhosp-rhel9/openstack-nova-libvirt, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, container_name=nova_virtqemud, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:31:49Z) Feb 23 03:04:29 localhost systemd[1]: Started libpod-conmon-d6722d98893ec366bf5c47c8cc52a74ae4caee33e3af6210be5df446077f6b04.scope. Feb 23 03:04:29 localhost podman[62439]: 2026-02-23 08:04:29.636601215 +0000 UTC m=+0.029236382 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 03:04:29 localhost systemd[1]: Started libcrun container. Feb 23 03:04:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad754dc0ccbca5f94297c9f7b89ba922c9a3f604bb7e57f1a8f4658470b198ae/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad754dc0ccbca5f94297c9f7b89ba922c9a3f604bb7e57f1a8f4658470b198ae/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad754dc0ccbca5f94297c9f7b89ba922c9a3f604bb7e57f1a8f4658470b198ae/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad754dc0ccbca5f94297c9f7b89ba922c9a3f604bb7e57f1a8f4658470b198ae/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad754dc0ccbca5f94297c9f7b89ba922c9a3f604bb7e57f1a8f4658470b198ae/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad754dc0ccbca5f94297c9f7b89ba922c9a3f604bb7e57f1a8f4658470b198ae/merged/var/lib/libvirt supports 
timestamps until 2038 (0x7fffffff) Feb 23 03:04:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad754dc0ccbca5f94297c9f7b89ba922c9a3f604bb7e57f1a8f4658470b198ae/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad754dc0ccbca5f94297c9f7b89ba922c9a3f604bb7e57f1a8f4658470b198ae/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:29 localhost podman[62439]: 2026-02-23 08:04:29.748806003 +0000 UTC m=+0.141441150 container init d6722d98893ec366bf5c47c8cc52a74ae4caee33e3af6210be5df446077f6b04 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, version=17.1.13, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_virtqemud, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20260112.1, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:04:29 localhost podman[62439]: 2026-02-23 
08:04:29.759402365 +0000 UTC m=+0.152037512 container start d6722d98893ec366bf5c47c8cc52a74ae4caee33e3af6210be5df446077f6b04 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, org.opencontainers.image.created=2026-01-12T23:31:49Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, release=1766032510, name=rhosp-rhel9/openstack-nova-libvirt, vcs-type=git, build-date=2026-01-12T23:31:49Z, container_name=nova_virtqemud, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5) Feb 23 03:04:29 localhost python3[61460]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d8e86b11aed37635c57249fefb951044 --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume 
/etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 03:04:29 localhost systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring. Feb 23 03:04:29 localhost systemd[1]: Started Session c7 of User root. Feb 23 03:04:29 localhost systemd[1]: session-c7.scope: Deactivated successfully. 
Feb 23 03:04:30 localhost podman[62544]: 2026-02-23 08:04:30.177008246 +0000 UTC m=+0.073309705 container create a7b4b542225a3b83a234f06a3641c8943c916ec8b505305def9df0ea969fba92 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, managed_by=tripleo_ansible, build-date=2026-01-12T23:31:49Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, container_name=nova_virtproxyd) Feb 23 03:04:30 localhost systemd[1]: Started libpod-conmon-a7b4b542225a3b83a234f06a3641c8943c916ec8b505305def9df0ea969fba92.scope. Feb 23 03:04:30 localhost podman[62544]: 2026-02-23 08:04:30.131147959 +0000 UTC m=+0.027449438 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 03:04:30 localhost systemd[1]: Started libcrun container. 
Feb 23 03:04:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f754c62077bf37caa8ac647e3f2dd870b797112a74bfb7c91c34f0be7af204/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f754c62077bf37caa8ac647e3f2dd870b797112a74bfb7c91c34f0be7af204/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f754c62077bf37caa8ac647e3f2dd870b797112a74bfb7c91c34f0be7af204/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f754c62077bf37caa8ac647e3f2dd870b797112a74bfb7c91c34f0be7af204/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f754c62077bf37caa8ac647e3f2dd870b797112a74bfb7c91c34f0be7af204/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f754c62077bf37caa8ac647e3f2dd870b797112a74bfb7c91c34f0be7af204/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f754c62077bf37caa8ac647e3f2dd870b797112a74bfb7c91c34f0be7af204/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:30 localhost podman[62544]: 2026-02-23 08:04:30.245583265 +0000 UTC m=+0.141884724 container init a7b4b542225a3b83a234f06a3641c8943c916ec8b505305def9df0ea969fba92 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, 
description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, container_name=nova_virtproxyd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, batch=17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 23 03:04:30 localhost podman[62544]: 2026-02-23 08:04:30.255098654 +0000 UTC m=+0.151400103 container start a7b4b542225a3b83a234f06a3641c8943c916ec8b505305def9df0ea969fba92 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, name=rhosp-rhel9/openstack-nova-libvirt, tcib_managed=true, distribution-scope=public, vcs-type=git, build-date=2026-01-12T23:31:49Z, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.13, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, container_name=nova_virtproxyd) Feb 23 03:04:30 localhost python3[61460]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d8e86b11aed37635c57249fefb951044 --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', 
'/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume 
/var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 23 03:04:30 localhost systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring.
Feb 23 03:04:30 localhost systemd[1]: Started Session c8 of User root.
Feb 23 03:04:30 localhost systemd[1]: session-c8.scope: Deactivated successfully.
Feb 23 03:04:30 localhost python3[62624]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:31 localhost python3[62640]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:31 localhost python3[62656]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:31 localhost python3[62672]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:31 localhost python3[62688]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:32 localhost python3[62704]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:32 localhost python3[62720]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:32 localhost python3[62736]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:32 localhost python3[62752]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:33 localhost python3[62768]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 03:04:33 localhost python3[62784]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 03:04:33 localhost python3[62800]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 03:04:33 localhost python3[62816]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 03:04:34 localhost python3[62832]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 03:04:34 localhost python3[62848]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 03:04:34 localhost python3[62864]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 03:04:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.
Feb 23 03:04:35 localhost systemd[1]: tmp-crun.pXMgE2.mount: Deactivated successfully.
Feb 23 03:04:35 localhost podman[62881]: 2026-02-23 08:04:35.011683276 +0000 UTC m=+0.102988778 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 23 03:04:35 localhost python3[62880]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 03:04:35 localhost podman[62881]: 2026-02-23 08:04:35.200409715 +0000 UTC m=+0.291715207 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, release=1766032510, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, tcib_managed=true)
Feb 23 03:04:35 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully.
Feb 23 03:04:35 localhost python3[62923]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 03:04:35 localhost python3[62987]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833875.3801749-101030-11063582548766/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:36 localhost python3[63016]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833875.3801749-101030-11063582548766/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:36 localhost python3[63045]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833875.3801749-101030-11063582548766/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:37 localhost python3[63074]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833875.3801749-101030-11063582548766/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:38 localhost python3[63103]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833875.3801749-101030-11063582548766/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:38 localhost python3[63132]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833875.3801749-101030-11063582548766/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:39 localhost python3[63161]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833875.3801749-101030-11063582548766/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:39 localhost python3[63190]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833875.3801749-101030-11063582548766/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:40 localhost python3[63219]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771833875.3801749-101030-11063582548766/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:04:40 localhost python3[63235]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 23 03:04:40 localhost systemd[1]: Reloading.
Feb 23 03:04:40 localhost systemd-sysv-generator[63262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:04:40 localhost systemd-rc-local-generator[63258]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:04:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:04:40 localhost systemd[1]: Stopping User Manager for UID 0...
Feb 23 03:04:40 localhost systemd[61741]: Activating special unit Exit the Session...
Feb 23 03:04:40 localhost systemd[61741]: Stopped target Main User Target.
Feb 23 03:04:40 localhost systemd[61741]: Stopped target Basic System.
Feb 23 03:04:40 localhost systemd[61741]: Stopped target Paths.
Feb 23 03:04:40 localhost systemd[61741]: Stopped target Sockets.
Feb 23 03:04:40 localhost systemd[61741]: Stopped target Timers.
Feb 23 03:04:40 localhost systemd[61741]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 23 03:04:40 localhost systemd[61741]: Closed D-Bus User Message Bus Socket.
Feb 23 03:04:40 localhost systemd[61741]: Stopped Create User's Volatile Files and Directories.
Feb 23 03:04:40 localhost systemd[61741]: Removed slice User Application Slice.
Feb 23 03:04:40 localhost systemd[61741]: Reached target Shutdown.
Feb 23 03:04:40 localhost systemd[61741]: Finished Exit the Session.
Feb 23 03:04:40 localhost systemd[61741]: Reached target Exit the Session.
Feb 23 03:04:40 localhost systemd[1]: user@0.service: Deactivated successfully.
Feb 23 03:04:40 localhost systemd[1]: Stopped User Manager for UID 0.
Feb 23 03:04:40 localhost systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb 23 03:04:40 localhost systemd[1]: run-user-0.mount: Deactivated successfully.
Feb 23 03:04:40 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb 23 03:04:40 localhost systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb 23 03:04:40 localhost systemd[1]: Removed slice User Slice of UID 0.
Feb 23 03:04:41 localhost python3[63288]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 03:04:41 localhost systemd[1]: Reloading.
Feb 23 03:04:41 localhost systemd-sysv-generator[63320]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:04:41 localhost systemd-rc-local-generator[63315]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:04:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:04:41 localhost systemd[1]: Starting collectd container...
Feb 23 03:04:41 localhost systemd[1]: Started collectd container.
Feb 23 03:04:42 localhost python3[63356]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 03:04:42 localhost systemd[1]: Reloading.
Feb 23 03:04:42 localhost systemd-sysv-generator[63385]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:04:42 localhost systemd-rc-local-generator[63380]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:04:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:04:42 localhost systemd[1]: Starting iscsid container...
Feb 23 03:04:42 localhost systemd[1]: Started iscsid container.
Feb 23 03:04:43 localhost python3[63423]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 03:04:43 localhost systemd[1]: Reloading.
Feb 23 03:04:43 localhost systemd-rc-local-generator[63451]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:04:43 localhost systemd-sysv-generator[63457]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:04:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:04:43 localhost systemd[1]: Starting nova_virtlogd_wrapper container...
Feb 23 03:04:43 localhost systemd[1]: Started nova_virtlogd_wrapper container.
Feb 23 03:04:44 localhost python3[63490]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 03:04:44 localhost systemd[1]: Reloading.
Feb 23 03:04:44 localhost systemd-rc-local-generator[63514]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:04:44 localhost systemd-sysv-generator[63520]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:04:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:04:44 localhost systemd[1]: Starting nova_virtnodedevd container...
Feb 23 03:04:45 localhost tripleo-start-podman-container[63530]: Creating additional drop-in dependency for "nova_virtnodedevd" (50a7d2ed093fa42d73c6c19f205cf9a2203ca7a7045a875f2001640aa6517496)
Feb 23 03:04:45 localhost systemd[1]: Reloading.
Feb 23 03:04:45 localhost systemd-rc-local-generator[63588]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:04:45 localhost systemd-sysv-generator[63591]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:04:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:04:45 localhost systemd[1]: Started nova_virtnodedevd container.
Feb 23 03:04:45 localhost python3[63616]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 03:04:45 localhost systemd[1]: Reloading.
Feb 23 03:04:46 localhost systemd-sysv-generator[63645]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:04:46 localhost systemd-rc-local-generator[63642]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:04:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:04:46 localhost sshd[63654]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 03:04:46 localhost systemd[1]: Starting nova_virtproxyd container...
Feb 23 03:04:46 localhost tripleo-start-podman-container[63657]: Creating additional drop-in dependency for "nova_virtproxyd" (a7b4b542225a3b83a234f06a3641c8943c916ec8b505305def9df0ea969fba92)
Feb 23 03:04:46 localhost systemd[1]: Reloading.
Feb 23 03:04:46 localhost systemd-sysv-generator[63718]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:04:46 localhost systemd-rc-local-generator[63713]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:04:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:04:46 localhost systemd[1]: Started nova_virtproxyd container.
Feb 23 03:04:47 localhost python3[63741]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 03:04:47 localhost systemd[1]: Reloading.
Feb 23 03:04:47 localhost systemd-sysv-generator[63769]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:04:47 localhost systemd-rc-local-generator[63766]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:04:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:04:47 localhost systemd[1]: Starting nova_virtqemud container...
Feb 23 03:04:47 localhost tripleo-start-podman-container[63780]: Creating additional drop-in dependency for "nova_virtqemud" (d6722d98893ec366bf5c47c8cc52a74ae4caee33e3af6210be5df446077f6b04)
Feb 23 03:04:47 localhost systemd[1]: Reloading.
Feb 23 03:04:47 localhost systemd-sysv-generator[63842]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:04:47 localhost systemd-rc-local-generator[63837]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:04:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:04:48 localhost systemd[1]: Started nova_virtqemud container.
Feb 23 03:04:48 localhost python3[63865]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 03:04:48 localhost systemd[1]: Reloading.
Feb 23 03:04:49 localhost systemd-rc-local-generator[63889]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:04:49 localhost systemd-sysv-generator[63893]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:04:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:04:49 localhost systemd[1]: Starting nova_virtsecretd container...
Feb 23 03:04:49 localhost tripleo-start-podman-container[63905]: Creating additional drop-in dependency for "nova_virtsecretd" (324f8df060000db9c956f3e95ef75d74566bd4eb3c7012f06aca4d0fa9131e26)
Feb 23 03:04:49 localhost systemd[1]: Reloading.
Feb 23 03:04:49 localhost systemd-rc-local-generator[63957]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:04:49 localhost systemd-sysv-generator[63962]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:04:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:04:49 localhost systemd[1]: Started nova_virtsecretd container.
Feb 23 03:04:50 localhost python3[63988]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 03:04:50 localhost systemd[1]: Reloading.
Feb 23 03:04:50 localhost systemd-rc-local-generator[64013]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:04:50 localhost systemd-sysv-generator[64017]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:04:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:04:50 localhost systemd[1]: Starting nova_virtstoraged container...
Feb 23 03:04:50 localhost tripleo-start-podman-container[64028]: Creating additional drop-in dependency for "nova_virtstoraged" (2c504d86220256a9811f4c623f80bcbefa1d44d0424e1cd65152e2dcc898c4cc)
Feb 23 03:04:50 localhost systemd[1]: Reloading.
Feb 23 03:04:51 localhost systemd-sysv-generator[64088]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:04:51 localhost systemd-rc-local-generator[64085]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:04:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:04:51 localhost systemd[1]: Started nova_virtstoraged container.
Feb 23 03:04:51 localhost python3[64113]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 03:04:51 localhost systemd[1]: Reloading.
Feb 23 03:04:51 localhost systemd-sysv-generator[64138]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:04:51 localhost systemd-rc-local-generator[64135]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:04:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:04:52 localhost systemd[1]: Starting rsyslog container...
Feb 23 03:04:52 localhost systemd[1]: Started libcrun container.
Feb 23 03:04:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c79b6830874cb444625739070b7b53888237db094dedda599c30d49ec5bc247/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Feb 23 03:04:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c79b6830874cb444625739070b7b53888237db094dedda599c30d49ec5bc247/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Feb 23 03:04:52 localhost podman[64153]: 2026-02-23 08:04:52.325256559 +0000 UTC m=+0.137596472 container init 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, name=rhosp-rhel9/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, managed_by=tripleo_ansible, io.buildah.version=1.41.5, config_id=tripleo_step3, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:09Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:09Z, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=rsyslog, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, tcib_managed=true, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 rsyslog)
Feb 23 03:04:52 localhost podman[64153]: 2026-02-23 08:04:52.337062518 +0000 UTC m=+0.149402421 container start 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, build-date=2026-01-12T22:10:09Z, name=rhosp-rhel9/openstack-rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:09Z, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, container_name=rsyslog, com.redhat.component=openstack-rsyslog-container, release=1766032510, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 23 03:04:52 localhost podman[64153]: rsyslog
Feb 23 03:04:52 localhost systemd[1]: Started rsyslog container.
Feb 23 03:04:52 localhost systemd[1]: libpod-37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18.scope: Deactivated successfully.
Feb 23 03:04:52 localhost podman[64184]: 2026-02-23 08:04:52.506953974 +0000 UTC m=+0.055553424 container died 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z',
'/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2026-01-12T22:10:09Z, container_name=rsyslog, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:09Z, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, version=17.1.13, name=rhosp-rhel9/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:04:52 localhost podman[64184]: 2026-02-23 08:04:52.530122119 +0000 UTC m=+0.078721529 container cleanup 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, container_name=rsyslog, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, version=17.1.13, release=1766032510, name=rhosp-rhel9/openstack-rsyslog, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:09Z, build-date=2026-01-12T22:10:09Z, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:04:52 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:04:52 localhost podman[64201]: 2026-02-23 08:04:52.613069205 +0000 UTC m=+0.056098999 container cleanup 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:09Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, container_name=rsyslog, distribution-scope=public, name=rhosp-rhel9/openstack-rsyslog, config_id=tripleo_step3, build-date=2026-01-12T22:10:09Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, batch=17.1_20260112.1, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, 
cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.5, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog) Feb 23 03:04:52 localhost podman[64201]: rsyslog Feb 23 03:04:52 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 23 03:04:52 localhost python3[64227]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:04:52 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1. Feb 23 03:04:52 localhost systemd[1]: Stopped rsyslog container. Feb 23 03:04:52 localhost systemd[1]: Starting rsyslog container... Feb 23 03:04:53 localhost systemd[1]: Started libcrun container. 
Feb 23 03:04:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c79b6830874cb444625739070b7b53888237db094dedda599c30d49ec5bc247/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c79b6830874cb444625739070b7b53888237db094dedda599c30d49ec5bc247/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:53 localhost podman[64228]: 2026-02-23 08:04:53.017092193 +0000 UTC m=+0.093758567 container init 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, version=17.1.13, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, build-date=2026-01-12T22:10:09Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, tcib_managed=true) Feb 23 03:04:53 localhost podman[64228]: 2026-02-23 08:04:53.025122628 +0000 UTC m=+0.101789012 container start 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, build-date=2026-01-12T22:10:09Z, summary=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, config_id=tripleo_step3, com.redhat.component=openstack-rsyslog-container, version=17.1.13, name=rhosp-rhel9/openstack-rsyslog, architecture=x86_64) Feb 23 03:04:53 localhost podman[64228]: rsyslog Feb 23 03:04:53 localhost systemd[1]: Started rsyslog container. 
Feb 23 03:04:53 localhost systemd[1]: libpod-37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18.scope: Deactivated successfully. Feb 23 03:04:53 localhost podman[64251]: 2026-02-23 08:04:53.175455847 +0000 UTC m=+0.042359022 container died 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step3, version=17.1.13, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:09Z, com.redhat.component=openstack-rsyslog-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, build-date=2026-01-12T22:10:09Z, container_name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, vcs-type=git, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, url=https://www.redhat.com) Feb 23 03:04:53 localhost podman[64251]: 2026-02-23 08:04:53.19918644 +0000 UTC m=+0.066089555 container cleanup 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, architecture=x86_64, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:09Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, release=1766032510, 
build-date=2026-01-12T22:10:09Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=rsyslog, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp-rhel9/openstack-rsyslog, version=17.1.13) Feb 23 03:04:53 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:04:53 localhost systemd[1]: var-lib-containers-storage-overlay-2c79b6830874cb444625739070b7b53888237db094dedda599c30d49ec5bc247-merged.mount: Deactivated successfully. Feb 23 03:04:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18-userdata-shm.mount: Deactivated successfully. 
Feb 23 03:04:53 localhost podman[64265]: 2026-02-23 08:04:53.281130506 +0000 UTC m=+0.055011797 container cleanup 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=rsyslog, io.openshift.expose-services=, build-date=2026-01-12T22:10:09Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-rsyslog, architecture=x86_64, config_id=tripleo_step3, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.created=2026-01-12T22:10:09Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, release=1766032510, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog) Feb 23 03:04:53 localhost podman[64265]: rsyslog Feb 23 03:04:53 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 23 03:04:53 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2. Feb 23 03:04:53 localhost systemd[1]: Stopped rsyslog container. Feb 23 03:04:53 localhost systemd[1]: Starting rsyslog container... Feb 23 03:04:53 localhost systemd[1]: Started libcrun container. 
Feb 23 03:04:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c79b6830874cb444625739070b7b53888237db094dedda599c30d49ec5bc247/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c79b6830874cb444625739070b7b53888237db094dedda599c30d49ec5bc247/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:53 localhost podman[64325]: 2026-02-23 08:04:53.530315736 +0000 UTC m=+0.106025250 container init 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, config_id=tripleo_step3, name=rhosp-rhel9/openstack-rsyslog, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, container_name=rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, build-date=2026-01-12T22:10:09Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1) Feb 23 03:04:53 localhost podman[64325]: 2026-02-23 08:04:53.539614879 +0000 UTC m=+0.115324393 container start 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=rsyslog, managed_by=tripleo_ansible, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
batch=17.1_20260112.1, vcs-type=git, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-rsyslog-container, build-date=2026-01-12T22:10:09Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.created=2026-01-12T22:10:09Z, architecture=x86_64, name=rhosp-rhel9/openstack-rsyslog) Feb 23 03:04:53 localhost podman[64325]: rsyslog Feb 23 03:04:53 localhost systemd[1]: Started rsyslog container. 
Feb 23 03:04:53 localhost systemd[1]: libpod-37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18.scope: Deactivated successfully. Feb 23 03:04:53 localhost podman[64361]: 2026-02-23 08:04:53.68802733 +0000 UTC m=+0.033419789 container died 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.buildah.version=1.41.5, batch=17.1_20260112.1, 
release=1766032510, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:09Z, build-date=2026-01-12T22:10:09Z, architecture=x86_64, name=rhosp-rhel9/openstack-rsyslog, vcs-type=git) Feb 23 03:04:53 localhost podman[64361]: 2026-02-23 08:04:53.712819715 +0000 UTC m=+0.058212104 container cleanup 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, build-date=2026-01-12T22:10:09Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:09Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:04:53 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:04:53 localhost podman[64388]: 2026-02-23 08:04:53.806966674 +0000 UTC m=+0.056412041 container cleanup 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, version=17.1.13, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:09Z, distribution-scope=public, architecture=x86_64, vcs-type=git, config_id=tripleo_step3, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T22:10:09Z, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, name=rhosp-rhel9/openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', 
'/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog) Feb 23 03:04:53 localhost podman[64388]: rsyslog Feb 23 03:04:53 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 23 03:04:54 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3. Feb 23 03:04:54 localhost systemd[1]: Stopped rsyslog container. Feb 23 03:04:54 localhost systemd[1]: Starting rsyslog container... Feb 23 03:04:54 localhost systemd[1]: tmp-crun.pLJXp6.mount: Deactivated successfully. Feb 23 03:04:54 localhost systemd[1]: Started libcrun container. Feb 23 03:04:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c79b6830874cb444625739070b7b53888237db094dedda599c30d49ec5bc247/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c79b6830874cb444625739070b7b53888237db094dedda599c30d49ec5bc247/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:54 localhost podman[64431]: 2026-02-23 08:04:54.299980492 +0000 UTC m=+0.123371819 container init 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:09Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 
rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=rsyslog, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, io.openshift.expose-services=, architecture=x86_64, version=17.1.13, tcib_managed=true, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-rsyslog-container, release=1766032510) Feb 23 03:04:54 localhost podman[64431]: 2026-02-23 08:04:54.315393811 +0000 UTC m=+0.138785138 container start 
37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, vendor=Red Hat, Inc., container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, name=rhosp-rhel9/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.expose-services=, architecture=x86_64, 
distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:09Z, tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:09Z) Feb 23 03:04:54 localhost podman[64431]: rsyslog Feb 23 03:04:54 localhost systemd[1]: Started rsyslog container. Feb 23 03:04:54 localhost systemd[1]: libpod-37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18.scope: Deactivated successfully. Feb 23 03:04:54 localhost python3[64457]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005626465 step=3 update_config_hash_only=False Feb 23 03:04:54 localhost podman[64469]: 2026-02-23 08:04:54.449739543 +0000 UTC m=+0.034834102 container died 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, release=1766032510, version=17.1.13, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:09Z, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, batch=17.1_20260112.1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack TripleO Team, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, build-date=2026-01-12T22:10:09Z, name=rhosp-rhel9/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:04:54 localhost podman[64469]: 2026-02-23 08:04:54.47197145 +0000 UTC m=+0.057065969 container cleanup 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, 
name=rsyslog, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, config_id=tripleo_step3, release=1766032510, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, managed_by=tripleo_ansible, container_name=rsyslog, url=https://www.redhat.com, build-date=2026-01-12T22:10:09Z, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, version=17.1.13, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:04:54 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:04:54 localhost podman[64482]: 2026-02-23 08:04:54.536087353 +0000 UTC m=+0.042049381 container cleanup 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=rsyslog, vendor=Red Hat, Inc., version=17.1.13, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2026-01-12T22:10:09Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:09Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:04:54 localhost podman[64482]: rsyslog Feb 23 03:04:54 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 23 03:04:54 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4. Feb 23 03:04:54 localhost systemd[1]: Stopped rsyslog container. Feb 23 03:04:54 localhost systemd[1]: Starting rsyslog container... Feb 23 03:04:54 localhost systemd[1]: Started libcrun container. 
Feb 23 03:04:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c79b6830874cb444625739070b7b53888237db094dedda599c30d49ec5bc247/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c79b6830874cb444625739070b7b53888237db094dedda599c30d49ec5bc247/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 23 03:04:54 localhost podman[64495]: 2026-02-23 08:04:54.774589408 +0000 UTC m=+0.101457721 container init 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, name=rhosp-rhel9/openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:09Z, container_name=rsyslog, distribution-scope=public, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:09Z, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, batch=17.1_20260112.1) Feb 23 03:04:54 localhost podman[64495]: 2026-02-23 08:04:54.783897602 +0000 UTC m=+0.110765935 container start 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:09Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-rsyslog, io.buildah.version=1.41.5, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, container_name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, release=1766032510, build-date=2026-01-12T22:10:09Z, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com) Feb 23 03:04:54 localhost podman[64495]: rsyslog Feb 23 03:04:54 localhost systemd[1]: Started rsyslog container. 
Feb 23 03:04:54 localhost systemd[1]: libpod-37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18.scope: Deactivated successfully. Feb 23 03:04:54 localhost podman[64535]: 2026-02-23 08:04:54.945308249 +0000 UTC m=+0.053371527 container died 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, name=rhosp-rhel9/openstack-rsyslog, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2026-01-12T22:10:09Z, org.opencontainers.image.created=2026-01-12T22:10:09Z, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, container_name=rsyslog, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public) Feb 23 03:04:54 localhost podman[64535]: 2026-02-23 08:04:54.971996351 +0000 UTC m=+0.080059589 container cleanup 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:09Z, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, release=1766032510, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, name=rhosp-rhel9/openstack-rsyslog, url=https://www.redhat.com, vcs-type=git, 
io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:09Z) Feb 23 03:04:54 localhost python3[64533]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:04:54 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, 
status=1/FAILURE Feb 23 03:04:55 localhost podman[64548]: 2026-02-23 08:04:55.059692153 +0000 UTC m=+0.060044130 container cleanup 37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:09Z, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:09Z, io.buildah.version=1.41.5, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eeb65e0a12c94af5b1e666d55df1d6ee'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc.) Feb 23 03:04:55 localhost podman[64548]: rsyslog Feb 23 03:04:55 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 23 03:04:55 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5. Feb 23 03:04:55 localhost systemd[1]: Stopped rsyslog container. Feb 23 03:04:55 localhost systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly. Feb 23 03:04:55 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 23 03:04:55 localhost systemd[1]: Failed to start rsyslog container. Feb 23 03:04:55 localhost systemd[1]: tmp-crun.c2m9re.mount: Deactivated successfully. Feb 23 03:04:55 localhost systemd[1]: var-lib-containers-storage-overlay-2c79b6830874cb444625739070b7b53888237db094dedda599c30d49ec5bc247-merged.mount: Deactivated successfully. Feb 23 03:04:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37f7419acd23e9af05a35fca45f1c417540e960dddf8094f0a3198f438618a18-userdata-shm.mount: Deactivated successfully. 
Feb 23 03:04:55 localhost python3[64576]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True Feb 23 03:04:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:04:57 localhost podman[64577]: 2026-02-23 08:04:57.988179999 +0000 UTC m=+0.063679621 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:04:58 localhost podman[64577]: 2026-02-23 08:04:58.01939701 +0000 UTC m=+0.094896682 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 
'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.13, build-date=2026-01-12T22:10:15Z, container_name=collectd, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1766032510, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:04:58 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:04:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:04:59 localhost systemd[1]: tmp-crun.5yhJ8c.mount: Deactivated successfully. Feb 23 03:04:59 localhost podman[64597]: 2026-02-23 08:04:59.995559437 +0000 UTC m=+0.070942173 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 23 03:05:00 localhost podman[64597]: 2026-02-23 08:05:00.004287303 +0000 UTC m=+0.079670069 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1766032510, tcib_managed=true, summary=Red 
Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3) Feb 23 03:05:00 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:05:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:05:06 localhost podman[64677]: 2026-02-23 08:05:06.007973761 +0000 UTC m=+0.080837933 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, container_name=metrics_qdr, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Feb 23 03:05:06 localhost podman[64677]: 2026-02-23 08:05:06.174194644 +0000 UTC m=+0.247058786 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, tcib_managed=true, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, 
release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 23 03:05:06 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:05:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:05:28 localhost systemd[1]: tmp-crun.peBBsY.mount: Deactivated successfully. 
Feb 23 03:05:28 localhost podman[64721]: 2026-02-23 08:05:28.985673312 +0000 UTC m=+0.068070304 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, vcs-type=git, 
vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, container_name=collectd, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:05:28 localhost podman[64721]: 2026-02-23 08:05:28.995876493 +0000 UTC m=+0.078273485 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, release=1766032510, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=collectd, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3) Feb 23 03:05:29 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:05:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. 
Feb 23 03:05:30 localhost podman[64741]: 2026-02-23 08:05:30.995970519 +0000 UTC m=+0.071352045 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container) Feb 23 03:05:31 localhost podman[64741]: 2026-02-23 08:05:31.031112929 +0000 UTC m=+0.106494465 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1766032510, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5) Feb 23 03:05:31 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:05:34 localhost sshd[64759]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:05:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:05:37 localhost systemd[1]: tmp-crun.HKxpcp.mount: Deactivated successfully. 
Feb 23 03:05:37 localhost podman[64761]: 2026-02-23 08:05:37.008244221 +0000 UTC m=+0.085035671 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, release=1766032510, distribution-scope=public, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Feb 23 03:05:37 localhost podman[64761]: 2026-02-23 08:05:37.223808988 +0000 UTC m=+0.300600438 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:05:37 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:05:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:05:59 localhost systemd[1]: tmp-crun.wcA6I3.mount: Deactivated successfully. 
Feb 23 03:05:59 localhost podman[64790]: 2026-02-23 08:05:59.991979232 +0000 UTC m=+0.071962452 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, config_id=tripleo_step3, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:06:00 localhost podman[64790]: 2026-02-23 08:06:00.030794935 +0000 UTC m=+0.110778135 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, release=1766032510) Feb 23 03:06:00 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:06:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:06:01 localhost systemd[1]: tmp-crun.glrURy.mount: Deactivated successfully. Feb 23 03:06:02 localhost podman[64809]: 2026-02-23 08:06:01.999850195 +0000 UTC m=+0.076026307 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, io.openshift.expose-services=, io.buildah.version=1.41.5, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, container_name=iscsid) Feb 23 03:06:02 localhost podman[64809]: 2026-02-23 08:06:02.011766877 +0000 UTC m=+0.087942999 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.13, tcib_managed=true, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack 
Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3) Feb 23 03:06:02 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:06:07 localhost podman[64889]: 2026-02-23 08:06:07.992135056 +0000 UTC m=+0.073088027 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, config_id=tripleo_step1, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, vcs-type=git) Feb 23 03:06:08 localhost podman[64889]: 2026-02-23 08:06:08.155201623 +0000 UTC m=+0.236154554 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_id=tripleo_step1, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5) Feb 23 03:06:08 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:06:24 localhost sshd[64934]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:06:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. 
Feb 23 03:06:31 localhost podman[64936]: 2026-02-23 08:06:30.998867098 +0000 UTC m=+0.074346646 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, build-date=2026-01-12T22:10:15Z, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, url=https://www.redhat.com, batch=17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:06:31 localhost podman[64936]: 2026-02-23 08:06:31.010580024 +0000 UTC m=+0.086059592 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, release=1766032510, batch=17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Feb 23 03:06:31 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:06:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:06:33 localhost systemd[1]: tmp-crun.f38Sd6.mount: Deactivated successfully. Feb 23 03:06:33 localhost podman[64957]: 2026-02-23 08:06:33.003256603 +0000 UTC m=+0.079208013 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, tcib_managed=true, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-type=git, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.buildah.version=1.41.5) Feb 23 03:06:33 localhost podman[64957]: 2026-02-23 08:06:33.015831977 +0000 UTC m=+0.091783317 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., tcib_managed=true, build-date=2026-01-12T22:34:43Z) Feb 23 03:06:33 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:06:39 localhost podman[64976]: 2026-02-23 08:06:39.011230684 +0000 UTC m=+0.082611948 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.buildah.version=1.41.5, managed_by=tripleo_ansible) Feb 23 03:06:39 localhost podman[64976]: 2026-02-23 08:06:39.231042839 +0000 UTC m=+0.302424063 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, config_id=tripleo_step1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.13) Feb 23 03:06:39 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:07:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:07:02 localhost systemd[1]: tmp-crun.U1L4QR.mount: Deactivated successfully. 
Feb 23 03:07:02 localhost podman[65006]: 2026-02-23 08:07:02.009570439 +0000 UTC m=+0.089816387 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, container_name=collectd, io.buildah.version=1.41.5, url=https://www.redhat.com) Feb 23 03:07:02 localhost podman[65006]: 2026-02-23 08:07:02.01909521 +0000 UTC m=+0.099341198 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.13, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Feb 23 03:07:02 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:07:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:07:03 localhost podman[65027]: 2026-02-23 08:07:03.996452643 +0000 UTC m=+0.074426869 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, batch=17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack 
Platform 17.1 iscsid, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid) Feb 23 03:07:04 localhost podman[65027]: 2026-02-23 08:07:04.003871838 +0000 UTC m=+0.081846064 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3) Feb 23 03:07:04 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:07:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:07:10 localhost podman[65109]: 2026-02-23 08:07:10.003099672 +0000 UTC m=+0.080500563 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, version=17.1.13, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:07:10 localhost podman[65109]: 2026-02-23 08:07:10.20491809 +0000 UTC m=+0.282319041 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1766032510, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:07:10 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:07:13 localhost sshd[65153]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:07:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:07:33 localhost systemd[1]: tmp-crun.P8f6Dj.mount: Deactivated successfully. 
Feb 23 03:07:33 localhost podman[65155]: 2026-02-23 08:07:33.025248935 +0000 UTC m=+0.099143890 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.buildah.version=1.41.5, distribution-scope=public) Feb 23 03:07:33 localhost podman[65155]: 2026-02-23 08:07:33.060591262 +0000 UTC m=+0.134486257 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.5, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, container_name=collectd, url=https://www.redhat.com, tcib_managed=true, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:07:33 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:07:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:07:35 localhost podman[65176]: 2026-02-23 08:07:35.002239417 +0000 UTC m=+0.076888604 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, container_name=iscsid, vendor=Red Hat, Inc., batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1766032510, version=17.1.13, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:07:35 localhost podman[65176]: 2026-02-23 08:07:35.016889013 +0000 UTC m=+0.091538190 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team) Feb 23 03:07:35 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:07:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:07:41 localhost podman[65196]: 2026-02-23 08:07:41.014470104 +0000 UTC m=+0.087412864 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=) Feb 23 03:07:41 localhost podman[65196]: 2026-02-23 08:07:41.216839642 +0000 UTC m=+0.289782302 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, config_id=tripleo_step1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, container_name=metrics_qdr, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:07:41 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:08:02 localhost sshd[65225]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:08:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:08:04 localhost systemd[1]: tmp-crun.nflQ7z.mount: Deactivated successfully. 
Feb 23 03:08:04 localhost podman[65227]: 2026-02-23 08:08:04.010178179 +0000 UTC m=+0.083549145 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=collectd, tcib_managed=true, build-date=2026-01-12T22:10:15Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, release=1766032510, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com) Feb 23 03:08:04 localhost podman[65227]: 2026-02-23 08:08:04.024811087 +0000 UTC m=+0.098182073 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, version=17.1.13, vcs-type=git, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:08:04 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:08:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:08:05 localhost podman[65248]: 2026-02-23 08:08:05.987895015 +0000 UTC m=+0.064434652 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 
17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, version=17.1.13, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public) Feb 23 03:08:05 localhost podman[65248]: 2026-02-23 08:08:05.997572521 +0000 UTC m=+0.074112228 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1) Feb 23 03:08:06 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:08:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:08:11 localhost systemd[1]: tmp-crun.pRVJHI.mount: Deactivated successfully. 
Feb 23 03:08:12 localhost podman[65267]: 2026-02-23 08:08:12.002109281 +0000 UTC m=+0.081292647 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, version=17.1.13, vcs-type=git, container_name=metrics_qdr, release=1766032510, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.5) Feb 23 03:08:12 localhost podman[65267]: 2026-02-23 08:08:12.22181806 +0000 UTC m=+0.301001386 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, build-date=2026-01-12T22:10:14Z, 
io.openshift.expose-services=, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 23 03:08:12 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:08:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. 
Feb 23 03:08:35 localhost podman[65422]: 2026-02-23 08:08:35.009486022 +0000 UTC m=+0.084107303 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, version=17.1.13, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_id=tripleo_step3, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:08:35 localhost podman[65422]: 2026-02-23 08:08:35.048193396 +0000 UTC m=+0.122814677 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 23 03:08:35 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:08:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:08:36 localhost podman[65442]: 2026-02-23 08:08:36.968286288 +0000 UTC m=+0.052419973 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 iscsid, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.13) Feb 23 03:08:37 localhost podman[65442]: 2026-02-23 08:08:37.002776064 +0000 UTC m=+0.086909719 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1) Feb 23 03:08:37 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:08:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:08:43 localhost podman[65461]: 2026-02-23 08:08:43.00586786 +0000 UTC m=+0.082224816 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, release=1766032510, container_name=metrics_qdr, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd) Feb 23 03:08:43 localhost podman[65461]: 2026-02-23 08:08:43.201878914 +0000 UTC m=+0.278235860 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, io.buildah.version=1.41.5, container_name=metrics_qdr, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:08:43 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 03:08:50 localhost python3[65537]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:08:50 localhost python3[65582]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771834130.052515-107935-74480658678145/source _original_basename=tmpl5z1iqmr follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:08:50 localhost sshd[65583]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:08:52 localhost python3[65646]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:08:52 localhost python3[65689]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771834131.9132178-108029-127967973437728/source _original_basename=tmp2d_doz49 follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:08:53 localhost python3[65751]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:08:53 localhost python3[65794]: ansible-ansible.legacy.copy Invoked 
with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771834132.8280506-108078-38918881423759/source _original_basename=tmp3f8furr7 follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:08:54 localhost python3[65856]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:08:54 localhost python3[65899]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771834133.7901993-108167-236523112007827/source _original_basename=tmpxj47itnw follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:08:55 localhost python3[65929]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None Feb 23 03:08:55 localhost systemd[1]: Reloading. Feb 23 03:08:55 localhost systemd-rc-local-generator[65952]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:08:55 localhost systemd-sysv-generator[65958]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:08:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:08:55 localhost systemd[1]: Reloading. Feb 23 03:08:55 localhost systemd-sysv-generator[65997]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:08:55 localhost systemd-rc-local-generator[65992]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:08:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:08:56 localhost python3[66019]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 03:08:56 localhost systemd[1]: Reloading. Feb 23 03:08:56 localhost systemd-rc-local-generator[66044]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:08:56 localhost systemd-sysv-generator[66047]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:08:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:08:56 localhost systemd[1]: Reloading. 
Feb 23 03:08:56 localhost systemd-sysv-generator[66087]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:08:56 localhost systemd-rc-local-generator[66081]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:08:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:08:56 localhost systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m. Feb 23 03:08:57 localhost python3[66110]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 03:08:57 localhost systemd[1]: Reloading. Feb 23 03:08:57 localhost systemd-rc-local-generator[66134]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:08:57 localhost systemd-sysv-generator[66138]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:08:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 03:08:57 localhost sshd[66147]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:08:58 localhost python3[66195]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:08:58 localhost python3[66238]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771834137.761347-108302-71741763154276/source _original_basename=tmpyni4zuyn follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:08:58 localhost python3[66268]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 03:08:58 localhost systemd[1]: Reloading. Feb 23 03:08:59 localhost systemd-rc-local-generator[66289]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:08:59 localhost systemd-sysv-generator[66293]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:08:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:08:59 localhost systemd[1]: Reached target tripleo_nova_libvirt.target. 
Feb 23 03:08:59 localhost python3[66323]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 03:09:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 03:09:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 4695 writes, 21K keys, 4695 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4695 writes, 490 syncs, 9.58 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 214 writes, 468 keys, 214 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s#012Interval WAL: 214 writes, 107 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 03:09:01 localhost ansible-async_wrapper.py[66495]: Invoked with 137865080376 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771834140.7507327-108383-88950384166415/AnsiballZ_command.py _ Feb 23 03:09:01 localhost ansible-async_wrapper.py[66498]: Starting module and watcher Feb 23 03:09:01 localhost ansible-async_wrapper.py[66498]: Start watching 66499 (3600) Feb 23 03:09:01 localhost ansible-async_wrapper.py[66499]: Start module (66499) Feb 23 03:09:01 localhost ansible-async_wrapper.py[66495]: Return async_wrapper task started. 
Feb 23 03:09:01 localhost python3[66519]: ansible-ansible.legacy.async_status Invoked with jid=137865080376.66495 mode=status _async_dir=/tmp/.ansible_async Feb 23 03:09:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 03:09:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 4923 writes, 21K keys, 4923 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4923 writes, 558 syncs, 8.82 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 218 writes, 483 keys, 218 commit groups, 1.0 writes per commit group, ingest: 0.38 MB, 0.00 MB/s#012Interval WAL: 219 writes, 108 syncs, 2.03 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 03:09:04 localhost puppet-user[66516]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 23 03:09:04 localhost puppet-user[66516]: (file: /etc/puppet/hiera.yaml) Feb 23 03:09:04 localhost puppet-user[66516]: Warning: Undefined variable '::deploy_config_name'; Feb 23 03:09:04 localhost puppet-user[66516]: (file & line not available) Feb 23 03:09:05 localhost puppet-user[66516]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 23 03:09:05 localhost puppet-user[66516]: (file & line not available) Feb 23 03:09:05 localhost puppet-user[66516]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Feb 23 03:09:05 localhost puppet-user[66516]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. 
They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 23 03:09:05 localhost puppet-user[66516]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 23 03:09:05 localhost puppet-user[66516]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 23 03:09:05 localhost puppet-user[66516]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 23 03:09:05 localhost puppet-user[66516]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 23 03:09:05 localhost puppet-user[66516]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 23 03:09:05 localhost puppet-user[66516]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 23 03:09:05 localhost puppet-user[66516]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 23 03:09:05 localhost puppet-user[66516]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 23 03:09:05 localhost puppet-user[66516]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. 
at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 23 03:09:05 localhost puppet-user[66516]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 23 03:09:05 localhost puppet-user[66516]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 23 03:09:05 localhost puppet-user[66516]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 23 03:09:05 localhost puppet-user[66516]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 23 03:09:05 localhost puppet-user[66516]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 23 03:09:05 localhost puppet-user[66516]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 23 03:09:05 localhost puppet-user[66516]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 23 03:09:05 localhost puppet-user[66516]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Feb 23 03:09:05 localhost puppet-user[66516]: Notice: Compiled catalog for np0005626465.localdomain in environment production in 0.21 seconds Feb 23 03:09:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. 
Feb 23 03:09:06 localhost podman[66637]: 2026-02-23 08:09:06.000104278 +0000 UTC m=+0.075882631 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, container_name=collectd, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.expose-services=, 
config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:09:06 localhost podman[66637]: 2026-02-23 08:09:06.039877344 +0000 UTC m=+0.115655667 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-type=git, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:09:06 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:09:06 localhost ansible-async_wrapper.py[66498]: 66499 still running (3600) Feb 23 03:09:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:09:07 localhost podman[66657]: 2026-02-23 08:09:07.979930378 +0000 UTC m=+0.060341026 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.5, config_id=tripleo_step3, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, batch=17.1_20260112.1) Feb 23 03:09:08 localhost podman[66657]: 2026-02-23 08:09:08.018747235 +0000 UTC m=+0.099157853 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, architecture=x86_64, batch=17.1_20260112.1, release=1766032510, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:09:08 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:09:11 localhost ansible-async_wrapper.py[66498]: 66499 still running (3595) Feb 23 03:09:11 localhost python3[66751]: ansible-ansible.legacy.async_status Invoked with jid=137865080376.66495 mode=status _async_dir=/tmp/.ansible_async Feb 23 03:09:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:09:13 localhost podman[66762]: 2026-02-23 08:09:13.702956251 +0000 UTC m=+0.072242180 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, config_id=tripleo_step1, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git) Feb 23 03:09:13 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 23 03:09:13 localhost systemd[1]: Starting man-db-cache-update.service... Feb 23 03:09:13 localhost systemd[1]: Reloading. 
Feb 23 03:09:13 localhost podman[66762]: 2026-02-23 08:09:13.915029716 +0000 UTC m=+0.284315655 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, version=17.1.13, release=1766032510, config_id=tripleo_step1, url=https://www.redhat.com, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:09:14 localhost systemd-rc-local-generator[66910]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:09:14 localhost systemd-sysv-generator[66916]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:09:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:09:14 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:09:14 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 23 03:09:14 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 23 03:09:14 localhost systemd[1]: Finished man-db-cache-update.service. Feb 23 03:09:14 localhost systemd[1]: man-db-cache-update.service: Consumed 1.006s CPU time. Feb 23 03:09:14 localhost systemd[1]: run-rc3ee14b806394154b77dea115127b1b5.service: Deactivated successfully. 
Feb 23 03:09:15 localhost puppet-user[66516]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created Feb 23 03:09:15 localhost puppet-user[66516]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}ab7a496d28f12f7443647c03c346be9b21697d2db9b9e77484ced632e39ffc06' Feb 23 03:09:15 localhost puppet-user[66516]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd' Feb 23 03:09:15 localhost puppet-user[66516]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea' Feb 23 03:09:15 localhost puppet-user[66516]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to '{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97' Feb 23 03:09:15 localhost puppet-user[66516]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events Feb 23 03:09:16 localhost ansible-async_wrapper.py[66498]: 66499 still running (3590) Feb 23 03:09:20 localhost puppet-user[66516]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully Feb 23 03:09:20 localhost systemd[1]: Reloading. Feb 23 03:09:20 localhost systemd-sysv-generator[68100]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 03:09:20 localhost systemd-rc-local-generator[68096]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:09:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:09:21 localhost podman[68110]: Feb 23 03:09:21 localhost podman[68110]: 2026-02-23 08:09:21.041031339 +0000 UTC m=+0.082466293 container create 7a0e4fd43cd17b7bbe32aa5306632746ab8495896b601541b78c8743668e2448 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_benz, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, version=7) Feb 23 03:09:21 localhost podman[68110]: 2026-02-23 08:09:21.010028921 +0000 UTC m=+0.051463865 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 03:09:21 localhost systemd[1]: Started 
libpod-conmon-7a0e4fd43cd17b7bbe32aa5306632746ab8495896b601541b78c8743668e2448.scope. Feb 23 03:09:21 localhost systemd[1]: Started libcrun container. Feb 23 03:09:21 localhost systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon.... Feb 23 03:09:21 localhost podman[68110]: 2026-02-23 08:09:21.196515933 +0000 UTC m=+0.237950837 container init 7a0e4fd43cd17b7bbe32aa5306632746ab8495896b601541b78c8743668e2448 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_benz, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1770267347, architecture=x86_64, distribution-scope=public, version=7, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, name=rhceph, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 23 03:09:21 localhost systemd[1]: tmp-crun.RlaYCH.mount: Deactivated successfully. 
Feb 23 03:09:21 localhost podman[68110]: 2026-02-23 08:09:21.211460051 +0000 UTC m=+0.252894995 container start 7a0e4fd43cd17b7bbe32aa5306632746ab8495896b601541b78c8743668e2448 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_benz, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, ceph=True, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , RELEASE=main, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True) Feb 23 03:09:21 localhost podman[68110]: 2026-02-23 08:09:21.212399869 +0000 UTC m=+0.253834813 container attach 7a0e4fd43cd17b7bbe32aa5306632746ab8495896b601541b78c8743668e2448 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_benz, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, 
maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.component=rhceph-container, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, distribution-scope=public, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 03:09:21 localhost snmpd[68131]: Can't find directory of RPM packages Feb 23 03:09:21 localhost strange_benz[68128]: 167 167 Feb 23 03:09:21 localhost systemd[1]: libpod-7a0e4fd43cd17b7bbe32aa5306632746ab8495896b601541b78c8743668e2448.scope: Deactivated successfully. 
Feb 23 03:09:21 localhost podman[68110]: 2026-02-23 08:09:21.217258098 +0000 UTC m=+0.258693042 container died 7a0e4fd43cd17b7bbe32aa5306632746ab8495896b601541b78c8743668e2448 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_benz, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.openshift.expose-services=, release=1770267347, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_CLEAN=True, vcs-type=git, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 23 03:09:21 localhost snmpd[68131]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB Feb 23 03:09:21 localhost ansible-async_wrapper.py[66498]: 66499 still running (3585) Feb 23 03:09:21 localhost podman[68134]: 2026-02-23 08:09:21.313860371 +0000 UTC m=+0.085311189 container remove 7a0e4fd43cd17b7bbe32aa5306632746ab8495896b601541b78c8743668e2448 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_benz, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, summary=Provides the 
latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, GIT_CLEAN=True, version=7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, architecture=x86_64, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=) Feb 23 03:09:21 localhost systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon.. Feb 23 03:09:21 localhost systemd[1]: libpod-conmon-7a0e4fd43cd17b7bbe32aa5306632746ab8495896b601541b78c8743668e2448.scope: Deactivated successfully. Feb 23 03:09:21 localhost systemd[1]: Reloading. Feb 23 03:09:21 localhost systemd-sysv-generator[68192]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:09:21 localhost systemd-rc-local-generator[68187]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 23 03:09:21 localhost podman[68158]: Feb 23 03:09:21 localhost podman[68158]: 2026-02-23 08:09:21.550925781 +0000 UTC m=+0.088924961 container create 069a2b46500357c29fd2c8d0293dbf5ad712e343942a8d24d5f9363a4346d856 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_ardinghelli, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, ceph=True, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, GIT_BRANCH=main) Feb 23 03:09:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:09:21 localhost podman[68158]: 2026-02-23 08:09:21.521298694 +0000 UTC m=+0.059297894 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 03:09:21 localhost systemd[1]: var-lib-containers-storage-overlay-15a1acc288ac1e7071afaf1968987fdc3a90792fe131e858853895c3b229d994-merged.mount: Deactivated successfully. 
Feb 23 03:09:21 localhost systemd[1]: Started libpod-conmon-069a2b46500357c29fd2c8d0293dbf5ad712e343942a8d24d5f9363a4346d856.scope. Feb 23 03:09:21 localhost systemd[1]: Started libcrun container. Feb 23 03:09:21 localhost systemd[1]: Reloading. Feb 23 03:09:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5971c693215a081349e276e88a80eb4e8f7abb61981527d5918ec681fe533d4/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 23 03:09:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5971c693215a081349e276e88a80eb4e8f7abb61981527d5918ec681fe533d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 23 03:09:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5971c693215a081349e276e88a80eb4e8f7abb61981527d5918ec681fe533d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 23 03:09:21 localhost podman[68158]: 2026-02-23 08:09:21.750180824 +0000 UTC m=+0.288180014 container init 069a2b46500357c29fd2c8d0293dbf5ad712e343942a8d24d5f9363a4346d856 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_ardinghelli, io.openshift.tags=rhceph ceph, vcs-type=git, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, 
RELEASE=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , architecture=x86_64, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, version=7, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z) Feb 23 03:09:21 localhost podman[68158]: 2026-02-23 08:09:21.760328564 +0000 UTC m=+0.298327754 container start 069a2b46500357c29fd2c8d0293dbf5ad712e343942a8d24d5f9363a4346d856 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_ardinghelli, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, GIT_CLEAN=True, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, ceph=True, io.openshift.expose-services=, RELEASE=main, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git) Feb 23 03:09:21 localhost podman[68158]: 2026-02-23 08:09:21.760999555 +0000 UTC m=+0.298998745 container attach 069a2b46500357c29fd2c8d0293dbf5ad712e343942a8d24d5f9363a4346d856 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_ardinghelli, 
CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.42.2, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, version=7, name=rhceph, maintainer=Guillaume Abrioux , release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main) Feb 23 03:09:21 localhost systemd-sysv-generator[68242]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:09:21 localhost systemd-rc-local-generator[68236]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:09:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 03:09:22 localhost puppet-user[66516]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running' Feb 23 03:09:22 localhost puppet-user[66516]: Notice: Applied catalog in 16.78 seconds Feb 23 03:09:22 localhost puppet-user[66516]: Application: Feb 23 03:09:22 localhost puppet-user[66516]: Initial environment: production Feb 23 03:09:22 localhost puppet-user[66516]: Converged environment: production Feb 23 03:09:22 localhost puppet-user[66516]: Run mode: user Feb 23 03:09:22 localhost puppet-user[66516]: Changes: Feb 23 03:09:22 localhost puppet-user[66516]: Total: 8 Feb 23 03:09:22 localhost puppet-user[66516]: Events: Feb 23 03:09:22 localhost puppet-user[66516]: Success: 8 Feb 23 03:09:22 localhost puppet-user[66516]: Total: 8 Feb 23 03:09:22 localhost puppet-user[66516]: Resources: Feb 23 03:09:22 localhost puppet-user[66516]: Restarted: 1 Feb 23 03:09:22 localhost puppet-user[66516]: Changed: 8 Feb 23 03:09:22 localhost puppet-user[66516]: Out of sync: 8 Feb 23 03:09:22 localhost puppet-user[66516]: Total: 19 Feb 23 03:09:22 localhost puppet-user[66516]: Time: Feb 23 03:09:22 localhost puppet-user[66516]: Schedule: 0.00 Feb 23 03:09:22 localhost puppet-user[66516]: Augeas: 0.01 Feb 23 03:09:22 localhost puppet-user[66516]: File: 0.10 Feb 23 03:09:22 localhost puppet-user[66516]: Config retrieval: 0.31 Feb 23 03:09:22 localhost puppet-user[66516]: Service: 1.35 Feb 23 03:09:22 localhost puppet-user[66516]: Package: 10.09 Feb 23 03:09:22 localhost puppet-user[66516]: Transaction evaluation: 16.76 Feb 23 03:09:22 localhost puppet-user[66516]: Catalog application: 16.78 Feb 23 03:09:22 localhost puppet-user[66516]: Last run: 1771834162 Feb 23 03:09:22 localhost puppet-user[66516]: Exec: 5.07 Feb 23 03:09:22 localhost puppet-user[66516]: Filebucket: 0.00 Feb 23 03:09:22 localhost puppet-user[66516]: Total: 16.78 Feb 23 03:09:22 localhost puppet-user[66516]: Version: Feb 23 03:09:22 localhost puppet-user[66516]: Config: 1771834144 
Feb 23 03:09:22 localhost puppet-user[66516]: Puppet: 7.10.0 Feb 23 03:09:22 localhost ansible-async_wrapper.py[66499]: Module complete (66499) Feb 23 03:09:22 localhost python3[68269]: ansible-ansible.legacy.async_status Invoked with jid=137865080376.66495 mode=status _async_dir=/tmp/.ansible_async Feb 23 03:09:22 localhost keen_ardinghelli[68209]: [ Feb 23 03:09:22 localhost keen_ardinghelli[68209]: { Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "available": false, Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "ceph_device": false, Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "device_id": "QEMU_DVD-ROM_QM00001", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "lsm_data": {}, Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "lvs": [], Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "path": "/dev/sr0", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "rejected_reasons": [ Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "Insufficient space (<5GB)", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "Has a FileSystem" Feb 23 03:09:22 localhost keen_ardinghelli[68209]: ], Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "sys_api": { Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "actuators": null, Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "device_nodes": "sr0", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "human_readable_size": "482.00 KB", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "id_bus": "ata", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "model": "QEMU DVD-ROM", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "nr_requests": "2", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "partitions": {}, Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "path": "/dev/sr0", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "removable": "1", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "rev": "2.5+", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "ro": "0", Feb 23 03:09:22 localhost 
keen_ardinghelli[68209]: "rotational": "1", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "sas_address": "", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "sas_device_handle": "", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "scheduler_mode": "mq-deadline", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "sectors": 0, Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "sectorsize": "2048", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "size": 493568.0, Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "support_discard": "0", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "type": "disk", Feb 23 03:09:22 localhost keen_ardinghelli[68209]: "vendor": "QEMU" Feb 23 03:09:22 localhost keen_ardinghelli[68209]: } Feb 23 03:09:22 localhost keen_ardinghelli[68209]: } Feb 23 03:09:22 localhost keen_ardinghelli[68209]: ] Feb 23 03:09:22 localhost systemd[1]: libpod-069a2b46500357c29fd2c8d0293dbf5ad712e343942a8d24d5f9363a4346d856.scope: Deactivated successfully. Feb 23 03:09:22 localhost podman[68158]: 2026-02-23 08:09:22.745725175 +0000 UTC m=+1.283724335 container died 069a2b46500357c29fd2c8d0293dbf5ad712e343942a8d24d5f9363a4346d856 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_ardinghelli, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, release=1770267347, architecture=x86_64, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7) Feb 23 03:09:22 localhost systemd[1]: var-lib-containers-storage-overlay-e5971c693215a081349e276e88a80eb4e8f7abb61981527d5918ec681fe533d4-merged.mount: Deactivated successfully. Feb 23 03:09:22 localhost podman[70101]: 2026-02-23 08:09:22.841709781 +0000 UTC m=+0.083558106 container remove 069a2b46500357c29fd2c8d0293dbf5ad712e343942a8d24d5f9363a4346d856 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_ardinghelli, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, vendor=Red Hat, Inc., RELEASE=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 23 03:09:22 localhost systemd[1]: 
libpod-conmon-069a2b46500357c29fd2c8d0293dbf5ad712e343942a8d24d5f9363a4346d856.scope: Deactivated successfully. Feb 23 03:09:22 localhost python3[70100]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 23 03:09:23 localhost python3[70129]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 03:09:23 localhost python3[70179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:09:24 localhost python3[70197]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp91rl0t7c recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 23 03:09:24 localhost python3[70242]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 
23 03:09:25 localhost python3[70345]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Feb 23 03:09:26 localhost python3[70364]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:26 localhost ansible-async_wrapper.py[66498]: Done in kid B. 
Feb 23 03:09:27 localhost python3[70396]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 03:09:27 localhost python3[70446]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:09:28 localhost python3[70464]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:28 localhost python3[70526]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:09:28 localhost python3[70544]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:29 localhost python3[70606]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:09:29 
localhost python3[70624]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:30 localhost python3[70686]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:09:30 localhost python3[70704]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:31 localhost python3[70734]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 03:09:31 localhost systemd[1]: Reloading. Feb 23 03:09:31 localhost systemd-sysv-generator[70762]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 03:09:31 localhost systemd-rc-local-generator[70758]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:09:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:09:31 localhost python3[70820]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:09:32 localhost python3[70838]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:32 localhost python3[70900]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:09:33 localhost python3[70918]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:33 localhost python3[70948]: ansible-systemd Invoked with name=netns-placeholder 
state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 03:09:33 localhost systemd[1]: Reloading. Feb 23 03:09:33 localhost systemd-sysv-generator[70978]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:09:33 localhost systemd-rc-local-generator[70973]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:09:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:09:33 localhost systemd[1]: Starting Create netns directory... Feb 23 03:09:33 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 23 03:09:33 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 23 03:09:33 localhost systemd[1]: Finished Create netns directory. Feb 23 03:09:34 localhost python3[71007]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Feb 23 03:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:09:36 localhost systemd[1]: tmp-crun.KB8Lkx.mount: Deactivated successfully. 
Feb 23 03:09:36 localhost podman[71067]: 2026-02-23 08:09:36.358310328 +0000 UTC m=+0.104016272 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.5, vcs-type=git, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, 
cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, tcib_managed=true, managed_by=tripleo_ansible, release=1766032510, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:09:36 localhost podman[71067]: 2026-02-23 08:09:36.374702129 +0000 UTC m=+0.120408103 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, release=1766032510, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd) Feb 23 03:09:36 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:09:36 localhost python3[71068]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Feb 23 03:09:36 localhost podman[71247]: 2026-02-23 08:09:36.736865874 +0000 UTC m=+0.061271705 container create ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.5, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Feb 23 03:09:36 localhost podman[71246]: 2026-02-23 08:09:36.765168899 +0000 UTC m=+0.090701545 container create 23eab3bb08665481b398032ed92fc31d42796abb8e70ee98d74e5eb32c217e9d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, release=1766032510, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, container_name=configure_cms_options, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:09:36 localhost systemd[1]: Started libpod-conmon-ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.scope. 
Feb 23 03:09:36 localhost podman[71244]: 2026-02-23 08:09:36.78057816 +0000 UTC m=+0.110165780 container create 26db5f5c14488f21323e61b58b029c32a0588170c230aa002bdb26ce89fe0295 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, distribution-scope=public, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, architecture=x86_64, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, container_name=nova_libvirt_init_secret, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step4, tcib_managed=true, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:09:36 localhost podman[71253]: 2026-02-23 08:09:36.792937288 +0000 UTC m=+0.109239271 container create a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, 
name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com) Feb 23 03:09:36 localhost systemd[1]: Started libcrun container. Feb 23 03:09:36 localhost systemd[1]: Started libpod-conmon-23eab3bb08665481b398032ed92fc31d42796abb8e70ee98d74e5eb32c217e9d.scope. 
Feb 23 03:09:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9e45d9514da84d6daa9fa4a7ce739f1d3aea192977320b7cd282ec5c552f31/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Feb 23 03:09:36 localhost podman[71244]: 2026-02-23 08:09:36.703128212 +0000 UTC m=+0.032715822 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 23 03:09:36 localhost podman[71247]: 2026-02-23 08:09:36.705698731 +0000 UTC m=+0.030104572 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Feb 23 03:09:36 localhost systemd[1]: Started libcrun container. Feb 23 03:09:36 localhost systemd[1]: Started libpod-conmon-a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.scope. Feb 23 03:09:36 localhost podman[71246]: 2026-02-23 08:09:36.714567202 +0000 UTC m=+0.040099858 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 23 03:09:36 localhost podman[71253]: 2026-02-23 08:09:36.716619974 +0000 UTC m=+0.032922017 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Feb 23 03:09:36 localhost podman[71306]: 2026-02-23 08:09:36.820187831 +0000 UTC m=+0.069812285 container create 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, vcs-type=git, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:09:36 localhost systemd[1]: Started libcrun container. 
Feb 23 03:09:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/143338a49c9acf9635b1502d1345d3ee14aaf761c2ec280ecc67cd9c1fe8adb0/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Feb 23 03:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:09:36 localhost podman[71247]: 2026-02-23 08:09:36.825521434 +0000 UTC m=+0.149927275 container init ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
io.buildah.version=1.41.5, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, version=17.1.13, url=https://www.redhat.com, release=1766032510, tcib_managed=true) Feb 23 03:09:36 localhost systemd[1]: Started libpod-conmon-8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.scope. Feb 23 03:09:36 localhost systemd[1]: Started libpod-conmon-26db5f5c14488f21323e61b58b029c32a0588170c230aa002bdb26ce89fe0295.scope. Feb 23 03:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:09:36 localhost podman[71247]: 2026-02-23 08:09:36.863059803 +0000 UTC m=+0.187465624 container start ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, version=17.1.13, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z) Feb 23 03:09:36 localhost systemd[1]: Started libcrun container. Feb 23 03:09:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1d734fa5d6fa4a105819fcbe3ae6278295f7115eb830775cb18f638504a55ec/merged/var/log/containers supports timestamps until 2038 (0x7fffffff) Feb 23 03:09:36 localhost systemd[1]: Started libcrun container. 
Feb 23 03:09:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4778b0ef4c2e8c866402511ee6607630635a75600d60e96241cc88865a886057/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:09:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4778b0ef4c2e8c866402511ee6607630635a75600d60e96241cc88865a886057/merged/etc/nova supports timestamps until 2038 (0x7fffffff) Feb 23 03:09:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4778b0ef4c2e8c866402511ee6607630635a75600d60e96241cc88865a886057/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:09:36 localhost python3[71068]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=209b2ea170f45545f80720644a8137d3 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Feb 23 03:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. 
Feb 23 03:09:36 localhost podman[71253]: 2026-02-23 08:09:36.875166872 +0000 UTC m=+0.191468915 container init a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20260112.1, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, distribution-scope=public) Feb 23 03:09:36 localhost podman[71246]: 2026-02-23 08:09:36.875802832 +0000 UTC m=+0.201335468 container init 23eab3bb08665481b398032ed92fc31d42796abb8e70ee98d74e5eb32c217e9d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, release=1766032510, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.5, architecture=x86_64, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 
17.1 ovn-controller, config_id=tripleo_step4, container_name=configure_cms_options, managed_by=tripleo_ansible, vcs-type=git) Feb 23 03:09:36 localhost podman[71246]: 2026-02-23 08:09:36.885387305 +0000 UTC m=+0.210919951 container start 23eab3bb08665481b398032ed92fc31d42796abb8e70ee98d74e5eb32c217e9d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, container_name=configure_cms_options, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, release=1766032510, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, tcib_managed=true, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git) Feb 23 03:09:36 localhost podman[71246]: 2026-02-23 08:09:36.885599722 +0000 UTC m=+0.211132358 container attach 23eab3bb08665481b398032ed92fc31d42796abb8e70ee98d74e5eb32c217e9d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=configure_cms_options, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'command': 
['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, vcs-type=git, release=1766032510, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:09:36 localhost podman[71306]: 2026-02-23 08:09:36.793183776 +0000 UTC m=+0.042808240 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Feb 23 03:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. 
Feb 23 03:09:36 localhost podman[71253]: 2026-02-23 08:09:36.90845274 +0000 UTC m=+0.224754723 container start a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, batch=17.1_20260112.1, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com) Feb 23 03:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:09:36 localhost python3[71068]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=209b2ea170f45545f80720644a8137d3 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Feb 23 03:09:36 localhost podman[71306]: 2026-02-23 08:09:36.920727955 +0000 UTC m=+0.170352409 container init 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, container_name=logrotate_crond, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
managed_by=tripleo_ansible, release=1766032510, vcs-type=git, io.buildah.version=1.41.5, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64) Feb 23 03:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:09:36 localhost podman[71306]: 2026-02-23 08:09:36.956314064 +0000 UTC m=+0.205938518 container start 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64) Feb 23 03:09:36 localhost podman[71344]: 2026-02-23 08:09:36.957721177 +0000 UTC m=+0.090421246 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20260112.1, release=1766032510, 
architecture=x86_64, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=) Feb 23 03:09:36 localhost python3[71068]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS 
--env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume 
/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Feb 23 03:09:36 localhost ovs-vsctl[71417]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . external_ids ovn-cms-options Feb 23 03:09:36 localhost systemd[1]: libpod-23eab3bb08665481b398032ed92fc31d42796abb8e70ee98d74e5eb32c217e9d.scope: Deactivated successfully. Feb 23 03:09:36 localhost podman[71244]: 2026-02-23 08:09:36.983128394 +0000 UTC m=+0.312716004 container init 26db5f5c14488f21323e61b58b029c32a0588170c230aa002bdb26ce89fe0295 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-libvirt, tcib_managed=true, vcs-type=git, container_name=nova_libvirt_init_secret, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, release=1766032510, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, distribution-scope=public, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 
'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 23 03:09:37 localhost podman[71344]: 2026-02-23 08:09:37.005689894 +0000 UTC m=+0.138389973 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64) Feb 23 03:09:37 localhost 
podman[71344]: unhealthy Feb 23 03:09:37 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:09:37 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Failed with result 'exit-code'. Feb 23 03:09:37 localhost podman[71244]: 2026-02-23 08:09:37.051247347 +0000 UTC m=+0.380834957 container start 26db5f5c14488f21323e61b58b029c32a0588170c230aa002bdb26ce89fe0295 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, org.opencontainers.image.created=2026-01-12T23:31:49Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, release=1766032510, container_name=nova_libvirt_init_secret, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, url=https://www.redhat.com, architecture=x86_64, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': 
{'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, vendor=Red Hat, Inc., build-date=2026-01-12T23:31:49Z) Feb 23 03:09:37 localhost podman[71244]: 2026-02-23 08:09:37.051502505 +0000 UTC m=+0.381090115 container attach 26db5f5c14488f21323e61b58b029c32a0588170c230aa002bdb26ce89fe0295 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, batch=17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.5, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, version=17.1.13, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_libvirt_init_secret, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.created=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., release=1766032510) Feb 23 03:09:37 localhost podman[71246]: 2026-02-23 08:09:37.078252103 +0000 UTC m=+0.403784759 container died 
23eab3bb08665481b398032ed92fc31d42796abb8e70ee98d74e5eb32c217e9d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, batch=17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 
ovn-controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, build-date=2026-01-12T22:36:40Z, release=1766032510, vcs-type=git, io.openshift.expose-services=, container_name=configure_cms_options, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:09:37 localhost systemd[1]: libpod-26db5f5c14488f21323e61b58b029c32a0588170c230aa002bdb26ce89fe0295.scope: Deactivated successfully. Feb 23 03:09:37 localhost podman[71244]: 2026-02-23 08:09:37.088204327 +0000 UTC m=+0.417791937 container died 26db5f5c14488f21323e61b58b029c32a0588170c230aa002bdb26ce89fe0295 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, version=17.1.13, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.buildah.version=1.41.5, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, build-date=2026-01-12T23:31:49Z, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_libvirt_init_secret, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt) Feb 23 03:09:37 localhost podman[71425]: 2026-02-23 08:09:37.109661403 +0000 UTC m=+0.118254937 container cleanup 23eab3bb08665481b398032ed92fc31d42796abb8e70ee98d74e5eb32c217e9d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, io.buildah.version=1.41.5, container_name=configure_cms_options, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, 
maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, release=1766032510, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, vcs-type=git, build-date=2026-01-12T22:36:40Z, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 23 03:09:37 localhost podman[71397]: 2026-02-23 08:09:37.009413938 +0000 UTC m=+0.059566343 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, container_name=logrotate_crond, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container) Feb 23 03:09:37 localhost systemd[1]: libpod-conmon-23eab3bb08665481b398032ed92fc31d42796abb8e70ee98d74e5eb32c217e9d.scope: Deactivated successfully. Feb 23 03:09:37 localhost python3[71068]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1771832380 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi Feb 23 03:09:37 localhost podman[71487]: 2026-02-23 08:09:37.142442805 +0000 UTC m=+0.043798970 container cleanup 26db5f5c14488f21323e61b58b029c32a0588170c230aa002bdb26ce89fe0295 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, release=1766032510, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:31:49Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, container_name=nova_libvirt_init_secret, version=17.1.13, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, batch=17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, build-date=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git) Feb 23 03:09:37 localhost podman[71397]: 2026-02-23 08:09:37.142647852 +0000 UTC m=+0.192800257 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, container_name=logrotate_crond, release=1766032510, config_id=tripleo_step4, url=https://www.redhat.com) Feb 23 03:09:37 localhost systemd[1]: libpod-conmon-26db5f5c14488f21323e61b58b029c32a0588170c230aa002bdb26ce89fe0295.scope: Deactivated successfully. Feb 23 03:09:37 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:09:37 localhost python3[71068]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=d8e86b11aed37635c57249fefb951044 --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack Feb 23 03:09:37 localhost podman[71368]: 2026-02-23 08:09:37.280712804 +0000 UTC m=+0.374772032 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, version=17.1.13, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4) Feb 23 03:09:37 localhost podman[71368]: 2026-02-23 08:09:37.289595785 +0000 UTC m=+0.383655043 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, version=17.1.13, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, release=1766032510, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:09:37 localhost podman[71368]: unhealthy Feb 23 03:09:37 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:09:37 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Failed with result 'exit-code'. Feb 23 03:09:37 localhost podman[71630]: 2026-02-23 08:09:37.488943431 +0000 UTC m=+0.064619597 container create 22e90105791e5feb5b851736f9db91ab9da033a48e2126fa40df824ffce8589b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, vcs-type=git, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=setup_ovs_manager, tcib_managed=true) Feb 23 03:09:37 localhost systemd[1]: Started libpod-conmon-22e90105791e5feb5b851736f9db91ab9da033a48e2126fa40df824ffce8589b.scope. Feb 23 03:09:37 localhost systemd[1]: Started libcrun container. 
Feb 23 03:09:37 localhost podman[71630]: 2026-02-23 08:09:37.555565248 +0000 UTC m=+0.131241364 container init 22e90105791e5feb5b851736f9db91ab9da033a48e2126fa40df824ffce8589b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=setup_ovs_manager, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, version=17.1.13, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc.) Feb 23 03:09:37 localhost podman[71630]: 2026-02-23 08:09:37.45685008 +0000 UTC m=+0.032526276 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Feb 23 03:09:37 localhost podman[71646]: 2026-02-23 08:09:37.56773588 +0000 UTC m=+0.110561601 container create 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, build-date=2026-01-12T23:32:04Z, release=1766032510, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.buildah.version=1.41.5) Feb 23 03:09:37 localhost systemd[1]: Started libpod-conmon-4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.scope. Feb 23 03:09:37 localhost systemd[1]: Started libcrun container. 
Feb 23 03:09:37 localhost podman[71646]: 2026-02-23 08:09:37.51409308 +0000 UTC m=+0.056918831 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 23 03:09:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8308c71cbe12c222d2d2f2aba4033b465662c05472c3bcfaab28cf167545b8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 23 03:09:37 localhost podman[71630]: 2026-02-23 08:09:37.621270567 +0000 UTC m=+0.196946733 container start 22e90105791e5feb5b851736f9db91ab9da033a48e2126fa40df824ffce8589b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, tcib_managed=true, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, release=1766032510, config_id=tripleo_step4, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:09:37 localhost podman[71630]: 2026-02-23 08:09:37.621588616 +0000 UTC m=+0.197264832 container attach 22e90105791e5feb5b851736f9db91ab9da033a48e2126fa40df824ffce8589b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, container_name=setup_ovs_manager, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, 
maintainer=OpenStack TripleO Team) Feb 23 03:09:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:09:37 localhost podman[71646]: 2026-02-23 08:09:37.64458092 +0000 UTC m=+0.187406671 container init 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1766032510, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:09:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:09:37 localhost podman[71646]: 2026-02-23 08:09:37.67857235 +0000 UTC m=+0.221398071 container start 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:09:37 localhost python3[71068]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d8e86b11aed37635c57249fefb951044 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 23 03:09:37 localhost podman[71676]: 2026-02-23 08:09:37.770872082 +0000 UTC m=+0.084181135 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, 
managed_by=tripleo_ansible, container_name=nova_migration_target, url=https://www.redhat.com, vcs-type=git, version=17.1.13, release=1766032510, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
maintainer=OpenStack TripleO Team) Feb 23 03:09:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:09:38 localhost podman[71676]: 2026-02-23 08:09:38.101385679 +0000 UTC m=+0.414694812 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, container_name=nova_migration_target, version=17.1.13, managed_by=tripleo_ansible, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:09:38 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. 
Feb 23 03:09:38 localhost podman[71732]: 2026-02-23 08:09:38.149605263 +0000 UTC m=+0.051739373 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.5, architecture=x86_64, container_name=iscsid, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container) Feb 23 03:09:38 localhost podman[71732]: 2026-02-23 08:09:38.161040873 +0000 UTC m=+0.063175013 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, container_name=iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1766032510, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, version=17.1.13) Feb 23 03:09:38 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:09:38 localhost kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure Feb 23 03:09:39 localhost sshd[71807]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:09:40 localhost ovs-vsctl[71867]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager Feb 23 03:09:40 localhost systemd[1]: libpod-22e90105791e5feb5b851736f9db91ab9da033a48e2126fa40df824ffce8589b.scope: Deactivated successfully. 
Feb 23 03:09:40 localhost systemd[1]: libpod-22e90105791e5feb5b851736f9db91ab9da033a48e2126fa40df824ffce8589b.scope: Consumed 2.908s CPU time. Feb 23 03:09:40 localhost podman[71630]: 2026-02-23 08:09:40.578617549 +0000 UTC m=+3.154293745 container died 22e90105791e5feb5b851736f9db91ab9da033a48e2126fa40df824ffce8589b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.buildah.version=1.41.5, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=setup_ovs_manager, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1766032510) Feb 23 03:09:40 localhost systemd[1]: tmp-crun.V09p9O.mount: Deactivated successfully. Feb 23 03:09:40 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22e90105791e5feb5b851736f9db91ab9da033a48e2126fa40df824ffce8589b-userdata-shm.mount: Deactivated successfully. Feb 23 03:09:40 localhost systemd[1]: var-lib-containers-storage-overlay-eb56d2be600abd91f7cb1c2bbbc694a44b833ed172b1fd20eb26910c45bd736c-merged.mount: Deactivated successfully. 
Feb 23 03:09:40 localhost podman[71868]: 2026-02-23 08:09:40.657607934 +0000 UTC m=+0.070316081 container cleanup 22e90105791e5feb5b851736f9db91ab9da033a48e2126fa40df824ffce8589b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, vcs-type=git, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, container_name=setup_ovs_manager, tcib_managed=true, io.buildah.version=1.41.5, config_id=tripleo_step4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64) Feb 23 03:09:40 localhost systemd[1]: libpod-conmon-22e90105791e5feb5b851736f9db91ab9da033a48e2126fa40df824ffce8589b.scope: Deactivated successfully. Feb 23 03:09:40 localhost python3[71068]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1771832380 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771832380'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata Feb 23 03:09:41 localhost podman[71986]: 2026-02-23 08:09:41.106423748 +0000 UTC m=+0.082685679 container create 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, vcs-type=git, vendor=Red Hat, Inc., 
batch=17.1_20260112.1, release=1766032510, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container) Feb 23 03:09:41 localhost podman[71992]: 2026-02-23 08:09:41.128355789 +0000 UTC m=+0.088729204 container create 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.13, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, build-date=2026-01-12T22:56:19Z, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-type=git) Feb 23 03:09:41 localhost systemd[1]: Started libpod-conmon-393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.scope. Feb 23 03:09:41 localhost systemd[1]: Started libpod-conmon-71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.scope. Feb 23 03:09:41 localhost podman[71986]: 2026-02-23 08:09:41.058803872 +0000 UTC m=+0.035065863 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 23 03:09:41 localhost systemd[1]: Started libcrun container. 
Feb 23 03:09:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13f697d185a99a8a222763d80bff47873cb03d19648b0bcc96a8114f853f9bd/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Feb 23 03:09:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13f697d185a99a8a222763d80bff47873cb03d19648b0bcc96a8114f853f9bd/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff) Feb 23 03:09:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13f697d185a99a8a222763d80bff47873cb03d19648b0bcc96a8114f853f9bd/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff) Feb 23 03:09:41 localhost systemd[1]: Started libcrun container. Feb 23 03:09:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add23e82cdf75351add1828e63e379f4d01e1b7178f4868cd8802b773b6e9f43/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff) Feb 23 03:09:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add23e82cdf75351add1828e63e379f4d01e1b7178f4868cd8802b773b6e9f43/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 03:09:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add23e82cdf75351add1828e63e379f4d01e1b7178f4868cd8802b773b6e9f43/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 03:09:41 localhost podman[71992]: 2026-02-23 08:09:41.076155443 +0000 UTC m=+0.036528828 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Feb 23 03:09:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:09:41 localhost podman[71986]: 2026-02-23 08:09:41.199301919 +0000 UTC m=+0.175563880 container init 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_controller, io.openshift.expose-services=, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller) Feb 23 03:09:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:09:41 localhost podman[71992]: 2026-02-23 08:09:41.202900279 +0000 UTC m=+0.163273644 container init 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, tcib_managed=true, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:09:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:09:41 localhost systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring. 
Feb 23 03:09:41 localhost podman[71986]: 2026-02-23 08:09:41.242625873 +0000 UTC m=+0.218887804 container start 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller) Feb 23 03:09:41 localhost python3[71068]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 23 03:09:41 localhost systemd[1]: Created slice User Slice of UID 0. Feb 23 03:09:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:09:41 localhost podman[71992]: 2026-02-23 08:09:41.271396253 +0000 UTC m=+0.231769648 container start 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, tcib_managed=true, vcs-type=git, release=1766032510, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z) Feb 23 03:09:41 localhost systemd[1]: Starting User Runtime Directory /run/user/0... 
Feb 23 03:09:41 localhost python3[71068]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=c586877f5206c4d4c0260095c70d518d --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True --volume 
/etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Feb 23 03:09:41 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Feb 23 03:09:41 localhost systemd[1]: Starting User Manager for UID 0... 
Feb 23 03:09:41 localhost podman[72031]: 2026-02-23 08:09:41.342491437 +0000 UTC m=+0.093425428 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, config_id=tripleo_step4, architecture=x86_64, release=1766032510, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 23 03:09:41 localhost podman[72031]: 2026-02-23 08:09:41.385839892 +0000 UTC m=+0.136773933 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:09:41 localhost podman[72031]: unhealthy Feb 23 03:09:41 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:09:41 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:09:41 localhost systemd[72058]: Queued start job for default target Main User Target. Feb 23 03:09:41 localhost systemd[72058]: Created slice User Application Slice. Feb 23 03:09:41 localhost systemd[72058]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Feb 23 03:09:41 localhost systemd[72058]: Started Daily Cleanup of User's Temporary Directories. Feb 23 03:09:41 localhost systemd[72058]: Reached target Paths. Feb 23 03:09:41 localhost systemd[72058]: Reached target Timers. Feb 23 03:09:41 localhost systemd[72058]: Starting D-Bus User Message Bus Socket... Feb 23 03:09:41 localhost systemd[72058]: Starting Create User's Volatile Files and Directories... Feb 23 03:09:41 localhost systemd[72058]: Listening on D-Bus User Message Bus Socket. Feb 23 03:09:41 localhost systemd[72058]: Reached target Sockets. Feb 23 03:09:41 localhost systemd[72058]: Finished Create User's Volatile Files and Directories. Feb 23 03:09:41 localhost systemd[72058]: Reached target Basic System. 
Feb 23 03:09:41 localhost systemd[72058]: Reached target Main User Target. Feb 23 03:09:41 localhost systemd[72058]: Startup finished in 127ms. Feb 23 03:09:41 localhost systemd[1]: Started User Manager for UID 0. Feb 23 03:09:41 localhost systemd[1]: Started Session c9 of User root. Feb 23 03:09:41 localhost podman[72043]: 2026-02-23 08:09:41.480305001 +0000 UTC m=+0.200275215 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, batch=17.1_20260112.1, tcib_managed=true) Feb 23 03:09:41 localhost podman[72043]: 2026-02-23 08:09:41.490138401 +0000 UTC m=+0.210108595 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, architecture=x86_64, 
batch=17.1_20260112.1, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, io.buildah.version=1.41.5, tcib_managed=true, version=17.1.13, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 23 03:09:41 localhost podman[72043]: unhealthy Feb 23 03:09:41 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:09:41 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:09:41 localhost systemd[1]: session-c9.scope: Deactivated successfully. Feb 23 03:09:41 localhost kernel: device br-int entered promiscuous mode Feb 23 03:09:41 localhost NetworkManager[5987]: [1771834181.6186] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11) Feb 23 03:09:41 localhost systemd-udevd[72140]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 03:09:41 localhost python3[72160]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:42 localhost python3[72176]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:42 localhost python3[72192]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:42 localhost python3[72208]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:42 localhost python3[72224]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:43 localhost python3[72243]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:43 localhost kernel: device genev_sys_6081 entered promiscuous mode Feb 23 03:09:43 localhost NetworkManager[5987]: [1771834183.4886] device (genev_sys_6081): carrier: link connected Feb 23 03:09:43 localhost NetworkManager[5987]: [1771834183.4889] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/12) Feb 23 03:09:43 localhost python3[72260]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 03:09:43 localhost python3[72281]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 03:09:43 localhost python3[72299]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 03:09:44 localhost python3[72315]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False 
get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 03:09:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:09:44 localhost podman[72332]: 2026-02-23 08:09:44.433870357 +0000 UTC m=+0.088862818 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr) Feb 23 03:09:44 localhost python3[72331]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 03:09:44 localhost podman[72332]: 2026-02-23 08:09:44.62299026 +0000 UTC m=+0.277982721 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, managed_by=tripleo_ansible, version=17.1.13, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd) Feb 23 03:09:44 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 03:09:44 localhost python3[72376]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 03:09:45 localhost python3[72438]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771834184.8290167-109921-191457539442995/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:45 localhost python3[72467]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771834184.8290167-109921-191457539442995/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:46 localhost python3[72496]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771834184.8290167-109921-191457539442995/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:46 localhost python3[72525]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771834184.8290167-109921-191457539442995/source 
dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:47 localhost python3[72554]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771834184.8290167-109921-191457539442995/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:47 localhost python3[72583]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771834184.8290167-109921-191457539442995/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:48 localhost systemd-rc-local-generator[72624]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:09:48 localhost systemd-sysv-generator[72628]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 03:09:48 localhost python3[72599]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 23 03:09:48 localhost systemd[1]: Reloading. Feb 23 03:09:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:09:49 localhost python3[72651]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 03:09:49 localhost systemd[1]: Reloading. Feb 23 03:09:49 localhost systemd-rc-local-generator[72680]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:09:49 localhost systemd-sysv-generator[72684]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:09:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:09:49 localhost systemd[1]: Starting ceilometer_agent_compute container... Feb 23 03:09:50 localhost tripleo-start-podman-container[72690]: Creating additional drop-in dependency for "ceilometer_agent_compute" (a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17) Feb 23 03:09:50 localhost systemd[1]: Reloading. Feb 23 03:09:50 localhost systemd-rc-local-generator[72751]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:09:50 localhost systemd-sysv-generator[72754]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:09:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:09:50 localhost systemd[1]: Started ceilometer_agent_compute container. Feb 23 03:09:50 localhost python3[72775]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 03:09:51 localhost systemd[1]: Reloading. Feb 23 03:09:51 localhost systemd-rc-local-generator[72801]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:09:51 localhost systemd-sysv-generator[72804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:09:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:09:51 localhost systemd[1]: Starting ceilometer_agent_ipmi container... Feb 23 03:09:51 localhost systemd[1]: Started ceilometer_agent_ipmi container. Feb 23 03:09:51 localhost systemd[1]: Stopping User Manager for UID 0... Feb 23 03:09:51 localhost systemd[72058]: Activating special unit Exit the Session... Feb 23 03:09:51 localhost systemd[72058]: Stopped target Main User Target. Feb 23 03:09:51 localhost systemd[72058]: Stopped target Basic System. Feb 23 03:09:51 localhost systemd[72058]: Stopped target Paths. Feb 23 03:09:51 localhost systemd[72058]: Stopped target Sockets. Feb 23 03:09:51 localhost systemd[72058]: Stopped target Timers. 
Feb 23 03:09:51 localhost systemd[72058]: Stopped Daily Cleanup of User's Temporary Directories. Feb 23 03:09:51 localhost systemd[72058]: Closed D-Bus User Message Bus Socket. Feb 23 03:09:51 localhost systemd[72058]: Stopped Create User's Volatile Files and Directories. Feb 23 03:09:51 localhost systemd[72058]: Removed slice User Application Slice. Feb 23 03:09:51 localhost systemd[72058]: Reached target Shutdown. Feb 23 03:09:51 localhost systemd[72058]: Finished Exit the Session. Feb 23 03:09:51 localhost systemd[72058]: Reached target Exit the Session. Feb 23 03:09:51 localhost systemd[1]: user@0.service: Deactivated successfully. Feb 23 03:09:51 localhost systemd[1]: Stopped User Manager for UID 0. Feb 23 03:09:51 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Feb 23 03:09:51 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Feb 23 03:09:51 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Feb 23 03:09:51 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Feb 23 03:09:51 localhost systemd[1]: Removed slice User Slice of UID 0. Feb 23 03:09:52 localhost python3[72842]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 03:09:52 localhost systemd[1]: Reloading. Feb 23 03:09:52 localhost systemd-rc-local-generator[72864]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:09:52 localhost systemd-sysv-generator[72870]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:09:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 23 03:09:52 localhost systemd[1]: Starting logrotate_crond container... Feb 23 03:09:52 localhost systemd[1]: Started logrotate_crond container. Feb 23 03:09:53 localhost python3[72911]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 03:09:53 localhost systemd[1]: Reloading. Feb 23 03:09:53 localhost systemd-rc-local-generator[72935]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:09:53 localhost systemd-sysv-generator[72941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:09:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:09:53 localhost systemd[1]: Starting nova_migration_target container... Feb 23 03:09:53 localhost systemd[1]: Started nova_migration_target container. Feb 23 03:09:54 localhost python3[72979]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 03:09:54 localhost systemd[1]: Reloading. Feb 23 03:09:54 localhost systemd-rc-local-generator[73001]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:09:54 localhost systemd-sysv-generator[73010]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 03:09:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:09:54 localhost systemd[1]: Starting ovn_controller container... Feb 23 03:09:55 localhost tripleo-start-podman-container[73019]: Creating additional drop-in dependency for "ovn_controller" (393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2) Feb 23 03:09:55 localhost systemd[1]: Reloading. Feb 23 03:09:55 localhost systemd-rc-local-generator[73077]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:09:55 localhost systemd-sysv-generator[73080]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:09:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:09:55 localhost systemd[1]: Started ovn_controller container. Feb 23 03:09:56 localhost python3[73102]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 03:09:56 localhost systemd[1]: Reloading. Feb 23 03:09:56 localhost systemd-sysv-generator[73129]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:09:56 localhost systemd-rc-local-generator[73126]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 23 03:09:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:09:56 localhost systemd[1]: Starting ovn_metadata_agent container... Feb 23 03:09:56 localhost systemd[1]: Started ovn_metadata_agent container. Feb 23 03:09:56 localhost python3[73183]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:58 localhost python3[73304]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005626465 step=4 update_config_hash_only=False Feb 23 03:09:59 localhost python3[73320]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:09:59 localhost python3[73337]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True Feb 23 03:10:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. 
Feb 23 03:10:07 localhost podman[73338]: 2026-02-23 08:10:07.001055317 +0000 UTC m=+0.077567813 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, container_name=collectd, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container) Feb 23 03:10:07 localhost podman[73338]: 2026-02-23 08:10:07.012679122 +0000 UTC m=+0.089191588 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, release=1766032510, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, version=17.1.13, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:10:07 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:10:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:10:07 localhost systemd[1]: tmp-crun.pNTP4S.mount: Deactivated successfully. Feb 23 03:10:07 localhost podman[73356]: 2026-02-23 08:10:07.145624998 +0000 UTC m=+0.083662209 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.13, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Feb 23 03:10:07 localhost podman[73356]: 2026-02-23 08:10:07.172472739 +0000 UTC m=+0.110509960 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, tcib_managed=true, version=17.1.13, release=1766032510, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Feb 23 03:10:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:10:07 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:10:07 localhost podman[73384]: 2026-02-23 08:10:07.278993636 +0000 UTC m=+0.083005559 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.buildah.version=1.41.5, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, release=1766032510, name=rhosp-rhel9/openstack-cron) Feb 23 03:10:07 localhost podman[73384]: 2026-02-23 08:10:07.317763922 +0000 UTC m=+0.121775805 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, build-date=2026-01-12T22:10:15Z, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4) Feb 23 03:10:07 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:10:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. 
Feb 23 03:10:07 localhost podman[73403]: 2026-02-23 08:10:07.439014739 +0000 UTC m=+0.084027720 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:10:07 localhost podman[73403]: 2026-02-23 08:10:07.496851468 +0000 UTC m=+0.141864439 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, version=17.1.13) Feb 23 03:10:07 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:10:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:10:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:10:09 localhost podman[73431]: 2026-02-23 08:10:09.000181568 +0000 UTC m=+0.079859753 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, release=1766032510, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:10:09 localhost systemd[1]: tmp-crun.RkQtHG.mount: Deactivated successfully. Feb 23 03:10:09 localhost podman[73432]: 2026-02-23 08:10:09.071754636 +0000 UTC m=+0.149528923 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, config_id=tripleo_step3, container_name=iscsid, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:10:09 localhost podman[73432]: 2026-02-23 08:10:09.083789294 +0000 UTC m=+0.161563611 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1766032510, container_name=iscsid, managed_by=tripleo_ansible, batch=17.1_20260112.1, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, version=17.1.13, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, config_id=tripleo_step3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=) Feb 23 03:10:09 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:10:09 localhost podman[73431]: 2026-02-23 08:10:09.318842792 +0000 UTC m=+0.398520987 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=nova_migration_target, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container) Feb 23 03:10:09 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:10:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:10:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:10:11 localhost systemd[1]: tmp-crun.AEMQBT.mount: Deactivated successfully. 
Feb 23 03:10:12 localhost podman[73470]: 2026-02-23 08:10:12.002643667 +0000 UTC m=+0.083731711 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, version=17.1.13, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:10:12 localhost podman[73470]: 2026-02-23 08:10:12.023012491 +0000 UTC m=+0.104100575 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4) Feb 23 03:10:12 localhost systemd[1]: tmp-crun.PvMGUs.mount: Deactivated successfully. Feb 23 03:10:12 localhost podman[73471]: 2026-02-23 08:10:12.06127227 +0000 UTC m=+0.137132404 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1766032510, architecture=x86_64, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z) Feb 23 03:10:12 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:10:12 localhost podman[73471]: 2026-02-23 08:10:12.139987847 +0000 UTC m=+0.215848011 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, release=1766032510, version=17.1.13, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 23 03:10:12 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:10:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:10:15 localhost podman[73516]: 2026-02-23 08:10:15.007191292 +0000 UTC m=+0.081342888 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, 
com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, architecture=x86_64, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, container_name=metrics_qdr) Feb 23 03:10:15 localhost podman[73516]: 2026-02-23 08:10:15.201541805 +0000 UTC m=+0.275693411 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, distribution-scope=public, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Feb 23 03:10:15 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:10:21 localhost snmpd[68131]: empty variable list in _query Feb 23 03:10:21 localhost snmpd[68131]: empty variable list in _query Feb 23 03:10:29 localhost sshd[73625]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:10:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. 
Feb 23 03:10:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:10:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:10:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:10:38 localhost podman[73627]: 2026-02-23 08:10:38.019143766 +0000 UTC m=+0.087357592 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, tcib_managed=true, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, release=1766032510, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64) Feb 23 03:10:38 localhost podman[73627]: 2026-02-23 08:10:38.025913104 +0000 UTC m=+0.094126880 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, version=17.1.13, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3) Feb 23 03:10:38 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:10:38 localhost podman[73630]: 2026-02-23 08:10:38.068464885 +0000 UTC m=+0.130077149 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20260112.1, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.13, tcib_managed=true, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, vcs-type=git, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=) Feb 23 03:10:38 localhost podman[73630]: 2026-02-23 08:10:38.126420787 +0000 UTC m=+0.188033071 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, batch=17.1_20260112.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, 
managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, version=17.1.13, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, config_id=tripleo_step4) Feb 23 03:10:38 localhost podman[73628]: 2026-02-23 08:10:38.132670818 +0000 UTC m=+0.200896394 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 23 03:10:38 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:10:38 localhost podman[73629]: 2026-02-23 08:10:38.169083592 +0000 UTC m=+0.234437890 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, vcs-type=git, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, release=1766032510, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:10:38 localhost podman[73629]: 2026-02-23 08:10:38.189889458 +0000 UTC m=+0.255243766 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, 
batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:10:38 localhost podman[73628]: 2026-02-23 08:10:38.190227698 +0000 UTC m=+0.258453244 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20260112.1) Feb 23 03:10:38 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:10:38 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:10:39 localhost systemd[1]: tmp-crun.AcpGDR.mount: Deactivated successfully. Feb 23 03:10:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:10:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:10:40 localhost systemd[1]: tmp-crun.Z3JBVJ.mount: Deactivated successfully. 
Feb 23 03:10:40 localhost podman[73717]: 2026-02-23 08:10:40.013022967 +0000 UTC m=+0.088168727 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1) Feb 23 03:10:40 localhost podman[73718]: 2026-02-23 08:10:40.05922262 +0000 UTC m=+0.132588996 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid) Feb 23 03:10:40 localhost podman[73718]: 2026-02-23 08:10:40.0697103 +0000 UTC m=+0.143076696 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, distribution-scope=public, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 23 03:10:40 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:10:40 localhost podman[73717]: 2026-02-23 08:10:40.416995479 +0000 UTC m=+0.492141249 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:10:40 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:10:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:10:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:10:42 localhost podman[73758]: 2026-02-23 08:10:42.996561529 +0000 UTC m=+0.066692591 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1766032510, io.openshift.expose-services=) Feb 23 03:10:43 localhost podman[73757]: 2026-02-23 08:10:43.05186651 +0000 UTC m=+0.126835729 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) 
Feb 23 03:10:43 localhost podman[73758]: 2026-02-23 08:10:43.06397524 +0000 UTC m=+0.134106292 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64) Feb 23 03:10:43 localhost podman[73757]: 2026-02-23 08:10:43.073715699 +0000 UTC m=+0.148684938 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, 
container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.buildah.version=1.41.5, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z) Feb 23 03:10:43 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:10:43 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:10:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:10:45 localhost systemd[1]: tmp-crun.56zVyR.mount: Deactivated successfully. 
Feb 23 03:10:46 localhost podman[73803]: 2026-02-23 08:10:46.006763927 +0000 UTC m=+0.087613691 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1766032510, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr) Feb 23 03:10:46 localhost podman[73803]: 2026-02-23 08:10:46.197734996 +0000 UTC m=+0.278584760 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, version=17.1.13, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1) Feb 23 03:10:46 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:11:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:11:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:11:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. 
Feb 23 03:11:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:11:09 localhost systemd[1]: tmp-crun.Z5qIYH.mount: Deactivated successfully. Feb 23 03:11:09 localhost podman[73837]: 2026-02-23 08:11:09.012777916 +0000 UTC m=+0.087862357 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1) Feb 23 03:11:09 localhost podman[73837]: 2026-02-23 08:11:09.033668985 +0000 UTC m=+0.108753446 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, version=17.1.13, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5) Feb 23 03:11:09 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:11:09 localhost podman[73838]: 2026-02-23 08:11:09.072054769 +0000 UTC m=+0.142046625 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1) Feb 23 03:11:09 localhost podman[73836]: 2026-02-23 08:11:09.109074951 +0000 UTC m=+0.183201443 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, version=17.1.13, batch=17.1_20260112.1, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:11:09 localhost podman[73838]: 2026-02-23 08:11:09.117580761 +0000 UTC m=+0.187572607 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, architecture=x86_64) Feb 23 03:11:09 localhost 
systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:11:09 localhost podman[73836]: 2026-02-23 08:11:09.14764788 +0000 UTC m=+0.221774362 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.5, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:11:09 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:11:09 localhost podman[73835]: 2026-02-23 08:11:09.165997642 +0000 UTC m=+0.243073724 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, version=17.1.13) Feb 23 03:11:09 localhost podman[73835]: 2026-02-23 08:11:09.177791942 +0000 UTC m=+0.254868094 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
tcib_managed=true, build-date=2026-01-12T22:10:15Z, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container) Feb 23 03:11:09 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:11:10 localhost sshd[73928]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:11:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:11:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:11:11 localhost systemd[1]: tmp-crun.iaUq8t.mount: Deactivated successfully. 
Feb 23 03:11:11 localhost podman[73929]: 2026-02-23 08:11:11.019029344 +0000 UTC m=+0.091013993 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, build-date=2026-01-12T23:32:04Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Feb 23 03:11:11 localhost podman[73930]: 2026-02-23 08:11:11.067695712 +0000 UTC m=+0.137200475 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, release=1766032510, distribution-scope=public, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, version=17.1.13, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:11:11 localhost podman[73930]: 2026-02-23 08:11:11.103790016 +0000 UTC m=+0.173294729 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
name=rhosp-rhel9/openstack-iscsid, vcs-type=git, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=iscsid, batch=17.1_20260112.1, release=1766032510, version=17.1.13, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:11:11 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:11:11 localhost podman[73929]: 2026-02-23 08:11:11.427853146 +0000 UTC m=+0.499837785 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, tcib_managed=true, version=17.1.13, io.openshift.expose-services=, url=https://www.redhat.com, release=1766032510, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) Feb 23 03:11:11 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:11:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:11:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:11:14 localhost systemd[1]: tmp-crun.3fevdy.mount: Deactivated successfully. 
Feb 23 03:11:14 localhost podman[73970]: 2026-02-23 08:11:14.023375203 +0000 UTC m=+0.092389457 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, architecture=x86_64, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller) Feb 23 03:11:14 localhost systemd[1]: tmp-crun.jcCd8i.mount: Deactivated successfully. Feb 23 03:11:14 localhost podman[73971]: 2026-02-23 08:11:14.065053776 +0000 UTC m=+0.132880164 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible) Feb 23 03:11:14 localhost podman[73970]: 2026-02-23 08:11:14.079732726 +0000 UTC m=+0.148746970 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, version=17.1.13, release=1766032510, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:11:14 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:11:14 localhost podman[73971]: 2026-02-23 08:11:14.13382661 +0000 UTC m=+0.201652958 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, 
maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true) Feb 23 03:11:14 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:11:15 localhost sshd[74017]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:11:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:11:17 localhost podman[74019]: 2026-02-23 08:11:17.000350144 +0000 UTC m=+0.076469589 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, 
architecture=x86_64, container_name=metrics_qdr, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:11:17 localhost podman[74019]: 2026-02-23 08:11:17.232779141 +0000 UTC m=+0.308898586 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, release=1766032510, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.5, vcs-type=git, config_id=tripleo_step1) Feb 23 03:11:17 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 03:11:26 localhost podman[74149]: 2026-02-23 08:11:26.645587752 +0000 UTC m=+0.092377356 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, version=7, vcs-type=git, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph) Feb 23 03:11:26 localhost podman[74149]: 2026-02-23 08:11:26.726257919 +0000 UTC m=+0.173047513 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vendor=Red Hat, Inc., ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, RELEASE=main, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Feb 23 03:11:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:11:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:11:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:11:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:11:40 localhost podman[74292]: 2026-02-23 08:11:40.019416284 +0000 UTC m=+0.088034962 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, 
io.openshift.expose-services=, architecture=x86_64, vcs-type=git, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, managed_by=tripleo_ansible, version=17.1.13, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:11:40 localhost podman[74292]: 2026-02-23 08:11:40.030209665 +0000 UTC m=+0.098828303 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, tcib_managed=true, version=17.1.13, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 
'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:11:40 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:11:40 localhost systemd[1]: tmp-crun.1liCx1.mount: Deactivated successfully. 
Feb 23 03:11:40 localhost podman[74293]: 2026-02-23 08:11:40.122204678 +0000 UTC m=+0.190347172 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vendor=Red Hat, Inc., container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:11:40 localhost podman[74293]: 2026-02-23 08:11:40.134785552 +0000 UTC m=+0.202928036 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, release=1766032510, vcs-type=git, batch=17.1_20260112.1, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, version=17.1.13, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:11:40 localhost podman[74295]: 2026-02-23 08:11:40.164661036 +0000 UTC m=+0.226531068 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, 
config_id=tripleo_step4, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team) Feb 23 03:11:40 localhost podman[74294]: 2026-02-23 08:11:40.228022624 +0000 UTC m=+0.292995931 container health_status 
a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, tcib_managed=true, batch=17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, version=17.1.13, maintainer=OpenStack TripleO Team) Feb 23 03:11:40 localhost podman[74295]: 2026-02-23 08:11:40.244019152 +0000 UTC m=+0.305889184 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, version=17.1.13, release=1766032510, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible) Feb 23 03:11:40 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:11:40 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:11:40 localhost podman[74294]: 2026-02-23 08:11:40.25997584 +0000 UTC m=+0.324949137 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, release=1766032510, version=17.1.13, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4) Feb 23 03:11:40 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:11:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:11:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. 
Feb 23 03:11:42 localhost podman[74381]: 2026-02-23 08:11:42.008705874 +0000 UTC m=+0.081902875 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack 
TripleO Team, tcib_managed=true, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:11:42 localhost podman[74382]: 2026-02-23 08:11:41.987614929 +0000 UTC m=+0.061376567 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, tcib_managed=true, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3) Feb 23 03:11:42 localhost podman[74382]: 2026-02-23 08:11:42.070781893 +0000 UTC m=+0.144543461 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, release=1766032510, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container) Feb 23 03:11:42 localhost systemd[1]: 
828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:11:42 localhost podman[74381]: 2026-02-23 08:11:42.369667042 +0000 UTC m=+0.442864053 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, 
description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:11:42 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:11:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:11:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:11:45 localhost systemd[1]: tmp-crun.2awI7B.mount: Deactivated successfully. 
Feb 23 03:11:45 localhost podman[74422]: 2026-02-23 08:11:45.005292985 +0000 UTC m=+0.080562765 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.13, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Feb 23 03:11:45 localhost systemd[1]: tmp-crun.Drfm0X.mount: Deactivated successfully. Feb 23 03:11:45 localhost podman[74422]: 2026-02-23 08:11:45.055910093 +0000 UTC m=+0.131179802 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1766032510, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 23 03:11:45 localhost podman[74423]: 2026-02-23 08:11:45.056047567 +0000 UTC m=+0.131509072 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:11:45 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:11:45 localhost podman[74423]: 2026-02-23 08:11:45.126933215 +0000 UTC m=+0.202394700 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Feb 23 03:11:45 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:11:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:11:48 localhost podman[74468]: 2026-02-23 08:11:48.094357695 +0000 UTC m=+0.168744861 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, release=1766032510, version=17.1.13, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git) Feb 23 03:11:48 localhost podman[74468]: 2026-02-23 08:11:48.276223867 +0000 UTC m=+0.350611013 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.5, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, 
cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, release=1766032510, com.redhat.component=openstack-qdrouterd-container, version=17.1.13) Feb 23 03:11:48 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:12:01 localhost sshd[74497]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. 
Feb 23 03:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:12:11 localhost podman[74501]: 2026-02-23 08:12:11.023511392 +0000 UTC m=+0.080931384 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, version=17.1.13, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, container_name=ceilometer_agent_compute) Feb 23 03:12:11 localhost systemd[1]: tmp-crun.qwZ5CW.mount: Deactivated successfully. 
Feb 23 03:12:11 localhost podman[74499]: 2026-02-23 08:12:11.068481868 +0000 UTC m=+0.137424875 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, io.openshift.expose-services=, version=17.1.13, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, vcs-type=git, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:12:11 localhost podman[74501]: 2026-02-23 08:12:11.074317198 +0000 UTC m=+0.131737110 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=) Feb 23 03:12:11 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:12:11 localhost podman[74499]: 2026-02-23 08:12:11.125545746 +0000 UTC m=+0.194488833 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd) Feb 23 03:12:11 localhost podman[74507]: 2026-02-23 08:12:11.124486083 +0000 UTC m=+0.180003787 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:12:11 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:12:11 localhost podman[74500]: 2026-02-23 08:12:11.165015511 +0000 UTC m=+0.229517552 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, release=1766032510, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, version=17.1.13) Feb 23 03:12:11 localhost podman[74507]: 2026-02-23 08:12:11.204469367 +0000 UTC m=+0.259986981 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, release=1766032510, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.buildah.version=1.41.5) Feb 23 03:12:11 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:12:11 localhost podman[74500]: 2026-02-23 08:12:11.228540289 +0000 UTC m=+0.293042380 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1766032510, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, managed_by=tripleo_ansible) Feb 23 03:12:11 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:12:12 localhost systemd[1]: tmp-crun.rRUpmh.mount: Deactivated successfully. Feb 23 03:12:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:12:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. 
Feb 23 03:12:13 localhost podman[74593]: 2026-02-23 08:12:13.009672854 +0000 UTC m=+0.076627422 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 23 03:12:13 localhost systemd[1]: tmp-crun.tJojNl.mount: Deactivated successfully. Feb 23 03:12:13 localhost podman[74594]: 2026-02-23 08:12:13.076855674 +0000 UTC m=+0.138872590 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, version=17.1.13, container_name=iscsid, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5) Feb 23 03:12:13 localhost podman[74594]: 2026-02-23 08:12:13.088850373 +0000 UTC m=+0.150867369 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, version=17.1.13, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, 
architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, tcib_managed=true, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team) Feb 23 03:12:13 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:12:13 localhost podman[74593]: 2026-02-23 08:12:13.372256394 +0000 UTC m=+0.439210962 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.buildah.version=1.41.5, config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public) Feb 23 03:12:13 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:12:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:12:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:12:16 localhost podman[74636]: 2026-02-23 08:12:16.001897711 +0000 UTC m=+0.075981222 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, release=1766032510, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:12:16 localhost podman[74637]: 2026-02-23 08:12:16.041708887 +0000 UTC m=+0.115616933 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.13, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, architecture=x86_64, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 23 03:12:16 localhost podman[74636]: 2026-02-23 08:12:16.072090703 +0000 UTC m=+0.146174204 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, distribution-scope=public, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, container_name=ovn_controller, architecture=x86_64) Feb 23 03:12:16 localhost podman[74637]: 2026-02-23 08:12:16.080994247 +0000 UTC m=+0.154902273 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5) Feb 23 03:12:16 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:12:16 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:12:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:12:18 localhost podman[74683]: 2026-02-23 08:12:18.992513487 +0000 UTC m=+0.070824273 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, 
cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, architecture=x86_64) Feb 23 03:12:19 localhost podman[74683]: 2026-02-23 08:12:19.178322312 +0000 UTC m=+0.256633068 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, io.buildah.version=1.41.5, tcib_managed=true, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1) Feb 23 03:12:19 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:12:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:12:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:12:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. 
Feb 23 03:12:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:12:42 localhost podman[74789]: 2026-02-23 08:12:42.047081251 +0000 UTC m=+0.118976447 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.expose-services=, version=17.1.13, vendor=Red Hat, Inc.) 
Feb 23 03:12:42 localhost podman[74789]: 2026-02-23 08:12:42.05515561 +0000 UTC m=+0.127050826 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=1766032510, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, vcs-type=git, 
maintainer=OpenStack TripleO Team, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 23 03:12:42 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:12:42 localhost podman[74792]: 2026-02-23 08:12:42.10449786 +0000 UTC m=+0.170444172 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public) Feb 23 03:12:42 localhost podman[74790]: 2026-02-23 08:12:42.08890635 +0000 UTC m=+0.160376502 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.13, build-date=2026-01-12T22:10:15Z, 
managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.openshift.expose-services=, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4) Feb 23 03:12:42 localhost podman[74791]: 2026-02-23 08:12:42.152060355 +0000 UTC m=+0.222833136 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1766032510, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, vcs-type=git, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_id=tripleo_step4) Feb 23 03:12:42 localhost podman[74790]: 2026-02-23 08:12:42.167028877 +0000 UTC m=+0.238499039 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, release=1766032510, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 23 03:12:42 localhost podman[74792]: 2026-02-23 08:12:42.177167449 +0000 UTC m=+0.243113801 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, 
config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public) Feb 23 03:12:42 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:12:42 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:12:42 localhost podman[74791]: 2026-02-23 08:12:42.204107969 +0000 UTC m=+0.274880740 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:47Z, vcs-type=git) Feb 23 03:12:42 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:12:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:12:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:12:43 localhost systemd[1]: tmp-crun.OpJx6W.mount: Deactivated successfully. 
Feb 23 03:12:43 localhost podman[74928]: 2026-02-23 08:12:43.604192534 +0000 UTC m=+0.069529083 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, container_name=nova_migration_target, architecture=x86_64, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, version=17.1.13, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true) Feb 23 03:12:43 localhost systemd[1]: tmp-crun.1FgwZo.mount: Deactivated successfully. Feb 23 03:12:43 localhost podman[74929]: 2026-02-23 08:12:43.658081934 +0000 UTC m=+0.117543742 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, container_name=iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team) Feb 23 03:12:43 localhost python3[74927]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 23 03:12:43 localhost podman[74929]: 2026-02-23 08:12:43.693366702 +0000 UTC m=+0.152828440 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 
iscsid, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, build-date=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 iscsid, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step3, tcib_managed=true, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:12:43 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:12:43 localhost podman[74928]: 2026-02-23 08:12:43.949157352 +0000 UTC m=+0.414493911 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:12:43 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. 
Feb 23 03:12:44 localhost python3[75015]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771834363.3789005-114201-33885045936169/source _original_basename=tmpphxg6vr0 follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:12:44 localhost python3[75045]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 03:12:45 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:12:45 localhost recover_tripleo_nova_virtqemud[75097]: 62457 Feb 23 03:12:45 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:12:45 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:12:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:12:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:12:46 localhost podman[75220]: 2026-02-23 08:12:46.432257302 +0000 UTC m=+0.077711364 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, io.buildah.version=1.41.5, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller) Feb 23 03:12:46 localhost podman[75220]: 2026-02-23 08:12:46.479361603 +0000 UTC m=+0.124815655 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, version=17.1.13, build-date=2026-01-12T22:36:40Z, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, vcs-type=git, 
maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, container_name=ovn_controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:12:46 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:12:46 localhost ansible-async_wrapper.py[75219]: Invoked with 141745225641 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771834365.9652736-114325-278641367688784/AnsiballZ_command.py _ Feb 23 03:12:46 localhost ansible-async_wrapper.py[75269]: Starting module and watcher Feb 23 03:12:46 localhost ansible-async_wrapper.py[75269]: Start watching 75270 (3600) Feb 23 03:12:46 localhost ansible-async_wrapper.py[75270]: Start module (75270) Feb 23 03:12:46 localhost ansible-async_wrapper.py[75219]: Return async_wrapper task started. 
Feb 23 03:12:46 localhost podman[75221]: 2026-02-23 08:12:46.485305136 +0000 UTC m=+0.128281113 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible) Feb 23 03:12:46 localhost podman[75221]: 2026-02-23 08:12:46.568756428 +0000 UTC m=+0.211732395 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1766032510, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z)
Feb 23 03:12:46 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully.
Feb 23 03:12:46 localhost python3[75290]: ansible-ansible.legacy.async_status Invoked with jid=141745225641.75219 mode=status _async_dir=/tmp/.ansible_async
Feb 23 03:12:48 localhost sshd[75334]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 03:12:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.
Feb 23 03:12:49 localhost podman[75347]: 2026-02-23 08:12:49.32908626 +0000 UTC m=+0.081583565 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.13, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, config_id=tripleo_step1, vendor=Red Hat, Inc., container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1)
Feb 23 03:12:49 localhost podman[75347]: 2026-02-23 08:12:49.519837166 +0000 UTC m=+0.272334461 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc.,
cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 23 03:12:49 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully.
Feb 23 03:12:50 localhost puppet-user[75289]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 23 03:12:50 localhost puppet-user[75289]: (file: /etc/puppet/hiera.yaml)
Feb 23 03:12:50 localhost puppet-user[75289]: Warning: Undefined variable '::deploy_config_name';
Feb 23 03:12:50 localhost puppet-user[75289]: (file & line not available)
Feb 23 03:12:50 localhost puppet-user[75289]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 23 03:12:50 localhost puppet-user[75289]: (file & line not available)
Feb 23 03:12:50 localhost puppet-user[75289]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Feb 23 03:12:50 localhost puppet-user[75289]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 23 03:12:50 localhost puppet-user[75289]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 23 03:12:50 localhost puppet-user[75289]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Feb 23 03:12:50 localhost puppet-user[75289]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 23 03:12:50 localhost puppet-user[75289]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 23 03:12:50 localhost puppet-user[75289]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Feb 23 03:12:50 localhost puppet-user[75289]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 23 03:12:50 localhost puppet-user[75289]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 23 03:12:50 localhost puppet-user[75289]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Feb 23 03:12:50 localhost puppet-user[75289]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 23 03:12:50 localhost puppet-user[75289]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 23 03:12:50 localhost puppet-user[75289]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Feb 23 03:12:50 localhost puppet-user[75289]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 23 03:12:50 localhost puppet-user[75289]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 23 03:12:50 localhost puppet-user[75289]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Feb 23 03:12:50 localhost puppet-user[75289]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 23 03:12:50 localhost puppet-user[75289]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 23 03:12:50 localhost puppet-user[75289]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Feb 23 03:12:50 localhost puppet-user[75289]: Notice: Compiled catalog for np0005626465.localdomain in environment production in 0.21 seconds
Feb 23 03:12:50 localhost puppet-user[75289]: Notice: Applied catalog in 0.30 seconds
Feb 23 03:12:50 localhost puppet-user[75289]: Application:
Feb 23 03:12:50 localhost puppet-user[75289]: Initial environment: production
Feb 23 03:12:50 localhost puppet-user[75289]: Converged environment: production
Feb 23 03:12:50 localhost puppet-user[75289]: Run mode: user
Feb 23 03:12:50 localhost puppet-user[75289]: Changes:
Feb 23 03:12:50 localhost puppet-user[75289]: Events:
Feb 23 03:12:50 localhost puppet-user[75289]: Resources:
Feb 23 03:12:50 localhost puppet-user[75289]: Total: 19
Feb 23 03:12:50 localhost puppet-user[75289]: Time:
Feb 23 03:12:50 localhost puppet-user[75289]: Package: 0.00
Feb 23 03:12:50 localhost puppet-user[75289]: Schedule: 0.00
Feb 23 03:12:50 localhost puppet-user[75289]: Exec: 0.01
Feb 23 03:12:50 localhost puppet-user[75289]: Augeas: 0.01
Feb 23 03:12:50 localhost puppet-user[75289]: File: 0.02
Feb 23 03:12:50 localhost puppet-user[75289]: Service: 0.05
Feb 23 03:12:50 localhost puppet-user[75289]: Config retrieval: 0.28
Feb 23 03:12:50 localhost puppet-user[75289]: Transaction evaluation: 0.29
Feb 23 03:12:50 localhost puppet-user[75289]: Catalog application: 0.30
Feb 23 03:12:50 localhost puppet-user[75289]: Last run: 1771834370
Feb 23 03:12:50 localhost puppet-user[75289]: Filebucket: 0.00
Feb 23 03:12:50 localhost puppet-user[75289]: Total: 0.30
Feb 23 03:12:50 localhost puppet-user[75289]: Version:
Feb 23 03:12:50 localhost puppet-user[75289]: Config: 1771834370
Feb 23 03:12:50 localhost puppet-user[75289]: Puppet: 7.10.0
Feb 23 03:12:51 localhost ansible-async_wrapper.py[75270]: Module complete (75270)
Feb 23 03:12:51 localhost ansible-async_wrapper.py[75269]: Done in kid B.
Feb 23 03:12:57 localhost python3[75457]: ansible-ansible.legacy.async_status Invoked with jid=141745225641.75219 mode=status _async_dir=/tmp/.ansible_async
Feb 23 03:12:58 localhost python3[75473]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 23 03:12:58 localhost python3[75489]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 03:12:59 localhost python3[75539]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 03:12:59 localhost python3[75557]: ansible-ansible.legacy.file Invoked with
setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpcnezcyy7 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 23 03:13:00 localhost python3[75587]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:13:01 localhost python3[75692]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Feb 23 03:13:02 localhost python3[75711]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:13:03 localhost python3[75743]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 23 03:13:04 localhost python3[75793]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 03:13:04 localhost python3[75811]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:13:04 localhost python3[75873]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 03:13:05 localhost python3[75891]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:13:05 localhost python3[75953]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 03:13:05 localhost python3[75971]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:13:06 localhost python3[76033]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 03:13:06 localhost python3[76051]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:13:07 localhost python3[76081]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 03:13:07 localhost systemd[1]: Reloading.
Feb 23 03:13:07 localhost systemd-rc-local-generator[76107]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:13:07 localhost systemd-sysv-generator[76112]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:13:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:13:08 localhost python3[76167]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 03:13:08 localhost python3[76185]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:13:08 localhost python3[76247]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 23 03:13:09 localhost python3[76265]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 03:13:09 localhost python3[76295]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 03:13:09 localhost systemd[1]: Reloading.
Feb 23 03:13:09 localhost systemd-rc-local-generator[76318]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 03:13:09 localhost systemd-sysv-generator[76321]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 03:13:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 03:13:10 localhost systemd[1]: Starting Create netns directory...
Feb 23 03:13:10 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 23 03:13:10 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 23 03:13:10 localhost systemd[1]: Finished Create netns directory.
Feb 23 03:13:11 localhost python3[76351]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Feb 23 03:13:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.
Feb 23 03:13:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.
Feb 23 03:13:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.
Feb 23 03:13:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.
Feb 23 03:13:13 localhost systemd[1]: tmp-crun.4RMiuk.mount: Deactivated successfully.
Feb 23 03:13:13 localhost podman[76395]: 2026-02-23 08:13:13.031075019 +0000 UTC m=+0.099734563 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro',
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, architecture=x86_64, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true)
Feb 23 03:13:13 localhost podman[76395]: 2026-02-23 08:13:13.036023612 +0000 UTC m=+0.104683176 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, tcib_managed=true, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13)
Feb 23 03:13:13 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully.
Feb 23 03:13:13 localhost podman[76396]: 2026-02-23 08:13:13.08723205 +0000 UTC m=+0.149476596 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.13, batch=17.1_20260112.1, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:13:13 localhost podman[76397]: 2026-02-23 08:13:13.147681362 +0000 UTC m=+0.206424931 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, version=17.1.13, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com) Feb 23 03:13:13 localhost podman[76394]: 2026-02-23 08:13:13.010408383 +0000 UTC m=+0.081004767 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, version=17.1.13, io.buildah.version=1.41.5, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat 
OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:13:13 localhost podman[76396]: 2026-02-23 08:13:13.171724502 +0000 UTC m=+0.233969118 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.13, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, batch=17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com) Feb 23 03:13:13 localhost podman[76397]: 2026-02-23 08:13:13.18268938 +0000 UTC m=+0.241432929 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, url=https://www.redhat.com, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 23 03:13:13 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:13:13 localhost podman[76394]: 2026-02-23 08:13:13.197194347 +0000 UTC m=+0.267790791 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510) Feb 23 03:13:13 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:13:13 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:13:13 localhost python3[76498]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Feb 23 03:13:13 localhost podman[76536]: 2026-02-23 08:13:13.891721305 +0000 UTC m=+0.072919118 container create 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, version=17.1.13, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 
'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:13:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:13:13 localhost systemd[1]: Started libpod-conmon-87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.scope. 
Feb 23 03:13:13 localhost podman[76536]: 2026-02-23 08:13:13.857381767 +0000 UTC m=+0.038579630 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 23 03:13:13 localhost systemd[1]: Started libcrun container. Feb 23 03:13:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:13:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34739f95a235983f1fb7239ff664b6ee8463858b2ebd021927c4c756cf80d140/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 03:13:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34739f95a235983f1fb7239ff664b6ee8463858b2ebd021927c4c756cf80d140/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 23 03:13:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34739f95a235983f1fb7239ff664b6ee8463858b2ebd021927c4c756cf80d140/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Feb 23 03:13:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34739f95a235983f1fb7239ff664b6ee8463858b2ebd021927c4c756cf80d140/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Feb 23 03:13:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34739f95a235983f1fb7239ff664b6ee8463858b2ebd021927c4c756cf80d140/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 23 03:13:14 localhost systemd[1]: tmp-crun.8dOMf4.mount: Deactivated successfully. 
Feb 23 03:13:14 localhost podman[76565]: 2026-02-23 08:13:14.053643084 +0000 UTC m=+0.076653213 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.13, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_id=tripleo_step4, container_name=nova_migration_target, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:13:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:13:14 localhost podman[76536]: 2026-02-23 08:13:14.058574326 +0000 UTC m=+0.239772209 container init 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, vcs-type=git, build-date=2026-01-12T23:32:04Z, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, managed_by=tripleo_ansible, 
cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1766032510, name=rhosp-rhel9/openstack-nova-compute) Feb 23 03:13:14 localhost podman[76550]: 2026-02-23 08:13:14.021564516 +0000 UTC m=+0.083010999 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, version=17.1.13, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:13:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:13:14 localhost podman[76536]: 2026-02-23 08:13:14.095150843 +0000 UTC m=+0.276348656 container start 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, distribution-scope=public, io.buildah.version=1.41.5, architecture=x86_64, 
batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:13:14 localhost systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring. Feb 23 03:13:14 localhost python3[76498]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044 --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume 
/var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 23 03:13:14 localhost systemd[1]: Created slice User Slice of UID 0. Feb 23 03:13:14 localhost podman[76550]: 2026-02-23 08:13:14.108863105 +0000 UTC m=+0.170309578 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, container_name=iscsid, version=17.1.13, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, release=1766032510, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Feb 23 03:13:14 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Feb 23 03:13:14 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:13:14 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Feb 23 03:13:14 localhost systemd[1]: Starting User Manager for UID 0... 
Feb 23 03:13:14 localhost podman[76595]: 2026-02-23 08:13:14.196786684 +0000 UTC m=+0.090451418 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) 
Feb 23 03:13:14 localhost podman[76595]: 2026-02-23 08:13:14.241760039 +0000 UTC m=+0.135424753 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, release=1766032510, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, container_name=nova_compute) Feb 23 03:13:14 localhost podman[76595]: unhealthy Feb 23 03:13:14 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:13:14 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed with result 'exit-code'. Feb 23 03:13:14 localhost systemd[76610]: Queued start job for default target Main User Target. Feb 23 03:13:14 localhost systemd[76610]: Created slice User Application Slice. 
Feb 23 03:13:14 localhost systemd[76610]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Feb 23 03:13:14 localhost systemd[76610]: Started Daily Cleanup of User's Temporary Directories. Feb 23 03:13:14 localhost systemd[76610]: Reached target Paths. Feb 23 03:13:14 localhost systemd[76610]: Reached target Timers. Feb 23 03:13:14 localhost systemd[76610]: Starting D-Bus User Message Bus Socket... Feb 23 03:13:14 localhost systemd[76610]: Starting Create User's Volatile Files and Directories... Feb 23 03:13:14 localhost systemd[76610]: Listening on D-Bus User Message Bus Socket. Feb 23 03:13:14 localhost systemd[76610]: Reached target Sockets. Feb 23 03:13:14 localhost systemd[76610]: Finished Create User's Volatile Files and Directories. Feb 23 03:13:14 localhost systemd[76610]: Reached target Basic System. Feb 23 03:13:14 localhost systemd[76610]: Reached target Main User Target. Feb 23 03:13:14 localhost systemd[76610]: Startup finished in 130ms. Feb 23 03:13:14 localhost systemd[1]: Started User Manager for UID 0. Feb 23 03:13:14 localhost systemd[1]: Started Session c10 of User root. Feb 23 03:13:14 localhost systemd[1]: session-c10.scope: Deactivated successfully. 
Feb 23 03:13:14 localhost podman[76565]: 2026-02-23 08:13:14.41770044 +0000 UTC m=+0.440710539 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, version=17.1.13, architecture=x86_64, 
release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:13:14 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:13:14 localhost podman[76699]: 2026-02-23 08:13:14.502911876 +0000 UTC m=+0.048688232 container create 4159e7dba2b21c2922ed3e54a2f6cf8769264134505a634abba4473c26468e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_wait_for_compute_service, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 
'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:13:14 localhost systemd[1]: Started libpod-conmon-4159e7dba2b21c2922ed3e54a2f6cf8769264134505a634abba4473c26468e87.scope. Feb 23 03:13:14 localhost systemd[1]: Started libcrun container. 
Feb 23 03:13:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32150165319e4041e46aa8b7e12f6ac7177181910ca12f0c93249648b47050ee/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Feb 23 03:13:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32150165319e4041e46aa8b7e12f6ac7177181910ca12f0c93249648b47050ee/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Feb 23 03:13:14 localhost podman[76699]: 2026-02-23 08:13:14.561091117 +0000 UTC m=+0.106867473 container init 4159e7dba2b21c2922ed3e54a2f6cf8769264134505a634abba4473c26468e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_wait_for_compute_service, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, url=https://www.redhat.com, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 23 03:13:14 localhost podman[76699]: 2026-02-23 08:13:14.566978189 +0000 UTC m=+0.112754545 container start 4159e7dba2b21c2922ed3e54a2f6cf8769264134505a634abba4473c26468e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, container_name=nova_wait_for_compute_service, config_id=tripleo_step5, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, distribution-scope=public, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 23 03:13:14 localhost podman[76699]: 2026-02-23 08:13:14.567129963 +0000 UTC m=+0.112906309 container attach 
4159e7dba2b21c2922ed3e54a2f6cf8769264134505a634abba4473c26468e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, container_name=nova_wait_for_compute_service, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step5, architecture=x86_64, io.buildah.version=1.41.5, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, managed_by=tripleo_ansible, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:13:14 localhost podman[76699]: 2026-02-23 08:13:14.477684288 +0000 UTC m=+0.023460664 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 23 03:13:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:13:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:13:17 localhost systemd[1]: tmp-crun.XCjD57.mount: Deactivated successfully. 
Feb 23 03:13:17 localhost podman[76722]: 2026-02-23 08:13:17.021904603 +0000 UTC m=+0.093561294 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1766032510, distribution-scope=public, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, version=17.1.13, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 23 03:13:17 localhost podman[76722]: 2026-02-23 08:13:17.041976991 +0000 UTC m=+0.113633722 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, distribution-scope=public, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ovn-controller, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.13) Feb 23 03:13:17 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:13:17 localhost podman[76723]: 2026-02-23 08:13:17.057902662 +0000 UTC m=+0.130067439 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, version=17.1.13, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, release=1766032510) Feb 23 03:13:17 localhost podman[76723]: 2026-02-23 08:13:17.087592096 +0000 UTC m=+0.159756873 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:13:17 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:13:18 localhost systemd[1]: tmp-crun.BqIffl.mount: Deactivated successfully. Feb 23 03:13:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:13:19 localhost systemd[1]: tmp-crun.TvOJDu.mount: Deactivated successfully. 
Feb 23 03:13:19 localhost podman[76770]: 2026-02-23 08:13:19.996551428 +0000 UTC m=+0.073946350 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, config_id=tripleo_step1, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13) Feb 23 03:13:20 localhost podman[76770]: 2026-02-23 08:13:20.221949892 +0000 UTC m=+0.299344824 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, container_name=metrics_qdr, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, release=1766032510, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:13:20 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:13:24 localhost systemd[1]: Stopping User Manager for UID 0... Feb 23 03:13:24 localhost systemd[76610]: Activating special unit Exit the Session... Feb 23 03:13:24 localhost systemd[76610]: Stopped target Main User Target. Feb 23 03:13:24 localhost systemd[76610]: Stopped target Basic System. Feb 23 03:13:24 localhost systemd[76610]: Stopped target Paths. Feb 23 03:13:24 localhost systemd[76610]: Stopped target Sockets. 
Feb 23 03:13:24 localhost systemd[76610]: Stopped target Timers. Feb 23 03:13:24 localhost systemd[76610]: Stopped Daily Cleanup of User's Temporary Directories. Feb 23 03:13:24 localhost systemd[76610]: Closed D-Bus User Message Bus Socket. Feb 23 03:13:24 localhost systemd[76610]: Stopped Create User's Volatile Files and Directories. Feb 23 03:13:24 localhost systemd[76610]: Removed slice User Application Slice. Feb 23 03:13:24 localhost systemd[76610]: Reached target Shutdown. Feb 23 03:13:24 localhost systemd[76610]: Finished Exit the Session. Feb 23 03:13:24 localhost systemd[76610]: Reached target Exit the Session. Feb 23 03:13:24 localhost systemd[1]: user@0.service: Deactivated successfully. Feb 23 03:13:24 localhost systemd[1]: Stopped User Manager for UID 0. Feb 23 03:13:24 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Feb 23 03:13:24 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Feb 23 03:13:24 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Feb 23 03:13:24 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Feb 23 03:13:24 localhost systemd[1]: Removed slice User Slice of UID 0. Feb 23 03:13:36 localhost sshd[76879]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:13:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:13:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:13:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:13:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:13:44 localhost podman[76883]: 2026-02-23 08:13:44.019897439 +0000 UTC m=+0.086482915 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:13:44 localhost systemd[1]: tmp-crun.uKYiCW.mount: Deactivated successfully. Feb 23 03:13:44 localhost podman[76883]: 2026-02-23 08:13:44.070293712 +0000 UTC m=+0.136879208 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, release=1766032510, 
com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc.) Feb 23 03:13:44 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:13:44 localhost podman[76882]: 2026-02-23 08:13:44.073070618 +0000 UTC m=+0.137647722 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, container_name=logrotate_crond, io.buildah.version=1.41.5, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, vcs-type=git, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:13:44 localhost podman[76889]: 2026-02-23 08:13:44.128343151 +0000 UTC m=+0.187768836 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc.) Feb 23 03:13:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. 
Feb 23 03:13:44 localhost podman[76881]: 2026-02-23 08:13:44.185702777 +0000 UTC m=+0.256021908 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_id=tripleo_step3, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.expose-services=, release=1766032510, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git) Feb 23 03:13:44 localhost podman[76881]: 2026-02-23 08:13:44.198773 +0000 UTC m=+0.269092181 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.13, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_id=tripleo_step3, architecture=x86_64, io.openshift.expose-services=, container_name=collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, 
build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5) Feb 23 03:13:44 localhost podman[76882]: 2026-02-23 08:13:44.207066826 +0000 UTC m=+0.271643970 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, 
architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, version=17.1.13, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com) Feb 23 03:13:44 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:13:44 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:13:44 localhost podman[76889]: 2026-02-23 08:13:44.239662801 +0000 UTC m=+0.299088536 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1766032510, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:13:44 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:13:44 localhost podman[76956]: 2026-02-23 08:13:44.296521132 +0000 UTC m=+0.139195110 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, container_name=iscsid, distribution-scope=public, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 23 03:13:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:13:44 localhost podman[76956]: 2026-02-23 08:13:44.335713879 +0000 UTC m=+0.178387837 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, io.openshift.expose-services=, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5) Feb 23 03:13:44 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:13:44 localhost podman[76994]: 2026-02-23 08:13:44.411175564 +0000 UTC m=+0.076636652 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, release=1766032510, com.redhat.component=openstack-nova-compute-container) Feb 23 03:13:44 localhost podman[76994]: 2026-02-23 08:13:44.469759649 +0000 UTC m=+0.135220777 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_id=tripleo_step5, tcib_managed=true, container_name=nova_compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, 
vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 23 03:13:44 localhost podman[76994]: unhealthy Feb 23 03:13:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:13:44 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:13:44 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed with result 'exit-code'. 
Feb 23 03:13:44 localhost podman[77017]: 2026-02-23 08:13:44.575254999 +0000 UTC m=+0.076159457 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.buildah.version=1.41.5, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:13:44 localhost podman[77017]: 2026-02-23 08:13:44.947798057 +0000 UTC m=+0.448702515 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 23 03:13:44 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:13:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:13:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:13:47 localhost podman[77040]: 2026-02-23 08:13:47.983727779 +0000 UTC m=+0.061475175 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.buildah.version=1.41.5, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, architecture=x86_64, url=https://www.redhat.com) Feb 23 03:13:48 localhost podman[77040]: 2026-02-23 08:13:48.006982715 +0000 UTC m=+0.084730161 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, vcs-type=git, config_id=tripleo_step4, batch=17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.5, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, tcib_managed=true) Feb 23 03:13:48 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:13:48 localhost podman[77041]: 2026-02-23 08:13:48.147419032 +0000 UTC m=+0.217720428 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public) Feb 23 03:13:48 localhost podman[77041]: 2026-02-23 08:13:48.21582458 +0000 UTC m=+0.286126006 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, architecture=x86_64) Feb 23 03:13:48 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:13:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:13:51 localhost podman[77088]: 2026-02-23 08:13:51.000461971 +0000 UTC m=+0.077601543 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, 
build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.buildah.version=1.41.5, release=1766032510) Feb 23 03:13:51 localhost podman[77088]: 2026-02-23 08:13:51.215862867 +0000 UTC m=+0.293002429 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.5, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:13:51 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:14:01 localhost systemd[1]: session-27.scope: Deactivated successfully. Feb 23 03:14:01 localhost systemd[1]: session-27.scope: Consumed 2.924s CPU time. Feb 23 03:14:01 localhost systemd-logind[759]: Session 27 logged out. Waiting for processes to exit. Feb 23 03:14:01 localhost systemd-logind[759]: Removed session 27. Feb 23 03:14:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. 
Feb 23 03:14:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:14:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:14:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:14:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:14:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:14:14 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:14:14 localhost recover_tripleo_nova_virtqemud[77151]: 62457 Feb 23 03:14:14 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:14:14 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:14:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:14:15 localhost systemd[1]: tmp-crun.PO7d2b.mount: Deactivated successfully. 
Feb 23 03:14:15 localhost podman[77128]: 2026-02-23 08:14:15.058078948 +0000 UTC m=+0.114539320 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:14:15 localhost podman[77117]: 2026-02-23 08:14:15.066948231 +0000 UTC m=+0.140263752 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.component=openstack-collectd-container, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) 
Feb 23 03:14:15 localhost podman[77124]: 2026-02-23 08:14:15.021079187 +0000 UTC m=+0.089982013 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 23 03:14:15 localhost podman[77128]: 2026-02-23 08:14:15.077545308 +0000 UTC m=+0.134005680 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20260112.1, architecture=x86_64, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
io.buildah.version=1.41.5, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., release=1766032510) Feb 23 03:14:15 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:14:15 localhost podman[77178]: 2026-02-23 08:14:15.13057241 +0000 UTC m=+0.138479477 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, batch=17.1_20260112.1, io.openshift.expose-services=, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 23 03:14:15 localhost podman[77119]: 2026-02-23 08:14:15.151483005 +0000 UTC m=+0.223572889 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, batch=17.1_20260112.1, container_name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.13, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, release=1766032510, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:14:15 localhost podman[77117]: 2026-02-23 08:14:15.150199985 +0000 UTC m=+0.223515536 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:14:15 localhost podman[77118]: 2026-02-23 08:14:15.161474103 +0000 UTC m=+0.238721026 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, batch=17.1_20260112.1, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, architecture=x86_64, version=17.1.13, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git) Feb 23 03:14:15 localhost podman[77124]: 2026-02-23 08:14:15.20420512 +0000 UTC m=+0.273107956 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, 
build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, version=17.1.13, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:14:15 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:14:15 localhost podman[77120]: 2026-02-23 08:14:15.215348613 +0000 UTC m=+0.282325659 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, container_name=logrotate_crond, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc.) 
Feb 23 03:14:15 localhost podman[77120]: 2026-02-23 08:14:15.22758567 +0000 UTC m=+0.294562716 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.13, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, batch=17.1_20260112.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:14:15 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:14:15 localhost podman[77118]: 2026-02-23 08:14:15.247620677 +0000 UTC m=+0.324867620 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, container_name=iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step3, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, version=17.1.13, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5) Feb 23 03:14:15 localhost podman[77119]: 2026-02-23 08:14:15.259675109 +0000 UTC m=+0.331764993 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-type=git, distribution-scope=public, 
managed_by=tripleo_ansible, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute) Feb 23 03:14:15 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:14:15 localhost podman[77119]: unhealthy Feb 23 03:14:15 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:14:15 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed with result 'exit-code'. Feb 23 03:14:15 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:14:15 localhost podman[77178]: 2026-02-23 08:14:15.449905829 +0000 UTC m=+0.457812986 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, release=1766032510, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:14:15 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:14:16 localhost systemd[1]: tmp-crun.udSHAA.mount: Deactivated successfully. Feb 23 03:14:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:14:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:14:19 localhost systemd[1]: tmp-crun.bn2Ueo.mount: Deactivated successfully. 
Feb 23 03:14:19 localhost podman[77270]: 2026-02-23 08:14:19.008719212 +0000 UTC m=+0.083928577 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 23 03:14:19 localhost podman[77271]: 2026-02-23 08:14:19.061624852 +0000 UTC m=+0.133234426 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4) Feb 23 03:14:19 localhost podman[77270]: 2026-02-23 08:14:19.084745164 +0000 UTC m=+0.159954549 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, 
batch=17.1_20260112.1, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:14:19 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:14:19 localhost podman[77271]: 2026-02-23 08:14:19.111899481 +0000 UTC m=+0.183509035 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:14:19 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:14:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:14:22 localhost systemd[1]: tmp-crun.SXt4CB.mount: Deactivated successfully. 
Feb 23 03:14:22 localhost podman[77318]: 2026-02-23 08:14:22.023665558 +0000 UTC m=+0.097303499 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.5, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 23 03:14:22 localhost podman[77318]: 2026-02-23 08:14:22.230033026 +0000 UTC m=+0.303670967 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13) Feb 23 03:14:22 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:14:26 localhost sshd[77347]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. 
Feb 23 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:14:46 localhost systemd[1]: tmp-crun.8rz7Ba.mount: Deactivated successfully. Feb 23 03:14:46 localhost podman[77426]: 2026-02-23 08:14:46.058196044 +0000 UTC m=+0.124516578 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_id=tripleo_step4, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:14:46 localhost systemd[1]: tmp-crun.6yfP8k.mount: Deactivated successfully. 
Feb 23 03:14:46 localhost podman[77444]: 2026-02-23 08:14:46.213588511 +0000 UTC m=+0.260379793 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, container_name=ceilometer_agent_compute, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:14:46 localhost podman[77427]: 2026-02-23 08:14:46.171947378 +0000 UTC m=+0.236140406 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, release=1766032510, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, container_name=iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public) Feb 23 03:14:46 localhost podman[77428]: 2026-02-23 08:14:46.186693732 +0000 UTC m=+0.240443328 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, tcib_managed=true, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 23 03:14:46 localhost podman[77427]: 2026-02-23 08:14:46.250263561 +0000 UTC m=+0.314456579 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 23 03:14:46 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:14:46 localhost podman[77444]: 2026-02-23 08:14:46.25899594 +0000 UTC m=+0.305787202 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:14:46 localhost podman[77428]: 2026-02-23 08:14:46.274782186 +0000 UTC m=+0.328531672 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, container_name=nova_compute, io.buildah.version=1.41.5, distribution-scope=public, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510) Feb 23 03:14:46 localhost podman[77428]: unhealthy Feb 23 03:14:46 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:14:46 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:14:46 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed with result 'exit-code'. Feb 23 03:14:46 localhost podman[77425]: 2026-02-23 08:14:46.254162341 +0000 UTC m=+0.320324180 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, vcs-type=git, name=rhosp-rhel9/openstack-collectd, container_name=collectd, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:14:46 localhost podman[77448]: 2026-02-23 08:14:46.310618501 +0000 UTC m=+0.354726740 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, release=1766032510, 
managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:14:46 localhost podman[77429]: 2026-02-23 08:14:46.124674311 +0000 UTC m=+0.178042325 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, batch=17.1_20260112.1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, version=17.1.13, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, vcs-type=git) Feb 23 03:14:46 localhost podman[77448]: 2026-02-23 08:14:46.337825748 +0000 UTC m=+0.381933987 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, io.buildah.version=1.41.5, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true) Feb 23 03:14:46 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:14:46 localhost podman[77429]: 2026-02-23 08:14:46.361703754 +0000 UTC m=+0.415071698 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, version=17.1.13, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, release=1766032510, container_name=logrotate_crond, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:14:46 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:14:46 localhost podman[77425]: 2026-02-23 08:14:46.391236494 +0000 UTC m=+0.457398263 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, vcs-type=git, vendor=Red Hat, Inc., 
konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z) Feb 23 03:14:46 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:14:46 localhost podman[77426]: 2026-02-23 08:14:46.424909901 +0000 UTC m=+0.491230465 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, release=1766032510, build-date=2026-01-12T23:32:04Z) Feb 23 03:14:46 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:14:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:14:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:14:50 localhost podman[77577]: 2026-02-23 08:14:50.013955236 +0000 UTC m=+0.083129023 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, batch=17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:14:50 localhost podman[77577]: 2026-02-23 08:14:50.056760404 +0000 UTC m=+0.125934181 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.5, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, container_name=ovn_metadata_agent, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, 
org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 23 03:14:50 localhost systemd[1]: tmp-crun.d1PVXs.mount: Deactivated successfully. Feb 23 03:14:50 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:14:50 localhost podman[77576]: 2026-02-23 08:14:50.074596093 +0000 UTC m=+0.145566176 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, release=1766032510, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, container_name=ovn_controller) Feb 23 03:14:50 localhost podman[77576]: 2026-02-23 08:14:50.102654688 +0000 UTC m=+0.173624801 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible, 
com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13) Feb 23 03:14:50 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:14:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:14:53 localhost podman[77624]: 2026-02-23 08:14:53.011052541 +0000 UTC m=+0.086958780 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510) Feb 23 03:14:53 localhost podman[77624]: 2026-02-23 08:14:53.206344868 +0000 UTC m=+0.282251047 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1766032510, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 
'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5) Feb 23 03:14:53 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:15:15 localhost sshd[77655]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:15:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:15:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:15:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:15:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:15:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:15:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:15:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:15:17 localhost systemd[1]: tmp-crun.QHM2RH.mount: Deactivated successfully. Feb 23 03:15:17 localhost podman[77659]: 2026-02-23 08:15:17.035413074 +0000 UTC m=+0.097885007 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, config_id=tripleo_step3, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, container_name=iscsid, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack 
Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, name=rhosp-rhel9/openstack-iscsid) Feb 23 03:15:17 localhost podman[77659]: 2026-02-23 08:15:17.041608665 +0000 UTC m=+0.104080588 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, 
build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc.) Feb 23 03:15:17 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:15:17 localhost podman[77677]: 2026-02-23 08:15:17.050641003 +0000 UTC m=+0.094372718 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, version=17.1.13, io.buildah.version=1.41.5, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=) Feb 23 03:15:17 localhost podman[77677]: 2026-02-23 08:15:17.099537519 +0000 UTC m=+0.143269244 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:15:17 localhost podman[77664]: 2026-02-23 08:15:17.102037807 +0000 UTC m=+0.150954622 container health_status 
87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:15:17 localhost podman[77658]: 2026-02-23 08:15:17.133183796 +0000 UTC m=+0.196579037 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, url=https://www.redhat.com, release=1766032510, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, 
io.openshift.expose-services=, version=17.1.13) Feb 23 03:15:17 localhost podman[77664]: 2026-02-23 08:15:17.135639562 +0000 UTC m=+0.184556367 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, vcs-type=git, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 23 03:15:17 localhost podman[77664]: unhealthy Feb 23 03:15:17 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:15:17 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed with result 'exit-code'. Feb 23 03:15:17 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:15:17 localhost podman[77684]: 2026-02-23 08:15:17.086671604 +0000 UTC m=+0.126764738 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, release=1766032510, url=https://www.redhat.com) Feb 23 03:15:17 localhost podman[77684]: 2026-02-23 08:15:17.219867857 +0000 UTC m=+0.259960981 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, release=1766032510, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:15:17 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:15:17 localhost podman[77657]: 2026-02-23 08:15:17.189506541 +0000 UTC m=+0.256810682 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, vcs-type=git, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64) Feb 23 03:15:17 localhost podman[77657]: 2026-02-23 08:15:17.272759626 +0000 UTC m=+0.340063777 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, config_id=tripleo_step3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, container_name=collectd, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container) Feb 23 03:15:17 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:15:17 localhost podman[77675]: 2026-02-23 08:15:17.224377206 +0000 UTC m=+0.278789421 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, container_name=logrotate_crond, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:15:17 localhost podman[77675]: 2026-02-23 08:15:17.354305499 +0000 UTC m=+0.408717774 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4) Feb 23 03:15:17 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:15:17 localhost podman[77658]: 2026-02-23 08:15:17.470083756 +0000 UTC m=+0.533479057 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, batch=17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.buildah.version=1.41.5, architecture=x86_64, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:15:17 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:15:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:15:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:15:20 localhost podman[77804]: 2026-02-23 08:15:20.994736226 +0000 UTC m=+0.073743593 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent) Feb 23 03:15:21 localhost systemd[1]: tmp-crun.AwrCec.mount: Deactivated successfully. 
Feb 23 03:15:21 localhost podman[77804]: 2026-02-23 08:15:21.047920025 +0000 UTC m=+0.126927332 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1766032510, architecture=x86_64, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, config_id=tripleo_step4) Feb 23 03:15:21 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:15:21 localhost podman[77803]: 2026-02-23 08:15:21.052475165 +0000 UTC m=+0.129856722 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, tcib_managed=true, build-date=2026-01-12T22:36:40Z, architecture=x86_64) Feb 23 03:15:21 localhost podman[77803]: 2026-02-23 08:15:21.135894585 +0000 UTC m=+0.213276152 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z) Feb 23 03:15:21 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:15:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:15:23 localhost podman[77849]: 2026-02-23 08:15:23.962074745 +0000 UTC m=+0.045417100 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, version=17.1.13) Feb 23 03:15:24 localhost podman[77849]: 2026-02-23 08:15:24.130728311 +0000 UTC m=+0.214070686 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, batch=17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 23 03:15:24 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:15:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:15:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:15:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:15:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:15:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:15:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:15:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:15:48 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:15:48 localhost recover_tripleo_nova_virtqemud[78002]: 62457 Feb 23 03:15:48 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:15:48 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 23 03:15:48 localhost podman[77975]: 2026-02-23 08:15:48.402344189 +0000 UTC m=+0.132499373 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:15:48 localhost podman[77955]: 2026-02-23 08:15:48.361016415 +0000 UTC m=+0.105603824 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, config_id=tripleo_step3, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, tcib_managed=true) Feb 23 03:15:48 localhost podman[77975]: 2026-02-23 08:15:48.450749191 +0000 UTC m=+0.180904435 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.13, config_id=tripleo_step4, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
build-date=2026-01-12T23:07:47Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:15:48 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:15:48 localhost podman[77986]: 2026-02-23 08:15:48.464108852 +0000 UTC m=+0.189630213 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, version=17.1.13, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 23 03:15:48 localhost podman[77968]: 2026-02-23 08:15:48.333611122 +0000 UTC m=+0.072465624 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
com.redhat.component=openstack-cron-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
distribution-scope=public, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z) Feb 23 03:15:48 localhost podman[77956]: 2026-02-23 08:15:48.386871973 +0000 UTC m=+0.131772111 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, vendor=Red Hat, Inc.) Feb 23 03:15:48 localhost podman[77986]: 2026-02-23 08:15:48.492733044 +0000 UTC m=+0.218254415 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.13, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:15:48 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:15:48 localhost podman[77968]: 2026-02-23 08:15:48.516758064 +0000 UTC m=+0.255612556 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vcs-type=git, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, io.openshift.expose-services=, release=1766032510, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:15:48 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:15:48 localhost podman[77955]: 2026-02-23 08:15:48.540534657 +0000 UTC m=+0.285122016 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:15:48 localhost podman[77957]: 2026-02-23 08:15:48.445910881 +0000 UTC m=+0.188087595 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, container_name=iscsid, tcib_managed=true, config_id=tripleo_step3, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=) Feb 23 03:15:48 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:15:48 localhost podman[77957]: 2026-02-23 08:15:48.575639468 +0000 UTC m=+0.317816142 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3) Feb 23 03:15:48 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:15:48 localhost podman[77958]: 2026-02-23 08:15:48.617095225 +0000 UTC m=+0.355504774 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, vcs-type=git, config_id=tripleo_step5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Feb 23 03:15:48 localhost podman[77958]: 2026-02-23 08:15:48.655703445 +0000 UTC m=+0.394112994 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, io.openshift.expose-services=, io.buildah.version=1.41.5, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Feb 23 03:15:48 localhost podman[77958]: unhealthy Feb 23 03:15:48 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:15:48 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed with result 'exit-code'. 
Feb 23 03:15:48 localhost podman[77956]: 2026-02-23 08:15:48.716789226 +0000 UTC m=+0.461689334 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, container_name=nova_migration_target, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:15:48 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:15:49 localhost systemd[1]: tmp-crun.Mri1xD.mount: Deactivated successfully. Feb 23 03:15:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:15:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:15:51 localhost podman[78108]: 2026-02-23 08:15:51.993516718 +0000 UTC m=+0.068262353 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, release=1766032510, io.buildah.version=1.41.5, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent) Feb 23 03:15:52 localhost podman[78108]: 2026-02-23 08:15:52.029650922 +0000 UTC m=+0.104396537 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, managed_by=tripleo_ansible) Feb 23 03:15:52 localhost systemd[1]: tmp-crun.2a5mcw.mount: Deactivated successfully. Feb 23 03:15:52 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:15:52 localhost podman[78107]: 2026-02-23 08:15:52.055334173 +0000 UTC m=+0.128547772 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller) Feb 23 03:15:52 localhost podman[78107]: 2026-02-23 08:15:52.105825378 +0000 UTC m=+0.179039047 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, distribution-scope=public, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:15:52 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:15:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:15:54 localhost podman[78156]: 2026-02-23 08:15:54.992542965 +0000 UTC m=+0.070790312 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=metrics_qdr, tcib_managed=true, config_id=tripleo_step1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, version=17.1.13, io.buildah.version=1.41.5, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z) Feb 23 03:15:55 localhost podman[78156]: 2026-02-23 08:15:55.189886215 +0000 UTC m=+0.268133602 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, architecture=x86_64, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 23 03:15:55 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:16:03 localhost sshd[78185]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:16:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:16:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:16:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. 
Feb 23 03:16:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:16:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:16:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:16:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:16:18 localhost systemd[1]: tmp-crun.o6KekY.mount: Deactivated successfully. Feb 23 03:16:19 localhost podman[78260]: 2026-02-23 08:16:19.03669364 +0000 UTC m=+0.104659770 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, 
batch=17.1_20260112.1) Feb 23 03:16:19 localhost podman[78258]: 2026-02-23 08:16:19.002767056 +0000 UTC m=+0.072620113 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, version=17.1.13, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, vcs-type=git, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, container_name=nova_migration_target, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5) Feb 23 03:16:19 localhost podman[78260]: 2026-02-23 08:16:19.054088979 +0000 UTC m=+0.122055080 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., version=17.1.13, tcib_managed=true, config_id=tripleo_step5, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
io.openshift.expose-services=) Feb 23 03:16:19 localhost podman[78262]: 2026-02-23 08:16:19.092140878 +0000 UTC m=+0.148090742 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, release=1766032510, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2026-01-12T23:07:47Z) Feb 23 03:16:19 localhost podman[78262]: 2026-02-23 08:16:19.112733156 +0000 UTC m=+0.168683010 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, tcib_managed=true, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, managed_by=tripleo_ansible, release=1766032510, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:16:19 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:16:19 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:16:19 localhost podman[78259]: 2026-02-23 08:16:19.189596107 +0000 UTC m=+0.257624389 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:16:19 localhost podman[78257]: 2026-02-23 08:16:19.278344732 +0000 UTC m=+0.345152767 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., release=1766032510, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=collectd, distribution-scope=public) Feb 23 03:16:19 localhost podman[78272]: 2026-02-23 08:16:19.290914204 +0000 UTC m=+0.340819784 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.41.5, vendor=Red Hat, Inc., batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, tcib_managed=true, vcs-type=git) Feb 23 03:16:19 localhost podman[78259]: 2026-02-23 08:16:19.307893211 +0000 UTC m=+0.375921573 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, 
distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, release=1766032510, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:16:19 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:16:19 localhost podman[78261]: 2026-02-23 08:16:19.337358819 +0000 UTC m=+0.394842880 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.13, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1) Feb 23 03:16:19 localhost podman[78261]: 2026-02-23 08:16:19.34557849 +0000 UTC m=+0.403062571 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, build-date=2026-01-12T22:10:15Z, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:16:19 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:16:19 localhost podman[78257]: 2026-02-23 08:16:19.360496644 +0000 UTC m=+0.427304669 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, release=1766032510, 
io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible) Feb 23 03:16:19 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:16:19 localhost podman[78258]: 2026-02-23 08:16:19.375684677 +0000 UTC m=+0.445537754 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, architecture=x86_64, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, batch=17.1_20260112.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5) Feb 23 03:16:19 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. 
Feb 23 03:16:19 localhost podman[78272]: 2026-02-23 08:16:19.411450026 +0000 UTC m=+0.461355616 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:16:19 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:16:20 localhost systemd[1]: tmp-crun.cgz1Nq.mount: Deactivated successfully. Feb 23 03:16:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:16:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:16:23 localhost podman[78438]: 2026-02-23 08:16:23.038990723 +0000 UTC m=+0.102888785 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, version=17.1.13, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, maintainer=OpenStack 
TripleO Team, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Feb 23 03:16:23 localhost podman[78438]: 2026-02-23 08:16:23.067695128 +0000 UTC m=+0.131593200 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., container_name=ovn_controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:16:23 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:16:23 localhost podman[78439]: 2026-02-23 08:16:23.092526104 +0000 UTC m=+0.156667394 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, batch=17.1_20260112.1, distribution-scope=public, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.41.5, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:16:23 localhost podman[78439]: 2026-02-23 08:16:23.14000426 +0000 UTC m=+0.204145570 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=ovn_metadata_agent, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent) Feb 23 03:16:23 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:16:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:16:26 localhost podman[78487]: 2026-02-23 08:16:26.017083232 +0000 UTC m=+0.088172857 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, release=1766032510, vcs-type=git) Feb 23 03:16:26 localhost podman[78487]: 2026-02-23 08:16:26.220942573 +0000 UTC m=+0.292032158 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5) Feb 23 03:16:26 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:16:26 localhost systemd[1]: libpod-4159e7dba2b21c2922ed3e54a2f6cf8769264134505a634abba4473c26468e87.scope: Deactivated successfully. 
Feb 23 03:16:26 localhost podman[78516]: 2026-02-23 08:16:26.659734902 +0000 UTC m=+0.049064737 container died 4159e7dba2b21c2922ed3e54a2f6cf8769264134505a634abba4473c26468e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, container_name=nova_wait_for_compute_service) Feb 23 03:16:26 localhost systemd[1]: tmp-crun.hs42eV.mount: Deactivated successfully. Feb 23 03:16:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4159e7dba2b21c2922ed3e54a2f6cf8769264134505a634abba4473c26468e87-userdata-shm.mount: Deactivated successfully. Feb 23 03:16:26 localhost podman[78516]: 2026-02-23 08:16:26.701157224 +0000 UTC m=+0.090486999 container cleanup 4159e7dba2b21c2922ed3e54a2f6cf8769264134505a634abba4473c26468e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_wait_for_compute_service, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, release=1766032510, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:16:26 localhost systemd[1]: libpod-conmon-4159e7dba2b21c2922ed3e54a2f6cf8769264134505a634abba4473c26468e87.scope: Deactivated successfully. 
Feb 23 03:16:26 localhost python3[76498]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=d8e86b11aed37635c57249fefb951044 --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 23 03:16:27 localhost systemd[1]: var-lib-containers-storage-overlay-32150165319e4041e46aa8b7e12f6ac7177181910ca12f0c93249648b47050ee-merged.mount: Deactivated successfully. Feb 23 03:16:27 localhost python3[78568]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:16:27 localhost python3[78584]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 23 03:16:28 localhost python3[78645]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771834587.684706-118909-36409919022750/source dest=/etc/systemd/system/tripleo_nova_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:16:28 localhost 
python3[78661]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 23 03:16:28 localhost systemd[1]: Reloading. Feb 23 03:16:28 localhost systemd-sysv-generator[78686]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:16:28 localhost systemd-rc-local-generator[78683]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:16:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:16:29 localhost python3[78712]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 03:16:29 localhost systemd[1]: Reloading. Feb 23 03:16:29 localhost systemd-sysv-generator[78745]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:16:29 localhost systemd-rc-local-generator[78742]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:16:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:16:30 localhost systemd[1]: Starting nova_compute container... 
Feb 23 03:16:30 localhost tripleo-start-podman-container[78753]: Creating additional drop-in dependency for "nova_compute" (87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea) Feb 23 03:16:30 localhost systemd[1]: Reloading. Feb 23 03:16:30 localhost systemd-sysv-generator[78816]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:16:30 localhost systemd-rc-local-generator[78811]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:16:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:16:30 localhost systemd[1]: Started nova_compute container. Feb 23 03:16:31 localhost python3[78852]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:16:32 localhost python3[78973]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005626465 step=5 update_config_hash_only=False Feb 23 03:16:33 localhost python3[78989]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None 
src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 03:16:33 localhost python3[79005]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True Feb 23 03:16:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:16:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:16:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:16:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:16:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:16:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:16:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:16:50 localhost systemd[1]: tmp-crun.B6xypT.mount: Deactivated successfully. 
Feb 23 03:16:50 localhost podman[79087]: 2026-02-23 08:16:50.015256089 +0000 UTC m=+0.081748661 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, container_name=logrotate_crond, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, config_id=tripleo_step4) Feb 23 03:16:50 localhost systemd[1]: tmp-crun.1eoW7K.mount: Deactivated successfully. Feb 23 03:16:50 localhost podman[79092]: 2026-02-23 08:16:50.035145725 +0000 UTC m=+0.093852481 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible) Feb 23 03:16:50 localhost podman[79087]: 2026-02-23 08:16:50.0527303 +0000 UTC m=+0.119222792 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': 
{'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, 
vcs-type=git, tcib_managed=true, build-date=2026-01-12T22:10:15Z) Feb 23 03:16:50 localhost podman[79084]: 2026-02-23 08:16:50.061917081 +0000 UTC m=+0.133022544 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_migration_target, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510) Feb 23 03:16:50 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:16:50 localhost podman[79092]: 2026-02-23 08:16:50.077888107 +0000 UTC m=+0.136594893 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, container_name=ceilometer_agent_compute, release=1766032510, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:16:50 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:16:50 localhost podman[79086]: 2026-02-23 08:16:50.128944672 +0000 UTC m=+0.187443981 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, release=1766032510, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Feb 23 03:16:50 localhost podman[79083]: 2026-02-23 08:16:50.165503357 +0000 UTC m=+0.237756405 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, config_id=tripleo_step3, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, container_name=collectd, vcs-type=git, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:16:50 localhost podman[79086]: 2026-02-23 08:16:50.176961206 +0000 UTC m=+0.235460525 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13) Feb 23 03:16:50 localhost podman[79083]: 2026-02-23 08:16:50.177311836 +0000 UTC m=+0.249564884 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, version=17.1.13, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, url=https://www.redhat.com, 
io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible) Feb 23 03:16:50 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:16:50 localhost podman[79098]: 2026-02-23 08:16:50.222706879 +0000 UTC m=+0.285187120 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Feb 23 03:16:50 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:16:50 localhost podman[79085]: 2026-02-23 08:16:50.270800794 +0000 UTC m=+0.339722951 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, io.buildah.version=1.41.5, version=17.1.13, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, tcib_managed=true) Feb 23 03:16:50 localhost podman[79098]: 2026-02-23 08:16:50.282806769 +0000 UTC m=+0.345286940 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, release=1766032510) Feb 23 03:16:50 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:16:50 localhost podman[79085]: 2026-02-23 08:16:50.301491509 +0000 UTC m=+0.370413636 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public) Feb 23 03:16:50 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:16:50 localhost podman[79084]: 2026-02-23 08:16:50.427334463 +0000 UTC m=+0.498439986 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, distribution-scope=public, build-date=2026-01-12T23:32:04Z, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:16:50 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:16:50 localhost sshd[79237]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:16:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:16:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:16:54 localhost podman[79239]: 2026-02-23 08:16:54.021257476 +0000 UTC m=+0.087622190 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, version=17.1.13, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public) Feb 23 03:16:54 localhost podman[79240]: 2026-02-23 08:16:54.07093795 +0000 UTC m=+0.136078517 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.13, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 23 03:16:54 localhost podman[79239]: 2026-02-23 08:16:54.102581073 +0000 UTC m=+0.168945767 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1766032510, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z) Feb 23 03:16:54 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:16:54 localhost podman[79240]: 2026-02-23 08:16:54.148518173 +0000 UTC m=+0.213658700 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, distribution-scope=public, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git) Feb 23 03:16:54 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:16:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:16:57 localhost podman[79283]: 2026-02-23 08:16:57.02507556 +0000 UTC m=+0.092870200 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, url=https://www.redhat.com, container_name=metrics_qdr, release=1766032510, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, architecture=x86_64, version=17.1.13, description=Red Hat 
OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container) Feb 23 03:16:57 localhost podman[79283]: 2026-02-23 08:16:57.221859875 +0000 UTC m=+0.289654465 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, release=1766032510, 
version=17.1.13, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:16:57 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:16:59 localhost sshd[79312]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:16:59 localhost systemd-logind[759]: New session 33 of user zuul. Feb 23 03:16:59 localhost systemd[1]: Started Session 33 of User zuul. 
Feb 23 03:17:00 localhost python3[79421]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 03:17:07 localhost python3[79684]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None Feb 23 03:17:15 localhost python3[79777]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None Feb 23 03:17:15 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled Feb 23 03:17:15 localhost systemd-journald[48305]: Field hash table of /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal has a fill level at 81.1 (270 of 333 items), suggesting rotation. 
Feb 23 03:17:15 localhost systemd-journald[48305]: /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 23 03:17:15 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 23 03:17:15 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 23 03:17:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:17:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:17:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:17:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:17:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:17:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:17:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:17:21 localhost podman[79848]: 2026-02-23 08:17:21.01954664 +0000 UTC m=+0.088373313 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, io.buildah.version=1.41.5, batch=17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 23 03:17:21 localhost podman[79848]: 2026-02-23 08:17:21.054686121 +0000 UTC m=+0.123512794 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, vcs-type=git, version=17.1.13, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:17:21 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:17:21 localhost podman[79846]: 2026-02-23 08:17:21.067503081 +0000 UTC m=+0.136026965 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.13, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 23 03:17:21 localhost podman[79846]: 2026-02-23 08:17:21.073868945 +0000 UTC m=+0.142392819 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, batch=17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:17:21 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:17:21 localhost podman[79849]: 2026-02-23 08:17:21.12423635 +0000 UTC m=+0.192346871 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, config_id=tripleo_step5, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:17:21 localhost podman[79871]: 2026-02-23 08:17:21.177157122 +0000 UTC m=+0.235260798 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 23 03:17:21 localhost podman[79851]: 2026-02-23 08:17:21.23224576 +0000 UTC m=+0.292317856 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:17:21 localhost podman[79849]: 2026-02-23 08:17:21.249247748 +0000 UTC m=+0.317358309 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, version=17.1.13, vcs-type=git, config_id=tripleo_step5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, 
cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:17:21 localhost podman[79847]: 2026-02-23 08:17:21.283016717 +0000 UTC m=+0.352032506 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, container_name=nova_migration_target, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:17:21 localhost podman[79850]: 2026-02-23 08:17:21.328938436 +0000 UTC m=+0.390150417 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.expose-services=, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:17:21 localhost podman[79851]: 2026-02-23 08:17:21.335330001 +0000 UTC m=+0.395402107 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:17:21 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:17:21 localhost podman[79850]: 2026-02-23 08:17:21.366979125 +0000 UTC m=+0.428191156 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:17:21 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:17:21 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:17:21 localhost podman[79871]: 2026-02-23 08:17:21.406375695 +0000 UTC m=+0.464479481 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, managed_by=tripleo_ansible) Feb 23 03:17:21 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:17:21 localhost podman[79847]: 2026-02-23 08:17:21.643136948 +0000 UTC m=+0.712152787 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 23 03:17:21 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:17:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:17:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:17:24 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:17:24 localhost recover_tripleo_nova_virtqemud[80018]: 62457 Feb 23 03:17:24 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:17:24 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:17:25 localhost podman[80005]: 2026-02-23 08:17:25.007256328 +0000 UTC m=+0.078857453 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, vcs-type=git, release=1766032510, architecture=x86_64, io.openshift.expose-services=, 
build-date=2026-01-12T22:36:40Z, version=17.1.13, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 23 03:17:25 localhost podman[80005]: 2026-02-23 08:17:25.027798544 +0000 UTC m=+0.099399719 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, container_name=ovn_controller, maintainer=OpenStack TripleO Team, 
architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., version=17.1.13, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, vcs-type=git) Feb 23 03:17:25 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:17:25 localhost podman[80006]: 2026-02-23 08:17:25.115137645 +0000 UTC m=+0.183885413 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:17:25 localhost podman[80006]: 2026-02-23 08:17:25.161970842 +0000 UTC m=+0.230718690 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, tcib_managed=true, io.openshift.expose-services=, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, 
name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 23 03:17:25 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:17:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:17:28 localhost systemd[1]: tmp-crun.8zOZBl.mount: Deactivated successfully. Feb 23 03:17:28 localhost podman[80055]: 2026-02-23 08:17:28.015754486 +0000 UTC m=+0.088236320 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.buildah.version=1.41.5, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, version=17.1.13, url=https://www.redhat.com, release=1766032510, container_name=metrics_qdr, distribution-scope=public) Feb 23 03:17:28 localhost podman[80055]: 2026-02-23 08:17:28.192746067 +0000 UTC m=+0.265227881 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step1, release=1766032510, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:17:28 localhost systemd[1]: 
779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:17:38 localhost sshd[80144]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:17:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:17:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:17:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:17:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:17:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:17:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:17:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:17:52 localhost systemd[1]: tmp-crun.ApXoeS.mount: Deactivated successfully. 
Feb 23 03:17:52 localhost podman[80162]: 2026-02-23 08:17:52.024312916 +0000 UTC m=+0.089628552 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:17:52 localhost podman[80205]: 2026-02-23 08:17:52.073845295 +0000 UTC m=+0.108967681 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, distribution-scope=public, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 23 03:17:52 localhost podman[80181]: 2026-02-23 08:17:52.026227765 +0000 UTC m=+0.073077768 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, container_name=ceilometer_agent_compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, version=17.1.13, tcib_managed=true, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 23 03:17:52 localhost podman[80161]: 2026-02-23 08:17:52.057292051 +0000 UTC m=+0.124087232 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.13, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team) Feb 23 03:17:52 localhost podman[80175]: 2026-02-23 08:17:52.112631616 +0000 UTC m=+0.165637867 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 
'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:17:52 localhost podman[80175]: 2026-02-23 08:17:52.11961918 +0000 UTC m=+0.172625411 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.openshift.expose-services=, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:17:52 localhost podman[80205]: 2026-02-23 08:17:52.125074706 +0000 UTC m=+0.160197092 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, build-date=2026-01-12T23:07:30Z, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:17:52 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:17:52 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:17:52 localhost podman[80161]: 2026-02-23 08:17:52.135684759 +0000 UTC m=+0.202479940 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13) Feb 23 03:17:52 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:17:52 localhost podman[80181]: 2026-02-23 08:17:52.15901595 +0000 UTC m=+0.205865953 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, release=1766032510, version=17.1.13) Feb 23 03:17:52 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:17:52 localhost podman[80173]: 2026-02-23 08:17:52.159772952 +0000 UTC m=+0.216060783 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 
5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, version=17.1.13, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, config_id=tripleo_step5, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Feb 23 03:17:52 localhost podman[80173]: 2026-02-23 08:17:52.239265315 +0000 UTC m=+0.295553146 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20260112.1, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': 
{'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, io.buildah.version=1.41.5, vendor=Red Hat, Inc.) Feb 23 03:17:52 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:17:52 localhost podman[80163]: 2026-02-23 08:17:52.290249708 +0000 UTC m=+0.349509049 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, version=17.1.13, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Feb 23 03:17:52 localhost podman[80163]: 2026-02-23 08:17:52.299828849 +0000 UTC m=+0.359088220 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, container_name=iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Feb 23 03:17:52 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:17:52 localhost podman[80162]: 2026-02-23 08:17:52.353763443 +0000 UTC m=+0.419079079 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, release=1766032510, io.openshift.expose-services=, io.buildah.version=1.41.5, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, container_name=nova_migration_target, batch=17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute) Feb 23 03:17:52 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:17:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:17:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:17:55 localhost systemd[1]: tmp-crun.zuqZ2r.mount: Deactivated successfully. 
Feb 23 03:17:55 localhost podman[80321]: 2026-02-23 08:17:55.988338114 +0000 UTC m=+0.067751155 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1766032510, url=https://www.redhat.com, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.41.5, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=) Feb 23 03:17:56 localhost podman[80322]: 2026-02-23 08:17:56.007534088 +0000 UTC m=+0.083120994 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, build-date=2026-01-12T22:56:19Z) Feb 23 03:17:56 localhost podman[80321]: 2026-02-23 08:17:56.039895314 +0000 UTC m=+0.119308335 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4) Feb 23 03:17:56 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:17:56 localhost podman[80322]: 2026-02-23 08:17:56.095276911 +0000 UTC m=+0.170863816 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, tcib_managed=true, release=1766032510, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.5) Feb 23 03:17:56 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:17:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:17:59 localhost podman[80365]: 2026-02-23 08:17:59.004777252 +0000 UTC m=+0.078155652 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Feb 23 03:17:59 localhost podman[80365]: 2026-02-23 08:17:59.216823081 +0000 UTC m=+0.290201441 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.13, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1) Feb 23 03:17:59 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:18:16 localhost systemd[1]: session-33.scope: Deactivated successfully. Feb 23 03:18:16 localhost systemd[1]: session-33.scope: Consumed 5.928s CPU time. Feb 23 03:18:16 localhost systemd-logind[759]: Session 33 logged out. Waiting for processes to exit. Feb 23 03:18:16 localhost systemd-logind[759]: Removed session 33. Feb 23 03:18:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. 
Feb 23 03:18:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:18:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:18:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:18:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:18:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:18:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:18:23 localhost podman[80439]: 2026-02-23 08:18:23.019647355 +0000 UTC m=+0.087843597 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, release=1766032510, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, config_id=tripleo_step4, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:18:23 localhost podman[80438]: 2026-02-23 08:18:23.086949825 +0000 UTC m=+0.160766248 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, batch=17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.13, io.buildah.version=1.41.5, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc.) Feb 23 03:18:23 localhost podman[80438]: 2026-02-23 08:18:23.094869246 +0000 UTC m=+0.168685719 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=) Feb 23 03:18:23 localhost podman[80440]: 2026-02-23 08:18:23.131825932 +0000 UTC m=+0.199236690 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, 
container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, vendor=Red Hat, Inc., release=1766032510, com.redhat.component=openstack-iscsid-container, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 
03:18:23 localhost podman[80458]: 2026-02-23 08:18:23.144447197 +0000 UTC m=+0.196560359 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, version=17.1.13, architecture=x86_64, build-date=2026-01-12T23:07:47Z, distribution-scope=public, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 23 03:18:23 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:18:23 localhost podman[80470]: 2026-02-23 08:18:23.188924642 +0000 UTC m=+0.233895227 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, vcs-type=git, maintainer=OpenStack TripleO Team) Feb 23 03:18:23 localhost podman[80440]: 2026-02-23 08:18:23.219241505 +0000 UTC m=+0.286652263 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., tcib_managed=true, 
org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, release=1766032510, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, container_name=iscsid, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 23 03:18:23 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:18:23 localhost podman[80446]: 2026-02-23 08:18:23.226163997 +0000 UTC m=+0.288245893 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, container_name=nova_compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc.) 
Feb 23 03:18:23 localhost podman[80446]: 2026-02-23 08:18:23.239957457 +0000 UTC m=+0.302039353 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, io.buildah.version=1.41.5, architecture=x86_64, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Feb 23 03:18:23 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:18:23 localhost podman[80470]: 2026-02-23 08:18:23.260703728 +0000 UTC m=+0.305674283 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vcs-type=git, vendor=Red Hat, Inc., release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:18:23 localhost podman[80458]: 2026-02-23 08:18:23.269845278 +0000 UTC m=+0.321958430 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, vcs-type=git, io.buildah.version=1.41.5, managed_by=tripleo_ansible, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64) Feb 23 03:18:23 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:18:23 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:18:23 localhost podman[80450]: 2026-02-23 08:18:23.163975722 +0000 UTC m=+0.226515272 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:18:23 localhost podman[80450]: 2026-02-23 08:18:23.343007666 +0000 UTC m=+0.405547196 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.5, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, batch=17.1_20260112.1, distribution-scope=public, release=1766032510, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack 
Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true) Feb 23 03:18:23 localhost podman[80439]: 2026-02-23 08:18:23.34969878 +0000 UTC m=+0.417895022 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, 
com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, batch=17.1_20260112.1, vcs-type=git, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:18:23 localhost systemd[1]: 
8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully.
Feb 23 03:18:23 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully.
Feb 23 03:18:24 localhost systemd[1]: tmp-crun.FqfN7C.mount: Deactivated successfully.
Feb 23 03:18:26 localhost sshd[80598]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 03:18:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.
Feb 23 03:18:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.
Feb 23 03:18:26 localhost systemd[1]: tmp-crun.FoNftF.mount: Deactivated successfully.
Feb 23 03:18:26 localhost podman[80601]: 2026-02-23 08:18:26.336974308 +0000 UTC m=+0.089244599 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:18:26 localhost systemd[1]: tmp-crun.GFHwXa.mount: Deactivated successfully. 
Feb 23 03:18:26 localhost podman[80600]: 2026-02-23 08:18:26.372604744 +0000 UTC m=+0.128109624 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ovn-controller, version=17.1.13, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller) Feb 23 03:18:26 localhost podman[80601]: 2026-02-23 08:18:26.384686292 +0000 UTC m=+0.136956583 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Feb 23 03:18:26 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:18:26 localhost podman[80600]: 2026-02-23 08:18:26.399743001 +0000 UTC m=+0.155247901 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, version=17.1.13, url=https://www.redhat.com, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, container_name=ovn_controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c)
Feb 23 03:18:26 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully.
Feb 23 03:18:26 localhost sshd[80646]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 03:18:26 localhost systemd-logind[759]: New session 34 of user zuul.
Feb 23 03:18:26 localhost systemd[1]: Started Session 34 of User zuul.
Feb 23 03:18:27 localhost python3[80665]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 23 03:18:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.
Feb 23 03:18:30 localhost podman[80667]: 2026-02-23 08:18:30.01358277 +0000 UTC m=+0.081775152 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, version=17.1.13, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true) Feb 23 03:18:30 localhost podman[80667]: 2026-02-23 08:18:30.177016659 +0000 UTC m=+0.245209041 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5) Feb 23 03:18:30 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:18:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:18:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:18:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. 
Feb 23 03:18:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.
Feb 23 03:18:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.
Feb 23 03:18:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.
Feb 23 03:18:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.
Feb 23 03:18:54 localhost podman[80779]: 2026-02-23 08:18:54.040597864 +0000 UTC m=+0.089688513 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, tcib_managed=true, container_name=nova_compute, managed_by=tripleo_ansible, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git) Feb 23 03:18:54 localhost podman[80779]: 2026-02-23 08:18:54.06704778 +0000 UTC m=+0.116138449 
container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, release=1766032510, container_name=nova_compute, config_id=tripleo_step5, version=17.1.13) Feb 23 03:18:54 localhost systemd[1]: tmp-crun.g4V6Kx.mount: Deactivated successfully. 
Feb 23 03:18:54 localhost podman[80773]: 2026-02-23 08:18:54.106347637 +0000 UTC m=+0.174046144 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, container_name=collectd, io.buildah.version=1.41.5, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:18:54 localhost podman[80773]: 2026-02-23 08:18:54.114790564 +0000 UTC m=+0.182489041 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step3, version=17.1.13, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, container_name=collectd, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-type=git) Feb 23 03:18:54 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:18:54 localhost podman[80794]: 2026-02-23 08:18:54.1547161 +0000 UTC m=+0.189761681 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:18:54 localhost podman[80774]: 2026-02-23 08:18:54.193331927 +0000 UTC m=+0.258803565 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, release=1766032510, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Feb 23 03:18:54 localhost podman[80794]: 2026-02-23 08:18:54.205802267 +0000 UTC m=+0.240847868 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.openshift.expose-services=, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:18:54 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:18:54 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:18:54 localhost podman[80787]: 2026-02-23 08:18:54.253582043 +0000 UTC m=+0.303893810 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, batch=17.1_20260112.1, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:18:54 localhost podman[80793]: 2026-02-23 08:18:54.294622513 +0000 UTC m=+0.341576398 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, build-date=2026-01-12T23:07:47Z, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.expose-services=, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Feb 23 03:18:54 localhost podman[80787]: 2026-02-23 08:18:54.301958616 +0000 UTC m=+0.352270333 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, release=1766032510, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, 
io.buildah.version=1.41.5, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team) Feb 23 03:18:54 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:18:54 localhost podman[80793]: 2026-02-23 08:18:54.319719677 +0000 UTC m=+0.366673552 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, version=17.1.13, io.buildah.version=1.41.5, batch=17.1_20260112.1, url=https://www.redhat.com, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 23 03:18:54 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:18:54 localhost podman[80775]: 2026-02-23 08:18:54.394088483 +0000 UTC m=+0.455494698 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, container_name=iscsid, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:18:54 localhost podman[80775]: 2026-02-23 08:18:54.404764129 +0000 UTC m=+0.466170344 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, build-date=2026-01-12T22:34:43Z, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, managed_by=tripleo_ansible, container_name=iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team) Feb 23 03:18:54 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:18:54 localhost podman[80774]: 2026-02-23 08:18:54.54923726 +0000 UTC m=+0.614708928 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, vcs-type=git, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, distribution-scope=public, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:18:54 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:18:55 localhost systemd[1]: tmp-crun.oIgfKc.mount: Deactivated successfully. Feb 23 03:18:56 localhost python3[80944]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 23 03:18:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:18:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:18:57 localhost podman[80947]: 2026-02-23 08:18:57.007883974 +0000 UTC m=+0.080053010 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.5, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent) Feb 23 03:18:57 localhost podman[80946]: 2026-02-23 08:18:57.056949569 +0000 UTC m=+0.130989771 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack 
Platform 17.1 ovn-controller, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, batch=17.1_20260112.1) Feb 23 03:18:57 localhost podman[80947]: 2026-02-23 08:18:57.071837543 +0000 UTC m=+0.144006579 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20260112.1, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13) Feb 23 03:18:57 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:18:57 localhost podman[80946]: 2026-02-23 08:18:57.112889383 +0000 UTC m=+0.186929525 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, container_name=ovn_controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller) Feb 23 03:18:57 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:19:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 03:19:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 4695 writes, 21K keys, 4695 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4695 writes, 490 syncs, 9.58 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 03:19:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:19:00 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 23 03:19:00 localhost systemd[1]: Starting man-db-cache-update.service... Feb 23 03:19:00 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 23 03:19:00 localhost systemd[1]: tmp-crun.zlvt3P.mount: Deactivated successfully. Feb 23 03:19:00 localhost podman[80998]: 2026-02-23 08:19:00.614367719 +0000 UTC m=+0.111619252 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:19:00 localhost podman[80998]: 2026-02-23 08:19:00.825397959 +0000 UTC m=+0.322649492 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, batch=17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true) Feb 23 03:19:00 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:19:00 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 23 03:19:00 localhost systemd[1]: Finished man-db-cache-update.service. 
Feb 23 03:19:00 localhost systemd[1]: run-rf0d8ab495752476dba1ce93aabdf1c77.service: Deactivated successfully. Feb 23 03:19:00 localhost systemd[1]: run-rdc40044c836c4569afee22ab6f044567.service: Deactivated successfully. Feb 23 03:19:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 03:19:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 4923 writes, 21K keys, 4923 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4923 writes, 558 syncs, 8.82 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 03:19:13 localhost sshd[81170]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:19:15 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:19:15 localhost recover_tripleo_nova_virtqemud[81173]: 62457 Feb 23 03:19:15 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:19:15 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:19:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:19:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:19:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. 
Feb 23 03:19:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:19:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:19:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:19:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:19:25 localhost podman[81220]: 2026-02-23 08:19:25.036788472 +0000 UTC m=+0.099974688 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, io.buildah.version=1.41.5, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1766032510, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Feb 23 03:19:25 localhost systemd[1]: tmp-crun.zbJME6.mount: Deactivated successfully. 
Feb 23 03:19:25 localhost podman[81244]: 2026-02-23 08:19:25.101904425 +0000 UTC m=+0.148683191 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1766032510, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 23 03:19:25 localhost podman[81244]: 2026-02-23 08:19:25.13488435 +0000 UTC m=+0.181663116 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, release=1766032510, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z) Feb 23 03:19:25 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:19:25 localhost podman[81250]: 2026-02-23 08:19:25.183586784 +0000 UTC m=+0.228274326 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, architecture=x86_64, vendor=Red Hat, Inc., 
summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:19:25 localhost podman[81222]: 2026-02-23 08:19:25.140017747 +0000 UTC m=+0.197786837 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, container_name=nova_compute, version=17.1.13, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
vendor=Red Hat, Inc.) Feb 23 03:19:25 localhost podman[81250]: 2026-02-23 08:19:25.211891367 +0000 UTC m=+0.256578909 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, url=https://www.redhat.com, release=1766032510, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, build-date=2026-01-12T23:07:30Z, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:19:25 localhost podman[81222]: 2026-02-23 08:19:25.226810071 +0000 UTC m=+0.284579151 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, architecture=x86_64, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:19:25 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:19:25 localhost podman[81219]: 2026-02-23 08:19:25.245213742 +0000 UTC m=+0.308326864 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, vcs-type=git, com.redhat.component=openstack-collectd-container, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, release=1766032510, io.openshift.expose-services=, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64) Feb 23 03:19:25 localhost podman[81219]: 2026-02-23 08:19:25.257898208 +0000 UTC m=+0.321011360 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, config_id=tripleo_step3, io.buildah.version=1.41.5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 23 03:19:25 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: 
Deactivated successfully. Feb 23 03:19:25 localhost podman[81221]: 2026-02-23 08:19:25.302415824 +0000 UTC m=+0.361175324 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, batch=17.1_20260112.1) Feb 23 03:19:25 localhost podman[81221]: 2026-02-23 08:19:25.313714258 +0000 UTC m=+0.372473778 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z) Feb 23 03:19:25 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:19:25 localhost podman[81220]: 2026-02-23 08:19:25.398755279 +0000 UTC m=+0.461941495 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.13, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2026-01-12T23:32:04Z, 
name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, container_name=nova_migration_target) Feb 23 03:19:25 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:19:25 localhost podman[81233]: 2026-02-23 08:19:25.40009619 +0000 UTC m=+0.453798716 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2026-01-12T22:10:15Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, tcib_managed=true, url=https://www.redhat.com) Feb 23 03:19:25 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:19:25 localhost podman[81233]: 2026-02-23 08:19:25.487849793 +0000 UTC m=+0.541552319 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, architecture=x86_64, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, batch=17.1_20260112.1) Feb 23 03:19:25 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:19:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:19:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:19:28 localhost podman[81379]: 2026-02-23 08:19:28.015118798 +0000 UTC m=+0.085172785 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, 
vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4) Feb 23 03:19:28 localhost podman[81380]: 2026-02-23 08:19:28.061086139 +0000 UTC m=+0.128568508 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.13, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:19:28 localhost podman[81379]: 2026-02-23 08:19:28.068898888 +0000 UTC m=+0.138952915 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, 
managed_by=tripleo_ansible, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:19:28 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:19:28 localhost podman[81380]: 2026-02-23 08:19:28.094867748 +0000 UTC m=+0.162350157 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, batch=17.1_20260112.1, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:19:28 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:19:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:19:30 localhost podman[81428]: 2026-02-23 08:19:30.998720026 +0000 UTC m=+0.076440749 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, vcs-type=git, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 23 03:19:31 localhost podman[81428]: 2026-02-23 08:19:31.195870033 +0000 UTC m=+0.273590826 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:19:31 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 03:19:37 localhost python3[81472]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 03:19:41 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 23 03:19:41 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 23 03:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:19:56 localhost podman[81793]: 2026-02-23 08:19:56.02123213 +0000 UTC m=+0.088444916 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:19:56 localhost systemd[1]: tmp-crun.TClNfk.mount: Deactivated successfully. Feb 23 03:19:56 localhost podman[81813]: 2026-02-23 08:19:56.08589811 +0000 UTC m=+0.142862214 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:47Z, version=17.1.13, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:19:56 localhost podman[81813]: 2026-02-23 08:19:56.136485281 +0000 UTC m=+0.193449375 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:19:56 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:19:56 localhost podman[81819]: 2026-02-23 08:19:56.186456154 +0000 UTC m=+0.239938422 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 23 03:19:56 localhost podman[81794]: 2026-02-23 08:19:56.239627362 +0000 UTC m=+0.306418826 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, container_name=iscsid, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:19:56 localhost podman[81795]: 2026-02-23 08:19:56.137994196 +0000 UTC m=+0.200165768 container health_status 
87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
config_id=tripleo_step5, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, container_name=nova_compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13) Feb 23 03:19:56 localhost podman[81792]: 2026-02-23 08:19:56.296224777 +0000 UTC m=+0.362562557 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public, batch=17.1_20260112.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd) Feb 23 03:19:56 localhost podman[81792]: 2026-02-23 08:19:56.308764189 +0000 UTC m=+0.375101919 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, release=1766032510, url=https://www.redhat.com, container_name=collectd, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.5, batch=17.1_20260112.1) Feb 23 03:19:56 localhost podman[81819]: 2026-02-23 08:19:56.317273638 +0000 UTC m=+0.370755906 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc.) Feb 23 03:19:56 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:19:56 localhost podman[81794]: 2026-02-23 08:19:56.325592652 +0000 UTC m=+0.392384166 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, url=https://www.redhat.com, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, com.redhat.component=openstack-iscsid-container, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:19:56 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:19:56 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:19:56 localhost podman[81795]: 2026-02-23 08:19:56.371414058 +0000 UTC m=+0.433585630 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:19:56 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:19:56 localhost podman[81796]: 2026-02-23 08:19:56.389710445 +0000 UTC m=+0.448838995 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, tcib_managed=true) Feb 23 03:19:56 localhost podman[81793]: 2026-02-23 08:19:56.394586234 +0000 UTC m=+0.461799070 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1766032510, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20260112.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible) Feb 23 03:19:56 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. 
Feb 23 03:19:56 localhost podman[81796]: 2026-02-23 08:19:56.425841506 +0000 UTC m=+0.484970066 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:19:56 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:19:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:19:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:19:59 localhost podman[81946]: 2026-02-23 08:19:59.002045542 +0000 UTC m=+0.075800970 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:19:59 localhost systemd[1]: tmp-crun.YH0hQN.mount: Deactivated successfully. Feb 23 03:19:59 localhost podman[81945]: 2026-02-23 08:19:59.058732269 +0000 UTC m=+0.133721995 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, io.buildah.version=1.41.5) Feb 23 03:19:59 localhost podman[81945]: 2026-02-23 08:19:59.084863015 +0000 UTC m=+0.159852751 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, vcs-type=git, release=1766032510, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, version=17.1.13, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:19:59 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:19:59 localhost podman[81946]: 2026-02-23 08:19:59.138848089 +0000 UTC m=+0.212603477 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.5, config_id=tripleo_step4, vcs-type=git, version=17.1.13, batch=17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, distribution-scope=public, 
org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:19:59 localhost systemd[1]: 
71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:20:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:20:02 localhost podman[81993]: 2026-02-23 08:20:02.004566805 +0000 UTC m=+0.079785751 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:20:02 localhost podman[81993]: 2026-02-23 08:20:02.202460285 +0000 UTC m=+0.277679211 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, io.buildah.version=1.41.5, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container) Feb 23 03:20:02 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:20:03 localhost sshd[82022]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. 
Feb 23 03:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:20:27 localhost systemd[1]: tmp-crun.0ClQtq.mount: Deactivated successfully. Feb 23 03:20:27 localhost podman[82094]: 2026-02-23 08:20:27.05607986 +0000 UTC m=+0.102424732 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 23 03:20:27 localhost podman[82069]: 2026-02-23 08:20:27.038467994 +0000 UTC m=+0.110636412 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, 
name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, batch=17.1_20260112.1, config_id=tripleo_step3, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=) Feb 23 03:20:27 localhost podman[82077]: 2026-02-23 08:20:27.091713046 +0000 UTC m=+0.153577950 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, tcib_managed=true, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 23 03:20:27 localhost 
podman[82094]: 2026-02-23 08:20:27.09974059 +0000 UTC m=+0.146085442 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, architecture=x86_64, io.openshift.expose-services=, release=1766032510, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:20:27 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:20:27 localhost podman[82088]: 2026-02-23 08:20:27.139479221 +0000 UTC m=+0.197651143 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1766032510, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, config_id=tripleo_step4, url=https://www.redhat.com, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:20:27 localhost podman[82101]: 2026-02-23 08:20:27.100353519 +0000 UTC m=+0.144129833 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, maintainer=OpenStack 
TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., version=17.1.13, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, 
container_name=ceilometer_agent_ipmi, architecture=x86_64, io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible) Feb 23 03:20:27 localhost podman[82088]: 2026-02-23 08:20:27.171729633 +0000 UTC m=+0.229901565 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, version=17.1.13, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc.) Feb 23 03:20:27 localhost podman[82101]: 2026-02-23 08:20:27.183676617 +0000 UTC m=+0.227452911 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:20:27 localhost podman[82071]: 2026-02-23 08:20:27.189220416 +0000 UTC m=+0.255113764 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., version=17.1.13, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 
17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1) Feb 23 03:20:27 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:20:27 localhost podman[82071]: 2026-02-23 08:20:27.194572579 +0000 UTC m=+0.260465917 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.5, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 23 03:20:27 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:20:27 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:20:27 localhost podman[82077]: 2026-02-23 08:20:27.240259951 +0000 UTC m=+0.302124835 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, container_name=nova_compute, io.openshift.expose-services=, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.5, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_id=tripleo_step5) Feb 23 03:20:27 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:20:27 localhost podman[82069]: 2026-02-23 08:20:27.273218055 +0000 UTC m=+0.345386443 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20260112.1, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:20:27 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:20:27 localhost podman[82070]: 2026-02-23 08:20:27.318504525 +0000 UTC m=+0.385877377 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.13, url=https://www.redhat.com, container_name=nova_migration_target, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
name=rhosp-rhel9/openstack-nova-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z) Feb 23 03:20:27 localhost podman[82070]: 2026-02-23 08:20:27.635897755 +0000 UTC m=+0.703270557 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, 
release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, distribution-scope=public, io.openshift.expose-services=) Feb 23 03:20:27 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:20:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:20:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:20:30 localhost podman[82226]: 2026-02-23 08:20:30.013221741 +0000 UTC m=+0.090019214 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 23 03:20:30 localhost systemd[1]: tmp-crun.hZ9sHq.mount: Deactivated successfully. Feb 23 03:20:30 localhost podman[82227]: 2026-02-23 08:20:30.067148535 +0000 UTC m=+0.140287715 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, build-date=2026-01-12T22:56:19Z, architecture=x86_64, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:20:30 localhost podman[82226]: 2026-02-23 08:20:30.120415567 +0000 UTC m=+0.197213080 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, distribution-scope=public, 
container_name=ovn_controller, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., version=17.1.13, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container) Feb 23 03:20:30 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:20:30 localhost podman[82227]: 2026-02-23 08:20:30.144012786 +0000 UTC m=+0.217151976 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, release=1766032510, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:20:30 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:20:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:20:32 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:20:32 localhost recover_tripleo_nova_virtqemud[82280]: 62457 Feb 23 03:20:32 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:20:32 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:20:32 localhost systemd[1]: tmp-crun.KiA64c.mount: Deactivated successfully. 
Feb 23 03:20:33 localhost podman[82273]: 2026-02-23 08:20:33.003796492 +0000 UTC m=+0.079888215 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, container_name=metrics_qdr, io.openshift.expose-services=, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step1) Feb 23 03:20:33 localhost podman[82273]: 2026-02-23 08:20:33.200064791 +0000 UTC m=+0.276156494 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, tcib_managed=true, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, release=1766032510, distribution-scope=public, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:20:33 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 03:20:33 localhost python3[82321]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 03:20:37 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 23 03:20:37 localhost rhsm-service[6643]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 23 03:20:54 localhost sshd[82585]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:20:58 localhost systemd[1]: tmp-crun.XB007C.mount: Deactivated successfully. 
Feb 23 03:20:58 localhost podman[82591]: 2026-02-23 08:20:58.023525642 +0000 UTC m=+0.086175578 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, distribution-scope=public, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, version=17.1.13, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container) Feb 23 03:20:58 localhost podman[82591]: 2026-02-23 08:20:58.034535529 +0000 UTC m=+0.097185525 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.openshift.expose-services=) Feb 23 03:20:58 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:20:58 localhost podman[82619]: 2026-02-23 08:20:58.082914969 +0000 UTC m=+0.130099732 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:20:58 localhost podman[82607]: 2026-02-23 08:20:58.098047823 +0000 UTC m=+0.147498755 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, distribution-scope=public, vcs-type=git, tcib_managed=true, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:20:58 localhost podman[82589]: 2026-02-23 08:20:58.119306183 +0000 UTC m=+0.187704805 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3) Feb 23 03:20:58 localhost podman[82589]: 2026-02-23 08:20:58.130921249 +0000 UTC m=+0.199319851 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=iscsid, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.expose-services=, release=1766032510, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z) Feb 23 03:20:58 localhost podman[82590]: 2026-02-23 08:20:58.036110618 +0000 UTC m=+0.100620551 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 
17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vendor=Red Hat, Inc.) Feb 23 03:20:58 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:20:58 localhost podman[82607]: 2026-02-23 08:20:58.147088873 +0000 UTC m=+0.196539785 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, vcs-type=git, release=1766032510, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, 
io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, distribution-scope=public, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:20:58 localhost podman[82590]: 2026-02-23 08:20:58.170719647 +0000 UTC m=+0.235229570 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.buildah.version=1.41.5, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 
nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:20:58 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:20:58 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:20:58 localhost podman[82587]: 2026-02-23 08:20:58.215371203 +0000 UTC m=+0.285605021 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, build-date=2026-01-12T22:10:15Z, vcs-type=git, 
konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public) Feb 23 03:20:58 localhost podman[82587]: 2026-02-23 08:20:58.228620448 +0000 UTC m=+0.298854266 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, version=17.1.13, vendor=Red Hat, Inc., release=1766032510, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd) Feb 23 03:20:58 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:20:58 localhost podman[82588]: 2026-02-23 08:20:58.274712299 +0000 UTC m=+0.344697039 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, architecture=x86_64, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:20:58 localhost podman[82619]: 2026-02-23 08:20:58.300767516 +0000 UTC m=+0.347952329 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc.) 
Feb 23 03:20:58 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:20:58 localhost podman[82588]: 2026-02-23 08:20:58.637123659 +0000 UTC m=+0.707108419 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.expose-services=, tcib_managed=true, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, container_name=nova_migration_target) Feb 23 03:20:58 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:21:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:21:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:21:01 localhost systemd[1]: tmp-crun.63PKLV.mount: Deactivated successfully. 
Feb 23 03:21:01 localhost podman[82743]: 2026-02-23 08:21:01.010959012 +0000 UTC m=+0.086359464 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, version=17.1.13, vcs-type=git, batch=17.1_20260112.1, 
url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.) Feb 23 03:21:01 localhost podman[82744]: 2026-02-23 08:21:01.079622203 +0000 UTC m=+0.151029673 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.13, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, release=1766032510, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:21:01 localhost podman[82743]: 2026-02-23 08:21:01.092102765 +0000 UTC m=+0.167503287 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, container_name=ovn_controller, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5) Feb 23 03:21:01 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:21:01 localhost podman[82744]: 2026-02-23 08:21:01.162216481 +0000 UTC m=+0.233623951 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:21:01 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:21:01 localhost python3[82801]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Feb 23 03:21:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:21:04 localhost podman[82802]: 2026-02-23 08:21:04.021706973 +0000 UTC m=+0.090818609 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, version=17.1.13, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 23 03:21:04 localhost podman[82802]: 2026-02-23 08:21:04.217148864 +0000 UTC m=+0.286260520 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:21:04 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. 
Feb 23 03:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:21:29 localhost podman[82888]: 2026-02-23 08:21:29.001907244 +0000 UTC m=+0.066379002 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2026-01-12T23:07:47Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:21:29 localhost podman[82886]: 2026-02-23 08:21:29.008076413 +0000 UTC m=+0.071109587 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, config_id=tripleo_step4, 
name=rhosp-rhel9/openstack-cron, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, architecture=x86_64, maintainer=OpenStack TripleO Team) Feb 23 03:21:29 localhost 
podman[82886]: 2026-02-23 08:21:29.013007254 +0000 UTC m=+0.076040428 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, tcib_managed=true, io.buildah.version=1.41.5, vcs-type=git, 
cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, config_id=tripleo_step4, vendor=Red Hat, Inc.) Feb 23 03:21:29 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:21:29 localhost podman[82903]: 2026-02-23 08:21:29.067979956 +0000 UTC m=+0.126462051 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.5, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 23 03:21:29 localhost podman[82884]: 2026-02-23 08:21:29.090171955 +0000 UTC m=+0.162594076 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, 
io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute) Feb 23 03:21:29 localhost podman[82878]: 2026-02-23 08:21:29.117612935 +0000 UTC m=+0.189291424 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, release=1766032510, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) 
Feb 23 03:21:29 localhost podman[82884]: 2026-02-23 08:21:29.13673083 +0000 UTC m=+0.209152931 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, distribution-scope=public, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true) Feb 23 03:21:29 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:21:29 localhost podman[82879]: 2026-02-23 08:21:29.146837559 +0000 UTC m=+0.219213759 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.13, tcib_managed=true, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, container_name=iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, distribution-scope=public, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, release=1766032510) Feb 23 03:21:29 localhost podman[82877]: 2026-02-23 08:21:29.049538881 +0000 UTC m=+0.127525242 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-type=git) Feb 23 03:21:29 localhost podman[82879]: 2026-02-23 08:21:29.153687899 +0000 UTC m=+0.226064099 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, 
managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, 
vcs-type=git, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:21:29 localhost podman[82903]: 2026-02-23 08:21:29.173452614 +0000 UTC m=+0.231934699 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, 
io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:21:29 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:21:29 localhost podman[82877]: 2026-02-23 08:21:29.182864762 +0000 UTC m=+0.260851143 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:21:29 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:21:29 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:21:29 localhost podman[82888]: 2026-02-23 08:21:29.224710842 +0000 UTC m=+0.289182570 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.5, architecture=x86_64, vcs-type=git, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com) Feb 23 03:21:29 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:21:29 localhost podman[82878]: 2026-02-23 08:21:29.471743862 +0000 UTC m=+0.543422381 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=) Feb 23 03:21:29 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:21:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:21:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:21:31 localhost systemd[1]: tmp-crun.XtMJTT.mount: Deactivated successfully. Feb 23 03:21:31 localhost podman[83036]: 2026-02-23 08:21:31.996585964 +0000 UTC m=+0.070765276 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.buildah.version=1.41.5, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 23 03:21:32 localhost podman[83036]: 2026-02-23 08:21:32.03565827 +0000 UTC m=+0.109837562 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, io.buildah.version=1.41.5, version=17.1.13, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20260112.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:21:32 localhost podman[83035]: 2026-02-23 08:21:32.042104098 +0000 UTC m=+0.119236770 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
vcs-type=git, distribution-scope=public, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64) Feb 23 03:21:32 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:21:32 localhost podman[83035]: 2026-02-23 08:21:32.088909 +0000 UTC m=+0.166041702 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, release=1766032510, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com) Feb 23 03:21:32 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:21:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:21:35 localhost systemd[1]: tmp-crun.8b7ZCs.mount: Deactivated successfully. 
Feb 23 03:21:35 localhost podman[83082]: 2026-02-23 08:21:35.011684181 +0000 UTC m=+0.083457165 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, tcib_managed=true, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 23 03:21:35 localhost podman[83082]: 2026-02-23 08:21:35.233982833 +0000 UTC m=+0.305755847 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, config_id=tripleo_step1, distribution-scope=public, version=17.1.13) Feb 23 03:21:35 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:21:41 localhost sshd[83111]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:21:43 localhost sshd[83113]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:21:47 localhost systemd[1]: tmp-crun.rOUyZK.mount: Deactivated successfully. 
Feb 23 03:21:47 localhost podman[83216]: 2026-02-23 08:21:47.715652775 +0000 UTC m=+0.095070590 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_CLEAN=True, RELEASE=main, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, architecture=x86_64, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, com.redhat.component=rhceph-container) Feb 23 03:21:47 localhost podman[83216]: 2026-02-23 08:21:47.817283905 +0000 UTC m=+0.196701710 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, 
vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, RELEASE=main) Feb 23 03:21:55 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:21:55 localhost recover_tripleo_nova_virtqemud[83360]: 62457 Feb 23 03:21:55 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:21:55 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. 
Feb 23 03:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:22:00 localhost systemd[1]: tmp-crun.AYNn3B.mount: Deactivated successfully. Feb 23 03:22:00 localhost podman[83369]: 2026-02-23 08:22:00.09050114 +0000 UTC m=+0.141746109 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, distribution-scope=public, architecture=x86_64, version=17.1.13, container_name=logrotate_crond, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, release=1766032510) Feb 23 03:22:00 localhost podman[83363]: 2026-02-23 08:22:00.120504258 +0000 UTC m=+0.182306070 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, version=17.1.13, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, com.redhat.component=openstack-iscsid-container) Feb 23 03:22:00 localhost podman[83363]: 2026-02-23 08:22:00.12647075 +0000 UTC m=+0.188272562 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, 
io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, architecture=x86_64, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, release=1766032510, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, url=https://www.redhat.com) Feb 23 03:22:00 localhost podman[83361]: 2026-02-23 08:22:00.080970728 +0000 UTC m=+0.144849954 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, architecture=x86_64, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, container_name=collectd, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Feb 23 03:22:00 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:22:00 localhost podman[83387]: 2026-02-23 08:22:00.101667601 +0000 UTC m=+0.142674267 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.buildah.version=1.41.5, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, distribution-scope=public, version=17.1.13, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:22:00 localhost podman[83369]: 2026-02-23 08:22:00.176835562 +0000 UTC m=+0.228080501 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, io.openshift.expose-services=, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:22:00 localhost podman[83362]: 2026-02-23 08:22:00.18495767 +0000 UTC m=+0.248822135 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, tcib_managed=true, release=1766032510, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:22:00 localhost 
systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:22:00 localhost podman[83361]: 2026-02-23 08:22:00.210089099 +0000 UTC m=+0.273968295 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, vcs-type=git, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, architecture=x86_64, release=1766032510) Feb 23 03:22:00 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:22:00 localhost podman[83364]: 2026-02-23 08:22:00.027498051 +0000 UTC m=+0.088683234 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, container_name=nova_compute, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:22:00 localhost podman[83364]: 2026-02-23 08:22:00.2607525 +0000 UTC m=+0.321937653 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, 
description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc.) Feb 23 03:22:00 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:22:00 localhost podman[83387]: 2026-02-23 08:22:00.281217366 +0000 UTC m=+0.322224012 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.5, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi) Feb 23 03:22:00 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:22:00 localhost podman[83386]: 2026-02-23 08:22:00.058102588 +0000 UTC m=+0.098208336 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5) Feb 23 03:22:00 localhost podman[83386]: 2026-02-23 08:22:00.339765757 +0000 UTC m=+0.379871475 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com) Feb 23 03:22:00 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:22:00 localhost podman[83362]: 2026-02-23 08:22:00.508851161 +0000 UTC m=+0.572715666 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.buildah.version=1.41.5, release=1766032510, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:22:00 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:22:02 localhost systemd[1]: session-34.scope: Deactivated successfully. Feb 23 03:22:02 localhost systemd[1]: session-34.scope: Consumed 18.939s CPU time. Feb 23 03:22:02 localhost systemd-logind[759]: Session 34 logged out. Waiting for processes to exit. Feb 23 03:22:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:22:02 localhost systemd-logind[759]: Removed session 34. 
Feb 23 03:22:02 localhost podman[83517]: 2026-02-23 08:22:02.178086212 +0000 UTC m=+0.071018004 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, build-date=2026-01-12T22:56:19Z, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 23 03:22:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:22:02 localhost podman[83517]: 2026-02-23 08:22:02.226810773 +0000 UTC m=+0.119742545 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, architecture=x86_64, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 23 03:22:02 localhost podman[83538]: 2026-02-23 08:22:02.252471118 +0000 UTC m=+0.047321539 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, batch=17.1_20260112.1, 
architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller) Feb 23 03:22:02 localhost podman[83538]: 2026-02-23 08:22:02.274816392 +0000 UTC m=+0.069666843 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc.) Feb 23 03:22:02 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:22:02 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:22:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:22:05 localhost podman[83566]: 2026-02-23 08:22:05.989512515 +0000 UTC m=+0.068049254 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, version=17.1.13, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc.) Feb 23 03:22:06 localhost podman[83566]: 2026-02-23 08:22:06.19591479 +0000 UTC m=+0.274451539 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, release=1766032510, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.5, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, architecture=x86_64, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container) Feb 23 03:22:06 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. 
Feb 23 03:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:22:31 localhost podman[83655]: 2026-02-23 08:22:31.054730325 +0000 UTC m=+0.107831230 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.41.5, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:22:31 localhost podman[83643]: 2026-02-23 08:22:31.033800065 +0000 UTC m=+0.096163804 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, 
distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, release=1766032510, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 23 03:22:31 localhost podman[83655]: 2026-02-23 08:22:31.138936332 +0000 UTC m=+0.192037277 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team) Feb 23 03:22:31 localhost podman[83641]: 2026-02-23 08:22:31.147793793 +0000 UTC m=+0.215044551 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, build-date=2026-01-12T23:32:04Z, tcib_managed=true, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13) Feb 23 03:22:31 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:22:31 localhost podman[83642]: 2026-02-23 08:22:31.189977824 +0000 UTC m=+0.253684114 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.5, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_id=tripleo_step3) Feb 23 03:22:31 localhost podman[83649]: 2026-02-23 08:22:31.200699722 +0000 UTC m=+0.256344235 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:22:31 localhost podman[83642]: 2026-02-23 08:22:31.203394705 +0000 UTC m=+0.267100975 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, com.redhat.component=openstack-iscsid-container, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack 
Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:22:31 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:22:31 localhost podman[83643]: 2026-02-23 08:22:31.219736465 +0000 UTC m=+0.282100204 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, release=1766032510, distribution-scope=public, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Feb 23 03:22:31 localhost podman[83640]: 2026-02-23 08:22:31.078575516 +0000 UTC m=+0.148627079 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, 
batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, release=1766032510, version=17.1.13, vcs-type=git, container_name=collectd, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:22:31 localhost podman[83649]: 2026-02-23 08:22:31.239734767 +0000 UTC m=+0.295379280 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5) Feb 23 03:22:31 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:22:31 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:22:31 localhost podman[83665]: 2026-02-23 08:22:31.106067226 +0000 UTC m=+0.153543749 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
io.buildah.version=1.41.5, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc.) Feb 23 03:22:31 localhost podman[83640]: 2026-02-23 08:22:31.260854273 +0000 UTC m=+0.330905876 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:22:31 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:22:31 localhost podman[83665]: 2026-02-23 08:22:31.342817641 +0000 UTC m=+0.390294174 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, distribution-scope=public, vcs-type=git) Feb 23 03:22:31 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:22:31 localhost podman[83641]: 2026-02-23 08:22:31.475814691 +0000 UTC m=+0.543065489 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, architecture=x86_64, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.13, container_name=nova_migration_target, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:22:31 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:22:31 localhost sshd[83798]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:22:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:22:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:22:33 localhost podman[83800]: 2026-02-23 08:22:33.01134059 +0000 UTC m=+0.085512568 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, architecture=x86_64, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5) Feb 23 03:22:33 localhost podman[83801]: 2026-02-23 08:22:33.061510554 +0000 UTC m=+0.132465814 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 23 03:22:33 localhost podman[83801]: 2026-02-23 08:22:33.101380405 +0000 UTC m=+0.172335635 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, 
architecture=x86_64, io.openshift.expose-services=, release=1766032510, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, distribution-scope=public) Feb 23 03:22:33 localhost podman[83800]: 2026-02-23 08:22:33.11428747 +0000 UTC m=+0.188459448 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1766032510, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, container_name=ovn_controller, io.buildah.version=1.41.5, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc.) Feb 23 03:22:33 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:22:33 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:22:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:22:37 localhost podman[83850]: 2026-02-23 08:22:36.998527422 +0000 UTC m=+0.074726438 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, 
build-date=2026-01-12T22:10:14Z, architecture=x86_64, release=1766032510, vcs-type=git, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.13, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step1) Feb 23 03:22:37 localhost podman[83850]: 2026-02-23 08:22:37.215938705 +0000 UTC m=+0.292137711 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_id=tripleo_step1, release=1766032510, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 23 03:22:37 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. 
Feb 23 03:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:23:02 localhost podman[83957]: 2026-02-23 08:23:02.039755853 +0000 UTC m=+0.104842670 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, container_name=collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, version=17.1.13, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Feb 23 03:23:02 localhost podman[83957]: 2026-02-23 08:23:02.074375302 +0000 UTC m=+0.139462159 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, architecture=x86_64, container_name=collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5, url=https://www.redhat.com, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd) Feb 23 03:23:02 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:23:02 localhost podman[83965]: 2026-02-23 08:23:02.093008813 +0000 UTC m=+0.142483292 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.13, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 
'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., release=1766032510, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team) Feb 23 03:23:02 localhost podman[83965]: 2026-02-23 08:23:02.101690348 +0000 UTC m=+0.151164827 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, release=1766032510, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., version=17.1.13, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, 
cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron) Feb 23 03:23:02 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:23:02 localhost podman[83977]: 2026-02-23 08:23:02.074922708 +0000 UTC m=+0.120461367 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 23 03:23:02 localhost podman[83964]: 2026-02-23 08:23:02.139768993 +0000 UTC m=+0.190236503 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, architecture=x86_64, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, config_id=tripleo_step5, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, 
io.openshift.expose-services=) Feb 23 03:23:02 localhost podman[83959]: 2026-02-23 08:23:02.183746959 +0000 UTC m=+0.240663616 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true) Feb 23 03:23:02 localhost podman[83964]: 2026-02-23 08:23:02.191707583 +0000 UTC m=+0.242175093 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, url=https://www.redhat.com, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, tcib_managed=true, vcs-type=git, 
batch=17.1_20260112.1, container_name=nova_compute) Feb 23 03:23:02 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:23:02 localhost podman[83959]: 2026-02-23 08:23:02.218834663 +0000 UTC m=+0.275751320 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, build-date=2026-01-12T22:34:43Z, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, release=1766032510, batch=17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:23:02 localhost podman[83976]: 2026-02-23 08:23:02.239054962 +0000 UTC m=+0.285106696 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) 
Feb 23 03:23:02 localhost podman[83977]: 2026-02-23 08:23:02.313076826 +0000 UTC m=+0.358615545 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
distribution-scope=public, version=17.1.13, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:23:02 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:23:02 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:23:02 localhost podman[83976]: 2026-02-23 08:23:02.366860952 +0000 UTC m=+0.412912666 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z) Feb 23 03:23:02 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:23:02 localhost podman[83958]: 2026-02-23 08:23:02.324876557 +0000 UTC m=+0.382893187 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, version=17.1.13, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, architecture=x86_64, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, release=1766032510, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc.) Feb 23 03:23:02 localhost podman[83958]: 2026-02-23 08:23:02.689891138 +0000 UTC m=+0.747907768 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.buildah.version=1.41.5, vendor=Red Hat, Inc., release=1766032510, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, architecture=x86_64, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, com.redhat.component=openstack-nova-compute-container) Feb 23 03:23:02 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:23:03 localhost systemd[1]: tmp-crun.3CXjCa.mount: Deactivated successfully. Feb 23 03:23:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:23:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:23:04 localhost podman[84118]: 2026-02-23 08:23:04.020395652 +0000 UTC m=+0.088759837 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, tcib_managed=true, vcs-type=git) Feb 23 03:23:04 localhost podman[84117]: 2026-02-23 08:23:04.073386894 +0000 UTC m=+0.143760131 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:23:04 localhost podman[84118]: 2026-02-23 08:23:04.091942371 +0000 UTC m=+0.160306566 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.openshift.expose-services=, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com) Feb 23 03:23:04 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:23:04 localhost podman[84117]: 2026-02-23 08:23:04.147964976 +0000 UTC m=+0.218338233 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, version=17.1.13, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible) Feb 23 03:23:04 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:23:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:23:08 localhost podman[84167]: 2026-02-23 08:23:08.018333683 +0000 UTC m=+0.090547632 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, tcib_managed=true) Feb 23 03:23:08 localhost podman[84167]: 2026-02-23 08:23:08.196934669 +0000 UTC m=+0.269148648 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:23:08 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:23:18 localhost sshd[84196]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:23:21 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. Feb 23 03:23:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. 
Feb 23 03:23:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:23:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:23:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:23:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:23:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:23:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:23:33 localhost systemd[1]: tmp-crun.p5jpg0.mount: Deactivated successfully. Feb 23 03:23:33 localhost podman[84258]: 2026-02-23 08:23:33.407064294 +0000 UTC m=+0.059506013 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64) Feb 23 03:23:33 localhost podman[84245]: 2026-02-23 08:23:33.417242545 +0000 UTC m=+0.075653776 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-type=git, url=https://www.redhat.com, release=1766032510, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:23:33 localhost podman[84258]: 2026-02-23 08:23:33.421815315 +0000 UTC m=+0.074257044 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., batch=17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, tcib_managed=true, io.openshift.expose-services=, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, distribution-scope=public, release=1766032510, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:23:33 localhost podman[84245]: 2026-02-23 08:23:33.423621081 +0000 UTC m=+0.082032342 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, 
tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1766032510, vendor=Red Hat, Inc., vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container) Feb 23 03:23:33 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated 
successfully. Feb 23 03:23:33 localhost podman[84264]: 2026-02-23 08:23:33.446970425 +0000 UTC m=+0.097109593 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, tcib_managed=true, container_name=ceilometer_agent_ipmi, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:23:33 localhost podman[84244]: 2026-02-23 08:23:33.401309888 +0000 UTC m=+0.060986567 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-type=git, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:23:33 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:23:33 localhost podman[84246]: 2026-02-23 08:23:33.508509198 +0000 UTC m=+0.165241437 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 23 03:23:33 localhost podman[84246]: 2026-02-23 08:23:33.521200566 +0000 UTC m=+0.177932815 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, 
com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, version=17.1.13, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, vendor=Red Hat, Inc.) Feb 23 03:23:33 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:23:33 localhost podman[84248]: 2026-02-23 08:23:33.555404154 +0000 UTC m=+0.210531475 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, container_name=logrotate_crond, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, batch=17.1_20260112.1, release=1766032510, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible) Feb 23 03:23:33 localhost podman[84248]: 2026-02-23 08:23:33.584221975 +0000 UTC m=+0.239349286 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, container_name=logrotate_crond, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron) 
Feb 23 03:23:33 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:23:33 localhost podman[84243]: 2026-02-23 08:23:33.542537389 +0000 UTC m=+0.203544360 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, version=17.1.13, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1766032510, managed_by=tripleo_ansible, config_id=tripleo_step3, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=) Feb 23 03:23:33 localhost podman[84243]: 2026-02-23 08:23:33.623683923 +0000 UTC m=+0.284690884 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.13, io.buildah.version=1.41.5, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, batch=17.1_20260112.1, release=1766032510, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:23:33 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated 
successfully. Feb 23 03:23:33 localhost podman[84264]: 2026-02-23 08:23:33.636920178 +0000 UTC m=+0.287059346 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi) Feb 23 03:23:33 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:23:33 localhost podman[84244]: 2026-02-23 08:23:33.738761664 +0000 UTC m=+0.398438333 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, container_name=nova_migration_target, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, version=17.1.13, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Feb 23 03:23:33 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:23:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:23:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:23:34 localhost podman[84398]: 2026-02-23 08:23:34.250084481 +0000 UTC m=+0.076695198 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, container_name=ovn_metadata_agent, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 23 03:23:34 localhost podman[84399]: 2026-02-23 08:23:34.299521123 +0000 UTC m=+0.122138327 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.13, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, batch=17.1_20260112.1, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git) Feb 23 03:23:34 localhost podman[84398]: 2026-02-23 08:23:34.337106404 +0000 UTC m=+0.163717111 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2026-01-12T22:56:19Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:23:34 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:23:34 localhost podman[84399]: 2026-02-23 08:23:34.352730002 +0000 UTC m=+0.175347216 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1766032510, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, vcs-type=git, tcib_managed=true) Feb 23 03:23:34 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:23:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:23:38 localhost podman[84445]: 2026-02-23 08:23:38.99205437 +0000 UTC m=+0.070469237 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, container_name=metrics_qdr, distribution-scope=public, version=17.1.13, config_id=tripleo_step1, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, release=1766032510, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd) Feb 23 03:23:39 localhost podman[84445]: 2026-02-23 08:23:39.184932983 +0000 UTC m=+0.263347880 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, container_name=metrics_qdr, io.buildah.version=1.41.5, url=https://www.redhat.com, release=1766032510, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 23 03:23:39 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:23:55 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:23:55 localhost recover_tripleo_nova_virtqemud[84553]: 62457 Feb 23 03:23:55 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:23:55 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 23 03:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:24:03 localhost podman[84554]: 2026-02-23 08:24:03.99139262 +0000 UTC m=+0.063354889 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, distribution-scope=public, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, container_name=collectd) Feb 23 03:24:04 localhost podman[84554]: 2026-02-23 08:24:04.001737307 +0000 UTC m=+0.073699606 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd) Feb 23 03:24:04 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:24:04 localhost systemd[1]: tmp-crun.S10DYN.mount: Deactivated successfully. 
Feb 23 03:24:04 localhost podman[84558]: 2026-02-23 08:24:04.040242186 +0000 UTC m=+0.102742126 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.component=openstack-cron-container, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond) Feb 23 03:24:04 localhost podman[84583]: 2026-02-23 08:24:04.050625263 +0000 UTC m=+0.111829373 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, vcs-type=git, version=17.1.13, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=) Feb 23 03:24:04 localhost podman[84583]: 2026-02-23 08:24:04.072678678 +0000 UTC m=+0.133882798 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:24:04 localhost systemd[1]: 
ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:24:04 localhost podman[84556]: 2026-02-23 08:24:04.099932062 +0000 UTC m=+0.170699295 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, container_name=iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, config_id=tripleo_step3, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, release=1766032510, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:24:04 localhost podman[84556]: 2026-02-23 08:24:04.104636466 +0000 UTC m=+0.175403699 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_id=tripleo_step3, release=1766032510, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, tcib_managed=true, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5) Feb 23 03:24:04 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:24:04 localhost podman[84558]: 2026-02-23 08:24:04.122815662 +0000 UTC m=+0.185315592 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 23 03:24:04 localhost podman[84557]: 2026-02-23 08:24:04.129152586 +0000 UTC m=+0.198385131 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, tcib_managed=true, vcs-type=git, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container) Feb 23 03:24:04 localhost 
podman[84555]: 2026-02-23 08:24:04.08286044 +0000 UTC m=+0.154822899 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:24:04 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:24:04 localhost podman[84557]: 2026-02-23 08:24:04.14759449 +0000 UTC m=+0.216827065 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1766032510, tcib_managed=true, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, 
io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team) Feb 23 03:24:04 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:24:04 localhost podman[84570]: 2026-02-23 08:24:04.20312734 +0000 UTC m=+0.268386694 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, release=1766032510, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 23 03:24:04 localhost podman[84570]: 2026-02-23 08:24:04.248622332 +0000 UTC m=+0.313881686 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, release=1766032510, build-date=2026-01-12T23:07:47Z) Feb 23 03:24:04 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:24:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:24:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:24:04 localhost podman[84555]: 2026-02-23 08:24:04.412118315 +0000 UTC m=+0.484080774 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, distribution-scope=public) Feb 23 03:24:04 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. 
Feb 23 03:24:04 localhost podman[84709]: 2026-02-23 08:24:04.456535174 +0000 UTC m=+0.046207104 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, managed_by=tripleo_ansible, container_name=ovn_controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.13, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, 
name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Feb 23 03:24:04 localhost podman[84709]: 2026-02-23 08:24:04.475722941 +0000 UTC m=+0.065394861 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, release=1766032510) Feb 23 03:24:04 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:24:04 localhost podman[84710]: 2026-02-23 08:24:04.514380175 +0000 UTC m=+0.103556480 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, tcib_managed=true) Feb 23 03:24:04 localhost podman[84710]: 2026-02-23 08:24:04.546765825 +0000 UTC m=+0.135942080 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, version=17.1.13, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:24:04 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:24:04 localhost systemd[1]: tmp-crun.L0NtL8.mount: Deactivated successfully. Feb 23 03:24:05 localhost sshd[84758]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:24:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:24:10 localhost podman[84760]: 2026-02-23 08:24:10.002937481 +0000 UTC m=+0.079015009 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, 
url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_id=tripleo_step1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true) Feb 23 03:24:10 localhost podman[84760]: 2026-02-23 08:24:10.220789727 +0000 UTC m=+0.296867255 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, io.buildah.version=1.41.5, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 
'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd) Feb 23 03:24:10 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:24:35 localhost systemd[1]: tmp-crun.e2FcOV.mount: Deactivated successfully. Feb 23 03:24:35 localhost podman[84860]: 2026-02-23 08:24:35.036802092 +0000 UTC m=+0.080738301 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, release=1766032510, distribution-scope=public, 
cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, version=17.1.13, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:24:35 localhost systemd[1]: tmp-crun.EbVbgs.mount: Deactivated successfully. 
Feb 23 03:24:35 localhost podman[84845]: 2026-02-23 08:24:35.050646376 +0000 UTC m=+0.095507474 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., container_name=iscsid, architecture=x86_64, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3) Feb 23 03:24:35 localhost podman[84860]: 2026-02-23 08:24:35.074765004 +0000 UTC m=+0.118701233 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:24:35 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:24:35 localhost podman[84833]: 2026-02-23 08:24:35.087559865 +0000 UTC m=+0.154392545 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20260112.1, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, version=17.1.13, container_name=ovn_controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Feb 23 03:24:35 localhost podman[84833]: 2026-02-23 08:24:35.104592867 +0000 UTC m=+0.171425547 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, release=1766032510, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, version=17.1.13, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team) Feb 23 03:24:35 localhost podman[84852]: 2026-02-23 08:24:35.129622752 +0000 UTC m=+0.183319400 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.13, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, container_name=nova_compute, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1) Feb 23 03:24:35 localhost podman[84873]: 2026-02-23 08:24:35.141014272 +0000 UTC m=+0.179429773 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, 
managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 23 03:24:35 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:24:35 localhost podman[84852]: 2026-02-23 08:24:35.18182033 +0000 UTC m=+0.235517008 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, architecture=x86_64, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, release=1766032510, distribution-scope=public, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team) Feb 23 03:24:35 localhost systemd[1]: 
87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:24:35 localhost podman[84873]: 2026-02-23 08:24:35.192041473 +0000 UTC m=+0.230457004 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:24:35 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:24:35 localhost podman[84869]: 2026-02-23 08:24:35.187249916 +0000 UTC m=+0.226962476 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, version=17.1.13, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T23:07:47Z, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., vcs-type=git) Feb 23 03:24:35 localhost podman[84840]: 2026-02-23 08:24:35.248022186 +0000 UTC m=+0.307218122 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, architecture=x86_64) Feb 23 03:24:35 localhost podman[84832]: 2026-02-23 08:24:35.231588533 +0000 UTC m=+0.304468178 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com) Feb 23 03:24:35 localhost podman[84840]: 2026-02-23 08:24:35.286756292 +0000 UTC m=+0.345952278 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, batch=17.1_20260112.1, 
org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, release=1766032510, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., vcs-type=git) Feb 23 03:24:35 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:24:35 localhost podman[84845]: 2026-02-23 08:24:35.314600683 +0000 UTC m=+0.359461791 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com) Feb 23 03:24:35 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:24:35 localhost podman[84834]: 2026-02-23 08:24:35.28379167 +0000 UTC m=+0.347294368 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 23 03:24:35 localhost podman[84832]: 2026-02-23 08:24:35.364916393 +0000 UTC m=+0.437796048 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, release=1766032510, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, batch=17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd) Feb 23 03:24:35 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:24:35 localhost podman[84869]: 2026-02-23 08:24:35.417264105 +0000 UTC m=+0.456976685 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, version=17.1.13, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4) Feb 23 03:24:35 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:24:35 localhost podman[84834]: 2026-02-23 08:24:35.649827311 +0000 UTC m=+0.713330039 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.5, distribution-scope=public, vcs-type=git, batch=17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, architecture=x86_64) Feb 23 03:24:35 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:24:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:24:41 localhost podman[85034]: 2026-02-23 08:24:41.004779538 +0000 UTC m=+0.081975900 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, version=17.1.13, build-date=2026-01-12T22:10:14Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=) Feb 23 03:24:41 localhost podman[85034]: 2026-02-23 08:24:41.205978326 +0000 UTC m=+0.283174618 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., version=17.1.13, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:24:41 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:24:52 localhost sshd[85064]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:25:06 localhost podman[85143]: 2026-02-23 08:25:06.024978077 +0000 UTC m=+0.094647998 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2026-01-12T22:36:40Z, version=17.1.13, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 23 03:25:06 localhost podman[85148]: 2026-02-23 08:25:06.035950793 +0000 UTC m=+0.091068138 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, tcib_managed=true, config_id=tripleo_step5, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, release=1766032510, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, architecture=x86_64, container_name=nova_compute, distribution-scope=public) Feb 23 03:25:06 localhost podman[85143]: 2026-02-23 08:25:06.045638869 +0000 UTC m=+0.115308820 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.41.5, managed_by=tripleo_ansible, container_name=ovn_controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z) Feb 23 03:25:06 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:25:06 localhost systemd[1]: tmp-crun.GrU9NT.mount: Deactivated successfully. Feb 23 03:25:06 localhost podman[85160]: 2026-02-23 08:25:06.083026364 +0000 UTC m=+0.142086050 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, version=17.1.13, batch=17.1_20260112.1) Feb 23 03:25:06 localhost podman[85148]: 2026-02-23 08:25:06.085716986 +0000 UTC m=+0.140834321 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20260112.1, io.openshift.expose-services=, container_name=nova_compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step5, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 23 03:25:06 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:25:06 localhost podman[85145]: 2026-02-23 08:25:06.09661815 +0000 UTC m=+0.163452434 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, vcs-type=git, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 23 03:25:06 localhost podman[85160]: 2026-02-23 08:25:06.118620802 +0000 UTC m=+0.177680468 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, version=17.1.13, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=) Feb 23 03:25:06 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:25:06 localhost podman[85142]: 2026-02-23 08:25:06.13649367 +0000 UTC m=+0.206567383 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, 
config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=collectd, vcs-type=git, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:25:06 localhost podman[85145]: 2026-02-23 08:25:06.136876601 +0000 UTC m=+0.203710935 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Feb 23 03:25:06 localhost 
podman[85142]: 2026-02-23 08:25:06.147704823 +0000 UTC m=+0.217778556 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, vcs-type=git, container_name=collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, architecture=x86_64, distribution-scope=public, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:25:06 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:25:06 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:25:06 localhost podman[85171]: 2026-02-23 08:25:06.238625515 +0000 UTC m=+0.290374987 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1766032510, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, io.buildah.version=1.41.5) Feb 23 03:25:06 localhost podman[85172]: 2026-02-23 08:25:06.24110243 +0000 UTC m=+0.293484742 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:25:06 localhost podman[85171]: 2026-02-23 08:25:06.290669158 +0000 UTC m=+0.342418620 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.buildah.version=1.41.5, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.13, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:25:06 localhost podman[85144]: 2026-02-23 08:25:06.296416783 +0000 UTC m=+0.363447783 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, container_name=nova_migration_target, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, build-date=2026-01-12T23:32:04Z, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510) Feb 23 03:25:06 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:25:06 localhost podman[85172]: 2026-02-23 08:25:06.341008478 +0000 UTC m=+0.393390790 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:25:06 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:25:06 localhost podman[85146]: 2026-02-23 08:25:06.351908782 +0000 UTC m=+0.412938398 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-iscsid-container, version=17.1.13, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, container_name=iscsid) Feb 23 03:25:06 localhost podman[85146]: 2026-02-23 08:25:06.387864052 +0000 UTC m=+0.448893618 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc.) Feb 23 03:25:06 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:25:06 localhost podman[85144]: 2026-02-23 08:25:06.652228572 +0000 UTC m=+0.719259602 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, release=1766032510, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, version=17.1.13, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:25:06 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:25:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:25:12 localhost podman[85352]: 2026-02-23 08:25:12.00136484 +0000 UTC m=+0.075959535 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, release=1766032510, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:25:12 localhost podman[85352]: 2026-02-23 08:25:12.204877298 +0000 UTC m=+0.279471963 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step1, url=https://www.redhat.com, version=17.1.13, batch=17.1_20260112.1) Feb 23 03:25:12 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:25:35 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:25:35 localhost recover_tripleo_nova_virtqemud[85427]: 62457 Feb 23 03:25:35 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:25:35 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. 
Feb 23 03:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:25:37 localhost podman[85465]: 2026-02-23 08:25:37.031971423 +0000 UTC m=+0.087487698 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, release=1766032510) Feb 23 03:25:37 localhost podman[85431]: 2026-02-23 08:25:37.070498912 +0000 UTC m=+0.139587502 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.13, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-type=git, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:25:37 localhost systemd[1]: tmp-crun.Uk6ImW.mount: Deactivated successfully. 
Feb 23 03:25:37 localhost podman[85429]: 2026-02-23 08:25:37.085229213 +0000 UTC m=+0.157933094 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git) Feb 23 03:25:37 localhost podman[85435]: 2026-02-23 08:25:37.122867025 +0000 UTC m=+0.186801238 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, summary=Red Hat 
OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:25:37 localhost podman[85435]: 2026-02-23 08:25:37.160708053 +0000 UTC m=+0.224642246 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.13, tcib_managed=true, 
build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:25:37 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:25:37 localhost podman[85451]: 2026-02-23 08:25:37.235144451 +0000 UTC m=+0.291820941 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:25:37 localhost podman[85429]: 2026-02-23 08:25:37.250837761 +0000 UTC m=+0.323541702 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, url=https://www.redhat.com, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, 
vendor=Red Hat, Inc., config_id=tripleo_step3, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5) Feb 23 03:25:37 localhost systemd[1]: 
1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:25:37 localhost podman[85451]: 2026-02-23 08:25:37.269961766 +0000 UTC m=+0.326638176 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:25:37 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:25:37 localhost podman[85432]: 2026-02-23 08:25:37.287240295 +0000 UTC m=+0.358106319 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, container_name=ovn_metadata_agent, distribution-scope=public, io.buildah.version=1.41.5, architecture=x86_64, version=17.1.13, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:25:37 localhost podman[85430]: 2026-02-23 08:25:37.386117241 +0000 UTC m=+0.457258424 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, distribution-scope=public, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, release=1766032510, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1) Feb 23 03:25:37 localhost podman[85431]: 2026-02-23 08:25:37.440837155 +0000 UTC m=+0.509925815 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, 
url=https://www.redhat.com, batch=17.1_20260112.1) Feb 23 03:25:37 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:25:37 localhost podman[85465]: 2026-02-23 08:25:37.456415412 +0000 UTC m=+0.511931747 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.13, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:25:37 localhost podman[85430]: 2026-02-23 08:25:37.467691557 +0000 UTC m=+0.538832750 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, architecture=x86_64, version=17.1.13, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack 
TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:25:37 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:25:37 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:25:37 localhost podman[85433]: 2026-02-23 08:25:37.444675293 +0000 UTC m=+0.506482660 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red 
Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, batch=17.1_20260112.1, vcs-type=git, architecture=x86_64) Feb 23 03:25:37 localhost podman[85434]: 2026-02-23 08:25:37.528585221 +0000 UTC m=+0.596751913 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1766032510, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, architecture=x86_64) Feb 23 03:25:37 localhost 
podman[85432]: 2026-02-23 08:25:37.558540097 +0000 UTC m=+0.629406131 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.expose-services=) Feb 23 03:25:37 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:25:37 localhost podman[85433]: 2026-02-23 08:25:37.579253371 +0000 UTC m=+0.641060708 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, config_id=tripleo_step3, vendor=Red Hat, Inc., io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Feb 23 03:25:37 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:25:37 localhost podman[85434]: 2026-02-23 08:25:37.610529988 +0000 UTC m=+0.678696680 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, managed_by=tripleo_ansible, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, 
com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.13) Feb 23 03:25:37 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:25:41 localhost sshd[85633]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:25:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:25:43 localhost podman[85635]: 2026-02-23 08:25:42.999335032 +0000 UTC m=+0.080431533 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, 
architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, batch=17.1_20260112.1, vcs-type=git, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1766032510, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:25:43 localhost podman[85635]: 2026-02-23 08:25:43.178717051 +0000 UTC m=+0.259813482 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, config_id=tripleo_step1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, 
maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vcs-type=git, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Feb 23 03:25:43 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 03:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:26:08 localhost systemd[1]: tmp-crun.tsMI7T.mount: Deactivated successfully. 
Feb 23 03:26:08 localhost podman[85743]: 2026-02-23 08:26:08.051024022 +0000 UTC m=+0.107863271 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible) Feb 23 03:26:08 localhost podman[85771]: 2026-02-23 08:26:08.089013315 +0000 UTC m=+0.126717019 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, 
release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team) Feb 23 03:26:08 localhost podman[85761]: 2026-02-23 08:26:08.098706911 +0000 UTC m=+0.140492369 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 
nova-compute, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.openshift.expose-services=, container_name=nova_compute, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5) Feb 23 03:26:08 localhost podman[85741]: 2026-02-23 08:26:08.062820193 +0000 UTC m=+0.122444717 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, version=17.1.13, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, batch=17.1_20260112.1) Feb 23 03:26:08 localhost podman[85755]: 2026-02-23 08:26:08.124034857 +0000 UTC m=+0.167840238 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, version=17.1.13, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, distribution-scope=public, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid) Feb 23 03:26:08 localhost podman[85755]: 2026-02-23 08:26:08.131254908 +0000 UTC m=+0.175060279 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, config_id=tripleo_step3, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, io.buildah.version=1.41.5, tcib_managed=true) Feb 23 03:26:08 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:26:08 localhost podman[85741]: 2026-02-23 08:26:08.142965526 +0000 UTC m=+0.202590070 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.buildah.version=1.41.5, release=1766032510, architecture=x86_64, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 23 03:26:08 localhost podman[85771]: 2026-02-23 08:26:08.158993577 +0000 UTC m=+0.196697281 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, tcib_managed=true, build-date=2026-01-12T23:07:47Z, version=17.1.13, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, maintainer=OpenStack TripleO Team) Feb 23 03:26:08 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:26:08 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:26:08 localhost podman[85761]: 2026-02-23 08:26:08.17968779 +0000 UTC m=+0.221473248 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container) Feb 23 03:26:08 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:26:08 localhost podman[85742]: 2026-02-23 08:26:08.260698469 +0000 UTC m=+0.320107656 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.openshift.expose-services=, container_name=ovn_controller, io.buildah.version=1.41.5, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, 
name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Feb 23 03:26:08 localhost podman[85744]: 2026-02-23 08:26:08.304143349 +0000 UTC m=+0.353362815 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com) Feb 23 03:26:08 localhost podman[85744]: 2026-02-23 08:26:08.334720583 +0000 UTC m=+0.383940029 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1) Feb 23 03:26:08 localhost podman[85742]: 2026-02-23 08:26:08.356277763 +0000 UTC m=+0.415686960 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com) Feb 23 03:26:08 localhost podman[85743]: 2026-02-23 08:26:08.379932918 +0000 UTC m=+0.436772227 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, release=1766032510, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public) Feb 23 03:26:08 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:26:08 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:26:08 localhost podman[85767]: 2026-02-23 08:26:08.357512832 +0000 UTC m=+0.389459049 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, version=17.1.13, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, 
name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:26:08 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:26:08 localhost podman[85779]: 2026-02-23 08:26:08.387820958 +0000 UTC m=+0.417074523 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_id=tripleo_step4, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, vcs-type=git) Feb 23 03:26:08 localhost podman[85767]: 2026-02-23 08:26:08.502838148 +0000 UTC m=+0.534784355 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.13, vcs-type=git, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container) Feb 23 
03:26:08 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:26:08 localhost podman[85779]: 2026-02-23 08:26:08.530759172 +0000 UTC m=+0.560012667 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi) Feb 23 03:26:08 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:26:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:26:14 localhost podman[85943]: 2026-02-23 08:26:13.999968238 +0000 UTC m=+0.077780032 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, io.buildah.version=1.41.5, version=17.1.13, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step1) Feb 23 03:26:14 localhost podman[85943]: 2026-02-23 08:26:14.157668713 +0000 UTC m=+0.235480487 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., 
cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.5, architecture=x86_64, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z) Feb 23 03:26:14 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:26:29 localhost sshd[86017]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:26:39 localhost podman[86040]: 2026-02-23 08:26:39.059051934 +0000 UTC m=+0.119465447 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=nova_compute, io.openshift.expose-services=, config_id=tripleo_step5, release=1766032510, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:26:39 localhost podman[86040]: 2026-02-23 08:26:39.077757337 +0000 UTC m=+0.138170850 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, vcs-type=git, version=17.1.13, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step5, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, architecture=x86_64) Feb 23 03:26:39 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:26:39 localhost podman[86051]: 2026-02-23 08:26:39.114071008 +0000 UTC m=+0.167433855 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, 
description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:26:39 localhost podman[86051]: 2026-02-23 08:26:39.127625792 +0000 UTC m=+0.180988639 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, version=17.1.13, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, 
distribution-scope=public, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, build-date=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z) Feb 23 03:26:39 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:26:39 localhost podman[86028]: 2026-02-23 08:26:39.164247243 +0000 UTC m=+0.226292865 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, release=1766032510, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, version=17.1.13, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:26:39 localhost podman[86028]: 2026-02-23 08:26:39.169066651 +0000 UTC m=+0.231112243 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.13, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, container_name=iscsid, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 23 03:26:39 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:26:39 localhost podman[86021]: 2026-02-23 08:26:39.210140508 +0000 UTC m=+0.277739140 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20260112.1, io.buildah.version=1.41.5, architecture=x86_64) Feb 23 03:26:39 localhost podman[86020]: 2026-02-23 08:26:39.222489755 +0000 UTC m=+0.291888853 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, release=1766032510, com.redhat.component=openstack-ovn-controller-container, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, tcib_managed=true, config_id=tripleo_step4, container_name=ovn_controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5) Feb 23 03:26:39 localhost podman[86041]: 2026-02-23 08:26:39.032104869 +0000 UTC m=+0.089693406 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, release=1766032510, io.openshift.expose-services=) Feb 23 03:26:39 localhost podman[86041]: 2026-02-23 08:26:39.260648834 +0000 UTC m=+0.318237401 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, build-date=2026-01-12T22:10:15Z, 
managed_by=tripleo_ansible, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, release=1766032510, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:26:39 localhost podman[86058]: 2026-02-23 08:26:39.266507943 +0000 UTC m=+0.320035535 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.5, batch=17.1_20260112.1, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc.) Feb 23 03:26:39 localhost podman[86020]: 2026-02-23 08:26:39.26934986 +0000 UTC m=+0.338748968 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, 
container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5) Feb 23 03:26:39 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:26:39 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:26:39 localhost podman[86058]: 2026-02-23 08:26:39.313336246 +0000 UTC m=+0.366863848 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, container_name=ceilometer_agent_ipmi, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, build-date=2026-01-12T23:07:30Z) Feb 23 03:26:39 localhost podman[86019]: 2026-02-23 08:26:39.320452243 +0000 UTC m=+0.390594673 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, container_name=collectd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=1766032510, io.buildah.version=1.41.5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:26:39 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:26:39 localhost podman[86019]: 2026-02-23 08:26:39.330736688 +0000 UTC m=+0.400879108 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, version=17.1.13, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., config_id=tripleo_step3, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5) Feb 23 03:26:39 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:26:39 localhost podman[86022]: 2026-02-23 08:26:39.361439378 +0000 UTC m=+0.418312122 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, release=1766032510, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 23 03:26:39 localhost podman[86022]: 2026-02-23 08:26:39.397851632 +0000 UTC m=+0.454724436 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vcs-type=git, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team) Feb 23 03:26:39 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:26:39 localhost podman[86021]: 2026-02-23 08:26:39.586901796 +0000 UTC m=+0.654500498 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, 
managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Feb 23 03:26:39 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:26:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:26:45 localhost podman[86224]: 2026-02-23 08:26:45.00311346 +0000 UTC m=+0.076916956 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, tcib_managed=true, version=17.1.13, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc.) 
Feb 23 03:26:45 localhost podman[86224]: 2026-02-23 08:26:45.196103585 +0000 UTC m=+0.269907081 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, architecture=x86_64, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, vendor=Red Hat, Inc.) Feb 23 03:26:45 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:26:56 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:26:56 localhost recover_tripleo_nova_virtqemud[86269]: 62457 Feb 23 03:26:56 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:26:56 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. 
Feb 23 03:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:27:10 localhost podman[86352]: 2026-02-23 08:27:10.047929308 +0000 UTC m=+0.096056171 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:27:10 localhost podman[86352]: 2026-02-23 08:27:10.055476819 +0000 UTC m=+0.103603712 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1766032510, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5) Feb 23 03:27:10 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:27:10 localhost systemd[1]: tmp-crun.TSiUrN.mount: Deactivated successfully. Feb 23 03:27:10 localhost podman[86369]: 2026-02-23 08:27:10.096884965 +0000 UTC m=+0.139358675 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:27:10 localhost systemd[1]: tmp-crun.ZIML4U.mount: Deactivated successfully. Feb 23 03:27:10 localhost podman[86351]: 2026-02-23 08:27:10.135892099 +0000 UTC m=+0.191985616 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step5, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, vcs-type=git, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, release=1766032510, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:27:10 localhost podman[86334]: 2026-02-23 08:27:10.138684174 +0000 UTC m=+0.197675309 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, container_name=ovn_metadata_agent, tcib_managed=true, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 23 03:27:10 localhost podman[86369]: 2026-02-23 08:27:10.147740152 +0000 UTC m=+0.190213902 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, tcib_managed=true, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z) Feb 23 03:27:10 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:27:10 localhost podman[86333]: 2026-02-23 08:27:10.182054672 +0000 UTC m=+0.247967019 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, io.buildah.version=1.41.5, version=17.1.13, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, distribution-scope=public) Feb 23 03:27:10 localhost podman[86334]: 2026-02-23 08:27:10.19178926 +0000 UTC m=+0.250780395 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 23 03:27:10 localhost systemd[1]: 
71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:27:10 localhost podman[86335]: 2026-02-23 08:27:10.256284843 +0000 UTC m=+0.314768983 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, container_name=iscsid) Feb 23 03:27:10 localhost podman[86335]: 2026-02-23 08:27:10.264856365 +0000 UTC m=+0.323340505 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, container_name=iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:27:10 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:27:10 localhost podman[86331]: 2026-02-23 08:27:10.230756152 +0000 UTC m=+0.297508705 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, container_name=collectd, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Feb 23 03:27:10 localhost podman[86351]: 2026-02-23 08:27:10.285723574 +0000 UTC m=+0.341817091 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_id=tripleo_step5, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, 
distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 23 03:27:10 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:27:10 localhost podman[86366]: 2026-02-23 08:27:10.352617982 +0000 UTC m=+0.390932955 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com) Feb 23 03:27:10 localhost podman[86331]: 2026-02-23 08:27:10.365382232 +0000 UTC m=+0.432134835 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 
'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, config_id=tripleo_step3, distribution-scope=public) Feb 23 03:27:10 localhost systemd[1]: 
1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:27:10 localhost podman[86366]: 2026-02-23 08:27:10.379779932 +0000 UTC m=+0.418094905 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, release=1766032510, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-type=git, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc.) Feb 23 03:27:10 localhost podman[86332]: 2026-02-23 08:27:10.336197499 +0000 UTC m=+0.402384705 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1766032510, tcib_managed=true, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container) Feb 23 03:27:10 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:27:10 localhost podman[86332]: 2026-02-23 08:27:10.418943561 +0000 UTC m=+0.485130797 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:27:10 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:27:10 localhost podman[86333]: 2026-02-23 08:27:10.549116945 +0000 UTC m=+0.615029322 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, build-date=2026-01-12T23:32:04Z) Feb 23 03:27:10 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:27:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:27:16 localhost podman[86539]: 2026-02-23 08:27:16.003920957 +0000 UTC m=+0.080124153 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1766032510, architecture=x86_64, version=17.1.13, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z) Feb 23 03:27:16 localhost podman[86539]: 2026-02-23 08:27:16.215100689 +0000 UTC m=+0.291303825 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, config_id=tripleo_step1, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:27:16 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:27:18 localhost sshd[86568]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:27:41 localhost podman[86622]: 2026-02-23 08:27:41.051923941 +0000 UTC m=+0.096908396 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, release=1766032510, architecture=x86_64, distribution-scope=public, 
com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:27:41 localhost podman[86616]: 2026-02-23 08:27:41.079774183 +0000 UTC m=+0.145721770 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, container_name=ovn_controller, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:27:41 localhost podman[86615]: 2026-02-23 08:27:41.084822338 +0000 UTC m=+0.152143897 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_id=tripleo_step3, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, distribution-scope=public, container_name=collectd, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, architecture=x86_64, version=17.1.13, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510) Feb 23 03:27:41 localhost podman[86615]: 2026-02-23 08:27:41.120776778 +0000 UTC m=+0.188098327 
container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, container_name=collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.buildah.version=1.41.5, version=17.1.13) Feb 23 03:27:41 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:27:41 localhost podman[86617]: 2026-02-23 08:27:41.132095094 +0000 UTC m=+0.195059260 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, 
container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5) Feb 23 03:27:41 localhost podman[86622]: 2026-02-23 08:27:41.159905905 +0000 UTC m=+0.204890360 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, container_name=iscsid, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public) Feb 23 03:27:41 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:27:41 localhost podman[86616]: 2026-02-23 08:27:41.174650786 +0000 UTC m=+0.240598363 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1766032510, architecture=x86_64, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true) Feb 23 03:27:41 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:27:41 localhost podman[86643]: 2026-02-23 08:27:41.161967379 +0000 UTC m=+0.199197128 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1766032510, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, url=https://www.redhat.com, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container) Feb 23 03:27:41 localhost podman[86618]: 2026-02-23 08:27:41.223581893 +0000 UTC m=+0.283329531 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, release=1766032510, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:27:41 localhost podman[86618]: 2026-02-23 08:27:41.306604584 +0000 UTC m=+0.366352252 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, 
architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, distribution-scope=public, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5) Feb 23 03:27:41 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:27:41 localhost podman[86638]: 2026-02-23 08:27:41.279548056 +0000 UTC m=+0.319979283 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.5, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron) Feb 23 03:27:41 localhost podman[86630]: 2026-02-23 08:27:41.296194515 +0000 UTC m=+0.350410364 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, io.buildah.version=1.41.5, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_id=tripleo_step5, managed_by=tripleo_ansible) Feb 23 03:27:41 localhost 
podman[86643]: 2026-02-23 08:27:41.34730915 +0000 UTC m=+0.384538919 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, version=17.1.13, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc.) Feb 23 03:27:41 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:27:41 localhost podman[86638]: 2026-02-23 08:27:41.361821504 +0000 UTC m=+0.402252671 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=logrotate_crond, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Feb 23 03:27:41 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:27:41 localhost podman[86650]: 2026-02-23 08:27:41.313277089 +0000 UTC m=+0.352367834 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, release=1766032510, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_ipmi, version=17.1.13) Feb 23 03:27:41 localhost podman[86630]: 2026-02-23 08:27:41.425531654 +0000 UTC m=+0.479747473 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.buildah.version=1.41.5, container_name=nova_compute, batch=17.1_20260112.1, url=https://www.redhat.com, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:27:41 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:27:41 localhost podman[86650]: 2026-02-23 08:27:41.446916528 +0000 UTC m=+0.486007293 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, 
io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:27:41 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:27:41 localhost podman[86617]: 2026-02-23 08:27:41.539172871 +0000 UTC m=+0.602137027 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, tcib_managed=true, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, io.buildah.version=1.41.5, architecture=x86_64, managed_by=tripleo_ansible, container_name=nova_migration_target, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, release=1766032510, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:27:41 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:27:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:27:47 localhost podman[86817]: 2026-02-23 08:27:47.006951 +0000 UTC m=+0.083765994 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, version=17.1.13, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true) Feb 23 03:27:47 localhost podman[86817]: 2026-02-23 08:27:47.205015672 +0000 UTC m=+0.281830646 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., 
tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, 
io.buildah.version=1.41.5, release=1766032510, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:27:47 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:28:08 localhost sshd[86922]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:28:12 localhost podman[86926]: 2026-02-23 08:28:12.037503894 +0000 UTC m=+0.093853324 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, 
distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.buildah.version=1.41.5, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510) Feb 23 03:28:12 localhost systemd[1]: tmp-crun.afoNYz.mount: Deactivated successfully. Feb 23 03:28:12 localhost podman[86925]: 2026-02-23 08:28:12.096606612 +0000 UTC m=+0.158475720 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, 
distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, managed_by=tripleo_ansible, version=17.1.13, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 23 03:28:12 localhost podman[86951]: 2026-02-23 08:28:12.053094961 +0000 UTC m=+0.090059537 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, batch=17.1_20260112.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64) Feb 23 03:28:12 localhost podman[86945]: 2026-02-23 08:28:12.09981428 +0000 UTC m=+0.142427769 container health_status 
8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, vcs-type=git, batch=17.1_20260112.1, release=1766032510, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., container_name=logrotate_crond, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
io.openshift.expose-services=, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4) Feb 23 03:28:12 localhost podman[86952]: 2026-02-23 08:28:12.160422815 +0000 UTC m=+0.192159051 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 23 03:28:12 localhost podman[86945]: 2026-02-23 08:28:12.184800852 +0000 UTC m=+0.227414311 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, tcib_managed=true, name=rhosp-rhel9/openstack-cron, version=17.1.13, batch=17.1_20260112.1, 
release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Feb 23 03:28:12 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:28:12 localhost podman[86924]: 2026-02-23 08:28:12.226812577 +0000 UTC m=+0.288504949 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, release=1766032510, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.buildah.version=1.41.5, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team) Feb 23 03:28:12 localhost podman[86924]: 2026-02-23 08:28:12.236700299 +0000 UTC m=+0.298392651 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.13, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, container_name=collectd, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, url=https://www.redhat.com) Feb 23 03:28:12 localhost podman[86938]: 2026-02-23 08:28:12.25272873 +0000 UTC m=+0.294137542 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, 
health_status=healthy, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, io.buildah.version=1.41.5, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container) Feb 23 03:28:12 localhost podman[86952]: 2026-02-23 08:28:12.255751773 +0000 UTC m=+0.287488029 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 23 03:28:12 localhost podman[86938]: 2026-02-23 08:28:12.262760127 +0000 UTC m=+0.304168959 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20260112.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, container_name=iscsid, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z) Feb 23 03:28:12 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:28:12 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:28:12 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:28:12 localhost podman[86943]: 2026-02-23 08:28:12.302030528 +0000 UTC m=+0.335763895 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.13, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, tcib_managed=true, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 23 03:28:12 localhost podman[86925]: 2026-02-23 08:28:12.338697771 +0000 UTC m=+0.400566819 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, io.openshift.expose-services=, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 23 03:28:12 localhost podman[86951]: 2026-02-23 08:28:12.33899292 +0000 UTC m=+0.375957506 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 
17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Feb 23 03:28:12 localhost podman[86943]: 2026-02-23 08:28:12.346678295 +0000 UTC m=+0.380411632 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.5, config_id=tripleo_step5, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, release=1766032510) Feb 23 03:28:12 localhost systemd[1]: 
a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:28:12 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:28:12 localhost podman[86926]: 2026-02-23 08:28:12.374819766 +0000 UTC m=+0.431169126 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, tcib_managed=true, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., container_name=nova_migration_target, 
io.buildah.version=1.41.5, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64) Feb 23 03:28:12 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:28:12 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:28:12 localhost podman[86930]: 2026-02-23 08:28:12.138916657 +0000 UTC m=+0.188780197 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.13, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:28:12 localhost podman[86930]: 2026-02-23 08:28:12.47268041 +0000 UTC m=+0.522543980 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, 
version=17.1.13, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 23 03:28:12 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:28:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:28:18 localhost podman[87123]: 2026-02-23 08:28:18.009401551 +0000 UTC m=+0.084088074 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr) Feb 23 03:28:18 localhost podman[87123]: 2026-02-23 08:28:18.193789683 +0000 UTC m=+0.268476136 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, release=1766032510, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 
qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, vcs-type=git) Feb 23 03:28:18 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 03:28:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:28:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:28:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:28:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:28:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:28:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:28:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:28:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:28:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:28:43 localhost podman[87225]: 2026-02-23 08:28:43.072087794 +0000 UTC m=+0.117212088 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container) Feb 23 03:28:43 localhost podman[87200]: 2026-02-23 08:28:43.047895155 +0000 UTC m=+0.112325899 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., vcs-type=git, release=1766032510, build-date=2026-01-12T23:32:04Z, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute) Feb 23 03:28:43 localhost podman[87225]: 2026-02-23 08:28:43.129819731 +0000 UTC m=+0.174944055 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, 
io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:28:43 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:28:43 localhost podman[87198]: 2026-02-23 08:28:43.146928995 +0000 UTC m=+0.216757504 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, build-date=2026-01-12T22:10:15Z, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, managed_by=tripleo_ansible) Feb 23 03:28:43 localhost podman[87198]: 2026-02-23 08:28:43.154782735 +0000 UTC m=+0.224611254 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack 
TripleO Team, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, build-date=2026-01-12T22:10:15Z, release=1766032510, io.buildah.version=1.41.5, config_id=tripleo_step3, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack 
Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:28:43 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:28:43 localhost podman[87231]: 2026-02-23 08:28:43.189671953 +0000 UTC m=+0.231358921 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, 
batch=17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:28:43 localhost podman[87231]: 2026-02-23 08:28:43.213211903 +0000 UTC m=+0.254898881 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:28:43 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:28:43 localhost podman[87199]: 2026-02-23 08:28:43.257936872 +0000 UTC m=+0.324436339 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 
17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 23 03:28:43 localhost podman[87201]: 2026-02-23 08:28:43.301880486 +0000 UTC m=+0.357610394 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.5) Feb 23 03:28:43 localhost podman[87199]: 2026-02-23 08:28:43.311863972 +0000 UTC m=+0.378363439 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, vcs-type=git, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 23 03:28:43 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:28:43 localhost podman[87212]: 2026-02-23 08:28:43.354876348 +0000 UTC m=+0.408540863 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.13, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.buildah.version=1.41.5, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:28:43 localhost podman[87212]: 2026-02-23 08:28:43.381708359 +0000 UTC m=+0.435372864 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
io.openshift.expose-services=, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Feb 23 03:28:43 localhost podman[87201]: 2026-02-23 08:28:43.383730601 +0000 UTC m=+0.439460459 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1766032510, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:28:43 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:28:43 localhost podman[87207]: 2026-02-23 08:28:43.114819883 +0000 UTC m=+0.162021850 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, tcib_managed=true, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, com.redhat.component=openstack-iscsid-container) Feb 23 03:28:43 localhost podman[87200]: 2026-02-23 08:28:43.4297596 +0000 UTC m=+0.494190334 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, tcib_managed=true, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4) Feb 23 03:28:43 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. 
Feb 23 03:28:43 localhost podman[87207]: 2026-02-23 08:28:43.456989063 +0000 UTC m=+0.504191080 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, version=17.1.13, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team) Feb 23 03:28:43 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:28:43 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:28:43 localhost podman[87219]: 2026-02-23 08:28:43.501864216 +0000 UTC m=+0.546947039 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:28:43 localhost podman[87219]: 2026-02-23 08:28:43.511755959 +0000 UTC m=+0.556838812 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1766032510, tcib_managed=true, batch=17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
version=17.1.13, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, container_name=logrotate_crond, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git) Feb 23 03:28:43 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:28:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:28:48 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:28:48 localhost recover_tripleo_nova_virtqemud[87408]: 62457 Feb 23 03:28:48 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:28:48 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:28:49 localhost podman[87406]: 2026-02-23 08:28:49.005333318 +0000 UTC m=+0.084513357 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, container_name=metrics_qdr) Feb 23 03:28:49 localhost podman[87406]: 2026-02-23 08:28:49.195560839 +0000 UTC m=+0.274740878 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, 
name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., container_name=metrics_qdr) Feb 23 03:28:49 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:28:56 localhost sshd[87438]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:29:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 03:29:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 5036 writes, 22K keys, 5036 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5036 writes, 634 syncs, 7.94 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 341 writes, 1119 keys, 341 commit groups, 1.0 writes per commit group, ingest: 1.27 MB, 0.00 MB/s#012Interval WAL: 341 writes, 144 syncs, 2.37 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 03:29:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 03:29:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 5650 writes, 24K keys, 5650 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5650 writes, 811 syncs, 6.97 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 727 writes, 2916 keys, 727 commit groups, 1.0 writes per commit group, ingest: 3.74 MB, 0.01 MB/s#012Interval WAL: 727 writes, 253 syncs, 2.87 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 03:29:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:29:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:29:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:29:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:29:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:29:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:29:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:29:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:29:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:29:14 localhost systemd[1]: tmp-crun.eEIBpS.mount: Deactivated successfully. 
Feb 23 03:29:14 localhost podman[87536]: 2026-02-23 08:29:14.063383614 +0000 UTC m=+0.107743908 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_id=tripleo_step5, io.buildah.version=1.41.5, release=1766032510, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:29:14 localhost podman[87517]: 2026-02-23 08:29:14.042232387 +0000 UTC m=+0.105075037 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, 
konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:29:14 localhost systemd[1]: tmp-crun.i7ApS8.mount: Deactivated successfully. 
Feb 23 03:29:14 localhost podman[87536]: 2026-02-23 08:29:14.112671902 +0000 UTC m=+0.157032176 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute) Feb 23 03:29:14 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:29:14 localhost podman[87555]: 2026-02-23 08:29:14.11980043 +0000 UTC m=+0.143423620 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:29:14 localhost podman[87530]: 2026-02-23 08:29:14.149106627 +0000 UTC m=+0.195615937 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, 
io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 23 03:29:14 localhost podman[87518]: 2026-02-23 08:29:14.107123033 +0000 UTC m=+0.165016521 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.5, container_name=nova_migration_target, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, url=https://www.redhat.com, 
config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:29:14 localhost podman[87555]: 2026-02-23 08:29:14.20573038 +0000 UTC m=+0.229353580 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi) Feb 23 03:29:14 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:29:14 localhost podman[87517]: 2026-02-23 08:29:14.22961055 +0000 UTC m=+0.292453210 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:29:14 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:29:14 localhost podman[87549]: 2026-02-23 08:29:14.206775852 +0000 UTC m=+0.233743764 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:29:14 localhost podman[87519]: 2026-02-23 08:29:14.264520889 +0000 UTC m=+0.320714536 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.13, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=) Feb 23 03:29:14 localhost podman[87543]: 2026-02-23 08:29:14.305632967 +0000 UTC m=+0.341965306 container 
health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, tcib_managed=true, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat 
OpenStack Platform 17.1 cron, version=17.1.13, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:29:14 localhost podman[87530]: 2026-02-23 08:29:14.339029268 +0000 UTC m=+0.385538588 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, release=1766032510, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.5, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, config_id=tripleo_step3, tcib_managed=true) Feb 23 03:29:14 localhost podman[87516]: 2026-02-23 08:29:14.346789886 +0000 UTC m=+0.410181062 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.5, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, release=1766032510, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
io.openshift.expose-services=) Feb 23 03:29:14 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:29:14 localhost podman[87516]: 2026-02-23 08:29:14.361732964 +0000 UTC m=+0.425124160 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, release=1766032510, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd) Feb 23 03:29:14 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:29:14 localhost podman[87519]: 2026-02-23 08:29:14.386040197 +0000 UTC m=+0.442233844 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, vendor=Red Hat, Inc., release=1766032510, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent) Feb 23 03:29:14 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:29:14 localhost podman[87549]: 2026-02-23 08:29:14.437012057 +0000 UTC m=+0.463979969 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 23 03:29:14 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:29:14 localhost podman[87518]: 2026-02-23 08:29:14.451347116 +0000 UTC m=+0.509240604 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, 
release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public) Feb 23 03:29:14 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. 
Feb 23 03:29:14 localhost podman[87543]: 2026-02-23 08:29:14.491123093 +0000 UTC m=+0.527455482 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-cron, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, container_name=logrotate_crond, release=1766032510) Feb 23 03:29:14 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:29:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:29:19 localhost podman[87718]: 2026-02-23 08:29:19.983789914 +0000 UTC m=+0.062302937 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.13, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 23 03:29:20 localhost podman[87718]: 2026-02-23 08:29:20.184180047 +0000 UTC m=+0.262693120 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 
17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Feb 23 03:29:20 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:29:43 localhost sshd[87791]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:29:45 localhost systemd[1]: tmp-crun.gMnUVE.mount: Deactivated successfully. 
Feb 23 03:29:45 localhost podman[87793]: 2026-02-23 08:29:45.039618219 +0000 UTC m=+0.105762677 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Feb 23 03:29:45 localhost podman[87793]: 2026-02-23 08:29:45.045588043 +0000 UTC m=+0.111732501 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, architecture=x86_64, release=1766032510, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, container_name=collectd, distribution-scope=public) Feb 23 03:29:45 localhost podman[87829]: 2026-02-23 08:29:45.053927178 +0000 UTC m=+0.089880862 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, 
url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1766032510, architecture=x86_64) Feb 23 03:29:45 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:29:45 localhost podman[87829]: 2026-02-23 08:29:45.075680173 +0000 UTC m=+0.111633827 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, architecture=x86_64) Feb 23 03:29:45 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:29:45 localhost podman[87802]: 2026-02-23 08:29:45.087518206 +0000 UTC m=+0.140799561 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3) Feb 23 03:29:45 localhost podman[87809]: 2026-02-23 08:29:45.145986644 +0000 UTC m=+0.190218921 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, batch=17.1_20260112.1, vcs-type=git, container_name=nova_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, distribution-scope=public, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:29:45 
localhost podman[87809]: 2026-02-23 08:29:45.192399535 +0000 UTC m=+0.236631792 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, config_id=tripleo_step5, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 23 03:29:45 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:29:45 localhost podman[87802]: 2026-02-23 08:29:45.224308701 +0000 UTC m=+0.277590076 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, build-date=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.buildah.version=1.41.5, tcib_managed=true, version=17.1.13, release=1766032510, container_name=iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Feb 23 03:29:45 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:29:45 localhost podman[87794]: 2026-02-23 08:29:45.197295205 +0000 UTC m=+0.259224914 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, container_name=ovn_controller, release=1766032510, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public) Feb 23 03:29:45 localhost podman[87820]: 2026-02-23 08:29:45.212638234 +0000 UTC m=+0.250041492 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20260112.1, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, architecture=x86_64, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Feb 23 03:29:45 localhost podman[87826]: 2026-02-23 08:29:45.272857567 +0000 UTC m=+0.310603146 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.5, 
name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, release=1766032510, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
build-date=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:29:45 localhost podman[87794]: 2026-02-23 08:29:45.277926502 +0000 UTC m=+0.339856261 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 23 03:29:45 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:29:45 localhost podman[87820]: 2026-02-23 08:29:45.295702966 +0000 UTC m=+0.333106214 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, distribution-scope=public, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': 
{'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, version=17.1.13) Feb 23 03:29:45 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:29:45 localhost podman[87795]: 2026-02-23 08:29:45.229072757 +0000 UTC m=+0.287859130 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, build-date=2026-01-12T23:32:04Z, 
tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, io.buildah.version=1.41.5, release=1766032510, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:29:45 localhost podman[87796]: 2026-02-23 08:29:45.128598252 +0000 UTC m=+0.185293731 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, io.buildah.version=1.41.5, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20260112.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:29:45 localhost podman[87826]: 2026-02-23 08:29:45.347714217 +0000 UTC m=+0.385459846 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, 
konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.buildah.version=1.41.5, vcs-type=git) Feb 23 03:29:45 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:29:45 localhost podman[87796]: 2026-02-23 08:29:45.413855951 +0000 UTC m=+0.470551520 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1766032510, io.buildah.version=1.41.5, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:29:45 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:29:45 localhost podman[87795]: 2026-02-23 08:29:45.567822133 +0000 UTC m=+0.626608576 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, build-date=2026-01-12T23:32:04Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.13, container_name=nova_migration_target, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:29:45 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:29:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:29:50 localhost podman[87993]: 2026-02-23 08:29:50.998651603 +0000 UTC m=+0.074949005 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:29:51 localhost podman[87993]: 2026-02-23 08:29:51.185944554 +0000 UTC m=+0.262241926 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, io.buildah.version=1.41.5, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64) Feb 23 03:29:51 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:30:15 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Feb 23 03:30:15 localhost recover_tripleo_nova_virtqemud[88210]: 62457 Feb 23 03:30:15 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:30:15 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:30:16 localhost podman[88150]: 2026-02-23 08:30:16.032449914 +0000 UTC m=+0.093235134 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, architecture=x86_64, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1766032510, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:30:16 localhost podman[88156]: 2026-02-23 08:30:16.086398574 +0000 UTC m=+0.142831492 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, container_name=iscsid, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:30:16 localhost podman[88156]: 2026-02-23 08:30:16.093127381 +0000 UTC m=+0.149560289 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, architecture=x86_64, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible) Feb 23 03:30:16 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:30:16 localhost podman[88150]: 2026-02-23 08:30:16.108601154 +0000 UTC m=+0.169386364 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, version=17.1.13, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:30:16 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:30:16 localhost podman[88180]: 2026-02-23 08:30:16.154101186 +0000 UTC m=+0.197068231 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, 
io.buildah.version=1.41.5, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, version=17.1.13, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:30:16 localhost podman[88149]: 2026-02-23 08:30:16.198296258 +0000 UTC m=+0.258781919 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Feb 23 03:30:16 localhost podman[88180]: 2026-02-23 08:30:16.203831888 +0000 UTC m=+0.246798943 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, 
konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:30:16 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:30:16 localhost podman[88172]: 2026-02-23 08:30:16.295256446 +0000 UTC m=+0.332470255 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5) Feb 23 03:30:16 localhost podman[88172]: 2026-02-23 08:30:16.351221169 +0000 UTC m=+0.388434998 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Feb 23 03:30:16 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:30:16 localhost podman[88178]: 2026-02-23 08:30:16.304521979 +0000 UTC m=+0.351099374 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, distribution-scope=public, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team) Feb 23 03:30:16 localhost podman[88186]: 2026-02-23 08:30:16.262576566 +0000 UTC m=+0.300384633 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true) Feb 23 03:30:16 localhost podman[88178]: 2026-02-23 08:30:16.436836798 +0000 UTC m=+0.483414133 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1766032510, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc.) 
Feb 23 03:30:16 localhost podman[88186]: 2026-02-23 08:30:16.445988788 +0000 UTC m=+0.483796765 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.5, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible) Feb 23 03:30:16 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:30:16 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:30:16 localhost podman[88148]: 2026-02-23 08:30:16.355960173 +0000 UTC m=+0.419210319 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20260112.1, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z) Feb 23 03:30:16 localhost podman[88147]: 2026-02-23 08:30:16.488593352 +0000 UTC m=+0.552281751 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, release=1766032510, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 
collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, io.openshift.expose-services=, 
url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:30:16 localhost podman[88147]: 2026-02-23 08:30:16.503810578 +0000 UTC m=+0.567499057 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., container_name=collectd, io.openshift.expose-services=, architecture=x86_64, release=1766032510, tcib_managed=true, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:30:16 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:30:16 localhost podman[88148]: 2026-02-23 08:30:16.540305534 +0000 UTC m=+0.603555640 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, version=17.1.13, batch=17.1_20260112.1, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 23 03:30:16 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:30:16 localhost podman[88149]: 2026-02-23 08:30:16.601711903 +0000 UTC m=+0.662197494 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:30:16 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:30:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:30:22 localhost podman[88346]: 2026-02-23 08:30:22.025960601 +0000 UTC m=+0.100046972 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, 
io.buildah.version=1.41.5, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:30:22 localhost podman[88346]: 2026-02-23 08:30:22.216951265 +0000 UTC m=+0.291037626 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1766032510, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64) Feb 23 03:30:22 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:30:33 localhost sshd[88421]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:30:47 localhost podman[88423]: 2026-02-23 08:30:47.038324087 +0000 UTC m=+0.102848808 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Feb 23 03:30:47 localhost podman[88423]: 2026-02-23 08:30:47.047735505 +0000 UTC m=+0.112260226 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:30:47 localhost systemd[1]: tmp-crun.KKhGWi.mount: Deactivated successfully. Feb 23 03:30:47 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:30:47 localhost podman[88456]: 2026-02-23 08:30:47.059731613 +0000 UTC m=+0.092823543 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, release=1766032510, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com) Feb 23 03:30:47 localhost podman[88456]: 2026-02-23 08:30:47.082667485 +0000 UTC m=+0.115759415 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, 
url=https://www.redhat.com, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:30:47 localhost podman[88444]: 2026-02-23 08:30:47.092602298 +0000 UTC m=+0.134624561 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z) Feb 23 03:30:47 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:30:47 localhost podman[88438]: 2026-02-23 08:30:47.132701916 +0000 UTC m=+0.186145008 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, vcs-type=git, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, release=1766032510, container_name=iscsid, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.5, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com) Feb 23 03:30:47 localhost podman[88438]: 2026-02-23 08:30:47.167969604 +0000 UTC m=+0.221412686 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, release=1766032510, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, container_name=iscsid, architecture=x86_64, version=17.1.13, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, 
managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 23 03:30:47 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:30:47 localhost podman[88444]: 2026-02-23 08:30:47.191873586 +0000 UTC m=+0.233895839 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.5, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 23 03:30:47 localhost podman[88455]: 2026-02-23 08:30:47.161363693 +0000 UTC m=+0.196687510 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=, version=17.1.13) Feb 23 03:30:47 localhost systemd[1]: 
87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:30:47 localhost podman[88426]: 2026-02-23 08:30:47.222664249 +0000 UTC m=+0.276566285 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:30:47 localhost podman[88425]: 2026-02-23 08:30:47.247290481 +0000 UTC m=+0.304904750 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 
03:30:47 localhost podman[88455]: 2026-02-23 08:30:47.247733636 +0000 UTC m=+0.283057453 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., version=17.1.13, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.5, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:30:47 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:30:47 localhost podman[88426]: 2026-02-23 08:30:47.273932087 +0000 UTC m=+0.327834123 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, version=17.1.13, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:30:47 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:30:47 localhost podman[88424]: 2026-02-23 08:30:47.234242433 +0000 UTC m=+0.296263717 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., container_name=ovn_controller, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13) Feb 23 03:30:47 localhost podman[88463]: 2026-02-23 08:30:47.300330205 +0000 UTC m=+0.330741912 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.5, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, distribution-scope=public, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Feb 23 03:30:47 localhost podman[88463]: 2026-02-23 08:30:47.325040071 +0000 UTC m=+0.355451848 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.5, release=1766032510, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1) Feb 23 03:30:47 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:30:47 localhost podman[88424]: 2026-02-23 08:30:47.368838762 +0000 UTC m=+0.430860096 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20260112.1, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true) Feb 23 03:30:47 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:30:47 localhost podman[88425]: 2026-02-23 08:30:47.582967474 +0000 UTC m=+0.640581823 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 23 03:30:47 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:30:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:30:53 localhost systemd[1]: tmp-crun.83gGec.mount: Deactivated successfully. 
Feb 23 03:30:53 localhost podman[88632]: 2026-02-23 08:30:53.009701587 +0000 UTC m=+0.084569330 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, build-date=2026-01-12T22:10:14Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, architecture=x86_64, 
url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510) Feb 23 03:30:53 localhost podman[88632]: 2026-02-23 08:30:53.205104747 +0000 UTC m=+0.279972490 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, release=1766032510) Feb 23 03:30:53 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:31:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:31:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:31:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:31:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:31:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:31:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:31:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:31:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:31:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:31:18 localhost systemd[1]: tmp-crun.MKQwZH.mount: Deactivated successfully. Feb 23 03:31:18 localhost podman[88742]: 2026-02-23 08:31:18.043720685 +0000 UTC m=+0.103252220 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, container_name=ovn_metadata_agent, version=17.1.13, release=1766032510, batch=17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.5, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, architecture=x86_64, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:31:18 localhost systemd[1]: tmp-crun.dPJLGI.mount: Deactivated successfully. Feb 23 03:31:18 localhost podman[88771]: 2026-02-23 08:31:18.061410346 +0000 UTC m=+0.103390495 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, release=1766032510, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:31:18 localhost podman[88771]: 2026-02-23 08:31:18.0857132 +0000 UTC m=+0.127693389 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc.) Feb 23 03:31:18 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:31:18 localhost podman[88740]: 2026-02-23 08:31:18.089759694 +0000 UTC m=+0.157595083 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.13, build-date=2026-01-12T22:36:40Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:31:18 localhost podman[88741]: 2026-02-23 08:31:18.189794645 +0000 UTC m=+0.251933290 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, url=https://www.redhat.com, version=17.1.13, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, tcib_managed=true, 
distribution-scope=public, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:31:18 localhost podman[88753]: 2026-02-23 08:31:18.143386545 +0000 UTC m=+0.198185645 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.13, config_id=tripleo_step3, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com) Feb 23 03:31:18 localhost podman[88766]: 2026-02-23 08:31:18.240864618 +0000 UTC m=+0.290190582 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=logrotate_crond, io.buildah.version=1.41.5, architecture=x86_64) Feb 23 03:31:18 localhost podman[88740]: 2026-02-23 08:31:18.27100574 +0000 UTC m=+0.338841169 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 
6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 23 03:31:18 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:31:18 localhost podman[88742]: 2026-02-23 08:31:18.322503686 +0000 UTC m=+0.382035181 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., io.buildah.version=1.41.5, container_name=ovn_metadata_agent, architecture=x86_64, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:31:18 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:31:18 localhost podman[88777]: 2026-02-23 08:31:18.392867829 +0000 UTC m=+0.431556227 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi) Feb 23 03:31:18 localhost podman[88777]: 2026-02-23 08:31:18.418661249 +0000 UTC m=+0.457349647 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.41.5) Feb 23 03:31:18 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:31:18 localhost podman[88755]: 2026-02-23 08:31:18.458710444 +0000 UTC m=+0.511871875 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, tcib_managed=true, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Feb 23 03:31:18 localhost podman[88739]: 2026-02-23 08:31:18.500619467 +0000 UTC m=+0.570080477 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, version=17.1.13, vendor=Red Hat, Inc., architecture=x86_64, container_name=collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:31:18 localhost podman[88739]: 2026-02-23 08:31:18.508446087 +0000 UTC m=+0.577907127 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible) Feb 23 03:31:18 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:31:18 localhost podman[88766]: 2026-02-23 08:31:18.526842369 +0000 UTC m=+0.576168323 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, url=https://www.redhat.com, tcib_managed=true, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:31:18 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:31:18 localhost podman[88741]: 2026-02-23 08:31:18.552607357 +0000 UTC m=+0.614745932 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, container_name=nova_migration_target, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc.) 
Feb 23 03:31:18 localhost podman[88755]: 2026-02-23 08:31:18.562392547 +0000 UTC m=+0.615553898 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.13, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 23 03:31:18 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. 
Feb 23 03:31:18 localhost podman[88753]: 2026-02-23 08:31:18.578264213 +0000 UTC m=+0.633063373 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, architecture=x86_64, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, version=17.1.13, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, 
cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, io.buildah.version=1.41.5, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3) Feb 23 03:31:18 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:31:18 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:31:18 localhost sshd[88942]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:31:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:31:23 localhost systemd[1]: tmp-crun.IpZOVN.mount: Deactivated successfully. 
Feb 23 03:31:23 localhost podman[88944]: 2026-02-23 08:31:23.984129518 +0000 UTC m=+0.065980590 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, release=1766032510, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:31:24 localhost podman[88944]: 2026-02-23 08:31:24.201243022 +0000 UTC m=+0.283094034 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-type=git, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5) Feb 23 03:31:24 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:31:49 localhost systemd[1]: tmp-crun.4aBokH.mount: Deactivated successfully. Feb 23 03:31:49 localhost podman[88996]: 2026-02-23 08:31:49.031584276 +0000 UTC m=+0.100305521 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:31:49 localhost podman[89041]: 2026-02-23 08:31:49.057149408 +0000 UTC m=+0.090571933 container health_status 
ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, version=17.1.13, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, io.buildah.version=1.41.5, release=1766032510) Feb 23 03:31:49 localhost podman[89009]: 2026-02-23 08:31:49.073577331 +0000 UTC m=+0.133028072 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, tcib_managed=true, distribution-scope=public, build-date=2026-01-12T22:56:19Z, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:31:49 localhost podman[89041]: 2026-02-23 08:31:49.148656688 +0000 UTC m=+0.182079183 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, 
name=ceilometer_agent_ipmi, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 23 03:31:49 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:31:49 localhost podman[89037]: 2026-02-23 08:31:49.09872269 +0000 UTC m=+0.134828187 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.13, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510) Feb 23 03:31:49 localhost podman[88997]: 2026-02-23 08:31:49.205112436 +0000 UTC m=+0.270718625 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, version=17.1.13, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Feb 23 03:31:49 localhost podman[89037]: 2026-02-23 08:31:49.233718601 +0000 UTC m=+0.269824098 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, 
cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z) Feb 23 03:31:49 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:31:49 localhost podman[89025]: 2026-02-23 08:31:49.246184233 +0000 UTC m=+0.289864442 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
container_name=logrotate_crond, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, distribution-scope=public) Feb 23 03:31:49 localhost podman[89025]: 2026-02-23 08:31:49.25198689 +0000 UTC m=+0.295667159 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, 
tcib_managed=true, version=17.1.13, release=1766032510, com.redhat.component=openstack-cron-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, architecture=x86_64, container_name=logrotate_crond, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:31:49 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:31:49 localhost podman[89003]: 2026-02-23 08:31:49.295368537 +0000 UTC m=+0.353224559 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1) Feb 23 03:31:49 localhost podman[89009]: 2026-02-23 08:31:49.304044474 +0000 UTC m=+0.363495245 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:31:49 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:31:49 localhost podman[88997]: 2026-02-23 08:31:49.325862551 +0000 UTC m=+0.391468780 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, release=1766032510, version=17.1.13) Feb 23 03:31:49 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:31:49 localhost podman[88996]: 2026-02-23 08:31:49.366702071 +0000 UTC m=+0.435423386 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.5, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3) Feb 23 03:31:49 localhost podman[89024]: 2026-02-23 08:31:49.114508003 +0000 UTC m=+0.164348520 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, 
konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.openshift.expose-services=, managed_by=tripleo_ansible, 
com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:31:49 localhost podman[89024]: 2026-02-23 08:31:49.401012951 +0000 UTC m=+0.450853568 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, maintainer=OpenStack TripleO Team, version=17.1.13, release=1766032510, config_id=tripleo_step5, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 23 03:31:49 localhost systemd[1]: 
87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:31:49 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:31:49 localhost podman[89010]: 2026-02-23 08:31:49.402327731 +0000 UTC m=+0.448090753 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20260112.1, container_name=iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:31:49 localhost podman[89010]: 2026-02-23 08:31:49.487969432 +0000 UTC m=+0.533732484 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, url=https://www.redhat.com, batch=17.1_20260112.1, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64) Feb 23 03:31:49 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:31:49 localhost podman[89003]: 2026-02-23 08:31:49.6258359 +0000 UTC m=+0.683691972 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, config_id=tripleo_step4, release=1766032510, distribution-scope=public, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.13, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:31:49 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:31:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:31:54 localhost systemd[1]: tmp-crun.6gdgfe.mount: Deactivated successfully. 
Feb 23 03:31:55 localhost podman[89196]: 2026-02-23 08:31:55.003903415 +0000 UTC m=+0.082887527 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, container_name=metrics_qdr) Feb 23 03:31:55 localhost podman[89196]: 2026-02-23 08:31:55.199826201 +0000 UTC m=+0.278810323 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 23 03:31:55 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 03:32:03 localhost sshd[89225]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:32:04 localhost sshd[89227]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:32:13 localhost podman[89330]: 2026-02-23 08:32:13.100173754 +0000 UTC m=+0.083333582 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.buildah.version=1.42.2, ceph=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc.) 
Feb 23 03:32:13 localhost podman[89330]: 2026-02-23 08:32:13.191827889 +0000 UTC m=+0.174987747 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_BRANCH=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph) Feb 23 03:32:15 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:32:15 localhost recover_tripleo_nova_virtqemud[89474]: 62457 Feb 23 03:32:15 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:32:15 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:32:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. 
Feb 23 03:32:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:32:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:32:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:32:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:32:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:32:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:32:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:32:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:32:20 localhost podman[89495]: 2026-02-23 08:32:20.021839864 +0000 UTC m=+0.081671310 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, config_id=tripleo_step5, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc.) 
Feb 23 03:32:20 localhost podman[89478]: 2026-02-23 08:32:20.039683799 +0000 UTC m=+0.099835396 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, release=1766032510, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, distribution-scope=public, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 23 03:32:20 localhost podman[89479]: 2026-02-23 08:32:20.040088282 +0000 UTC m=+0.096012209 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T22:34:43Z, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, url=https://www.redhat.com, config_id=tripleo_step3, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, vcs-type=git) Feb 23 03:32:20 localhost podman[89475]: 2026-02-23 08:32:20.084832401 +0000 UTC m=+0.153311273 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, architecture=x86_64, config_id=tripleo_step3, url=https://www.redhat.com, version=17.1.13, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 23 03:32:20 localhost podman[89478]: 2026-02-23 08:32:20.10081839 +0000 UTC m=+0.160969987 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 23 03:32:20 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:32:20 localhost podman[89477]: 2026-02-23 08:32:20.128208819 +0000 UTC m=+0.196053141 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, tcib_managed=true, release=1766032510, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 23 03:32:20 localhost podman[89510]: 2026-02-23 08:32:20.148491859 +0000 UTC m=+0.196584966 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, container_name=ceilometer_agent_compute, vcs-type=git, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, io.buildah.version=1.41.5, architecture=x86_64, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4) Feb 23 03:32:20 localhost podman[89495]: 2026-02-23 08:32:20.151970595 +0000 UTC m=+0.211802091 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, container_name=nova_compute, distribution-scope=public, version=17.1.13, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:32:20 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:32:20 localhost podman[89510]: 2026-02-23 08:32:20.179733345 +0000 UTC m=+0.227826462 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 23 03:32:20 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:32:20 localhost podman[89476]: 2026-02-23 08:32:20.195335643 +0000 UTC m=+0.263884367 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=ovn_controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:32:20 localhost podman[89513]: 2026-02-23 08:32:20.237274326 +0000 UTC m=+0.281316830 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1) Feb 23 03:32:20 localhost podman[89476]: 2026-02-23 08:32:20.267102648 +0000 UTC m=+0.335651412 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, vendor=Red Hat, Inc., release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 23 03:32:20 localhost podman[89479]: 2026-02-23 08:32:20.269607955 +0000 UTC m=+0.325531862 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, container_name=iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, architecture=x86_64, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 23 03:32:20 localhost podman[89513]: 2026-02-23 08:32:20.259798455 +0000 UTC m=+0.303841009 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, version=17.1.13, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 23 03:32:20 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:32:20 localhost podman[89496]: 2026-02-23 08:32:20.349662355 +0000 UTC m=+0.399344322 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, release=1766032510, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:32:20 localhost podman[89496]: 2026-02-23 08:32:20.356215826 +0000 UTC m=+0.405897793 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, version=17.1.13, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, release=1766032510) Feb 23 03:32:20 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:32:20 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:32:20 localhost podman[89475]: 2026-02-23 08:32:20.421248165 +0000 UTC m=+0.489727097 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, tcib_managed=true, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, release=1766032510, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 23 03:32:20 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:32:20 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:32:20 localhost podman[89477]: 2026-02-23 08:32:20.492920629 +0000 UTC m=+0.560765001 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=nova_migration_target, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:32:20 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:32:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:32:25 localhost podman[89680]: 2026-02-23 08:32:25.996731662 +0000 UTC m=+0.071035145 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, container_name=metrics_qdr, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Feb 23 03:32:26 localhost podman[89680]: 2026-02-23 08:32:26.186279622 +0000 UTC m=+0.260583125 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, 
managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, release=1766032510, config_id=tripleo_step1, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, batch=17.1_20260112.1, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team) Feb 23 03:32:26 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:32:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:7f:2b:8f MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.103 DST=192.168.122.107 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=5668 DPT=50834 SEQ=0 ACK=888618984 WINDOW=0 RES=0x00 ACK RST URGP=0 Feb 23 03:32:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:32:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:32:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:32:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:32:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:32:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:32:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:32:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. 
Feb 23 03:32:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:32:51 localhost systemd[1]: tmp-crun.6KV7IR.mount: Deactivated successfully. Feb 23 03:32:51 localhost podman[89736]: 2026-02-23 08:32:51.229068299 +0000 UTC m=+0.089784738 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, release=1766032510, distribution-scope=public, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, 
cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.13, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid) Feb 23 03:32:51 localhost podman[89736]: 2026-02-23 08:32:51.237689563 +0000 UTC m=+0.098406032 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 23 03:32:51 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:32:51 localhost podman[89762]: 2026-02-23 08:32:51.272710334 +0000 UTC m=+0.124763578 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, version=17.1.13, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container) Feb 23 03:32:51 localhost podman[89766]: 2026-02-23 08:32:51.302531537 +0000 UTC m=+0.152448986 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20260112.1) Feb 23 03:32:51 localhost podman[89732]: 2026-02-23 08:32:51.328675087 +0000 UTC m=+0.195104651 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, container_name=collectd, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, tcib_managed=true, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.buildah.version=1.41.5, release=1766032510, url=https://www.redhat.com) Feb 23 03:32:51 localhost podman[89732]: 2026-02-23 08:32:51.335740714 +0000 UTC m=+0.202170298 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:32:51 localhost podman[89766]: 2026-02-23 08:32:51.342055277 +0000 UTC m=+0.191972736 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, architecture=x86_64, batch=17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, tcib_managed=true, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:32:51 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:32:51 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:32:51 localhost podman[89762]: 2026-02-23 08:32:51.388770646 +0000 UTC m=+0.240823960 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, release=1766032510, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, version=17.1.13, vcs-type=git, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:32:51 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:32:51 localhost podman[89741]: 2026-02-23 08:32:51.286800525 +0000 UTC m=+0.130210535 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, batch=17.1_20260112.1, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., container_name=nova_compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:32:51 localhost podman[89748]: 2026-02-23 08:32:51.390127857 +0000 UTC m=+0.238988805 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.13, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1766032510, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat 
OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron) Feb 23 03:32:51 localhost podman[89733]: 2026-02-23 08:32:51.444061128 +0000 UTC 
m=+0.298454364 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., container_name=ovn_controller, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5) Feb 23 03:32:51 localhost podman[89735]: 2026-02-23 08:32:51.501542157 +0000 UTC m=+0.356029466 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git) Feb 23 03:32:51 localhost podman[89741]: 2026-02-23 08:32:51.517486115 +0000 UTC m=+0.360896145 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, 
com.redhat.component=openstack-nova-compute-container, version=17.1.13, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2026-01-12T23:32:04Z, vcs-type=git, distribution-scope=public, config_id=tripleo_step5, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_compute) Feb 23 03:32:51 localhost podman[89748]: 2026-02-23 08:32:51.521033924 +0000 UTC m=+0.369894881 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp-rhel9/openstack-cron, release=1766032510, config_id=tripleo_step4, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=logrotate_crond, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, build-date=2026-01-12T22:10:15Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc.) 
Feb 23 03:32:51 localhost podman[89733]: 2026-02-23 08:32:51.531672979 +0000 UTC m=+0.386066265 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 23 03:32:51 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:32:51 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:32:51 localhost podman[89734]: 2026-02-23 08:32:51.212493092 +0000 UTC m=+0.079556456 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1766032510, container_name=nova_migration_target, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:32:51 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:32:51 localhost podman[89735]: 2026-02-23 08:32:51.58757196 +0000 UTC m=+0.442059229 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vcs-type=git, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true) Feb 23 03:32:51 localhost podman[89734]: 2026-02-23 08:32:51.599901387 +0000 UTC m=+0.466964761 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, container_name=nova_migration_target, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container) Feb 23 03:32:51 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:32:51 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:32:52 localhost sshd[89936]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:32:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:32:57 localhost podman[89938]: 2026-02-23 08:32:57.004974539 +0000 UTC m=+0.047629678 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, vcs-type=git, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:32:57 localhost podman[89938]: 2026-02-23 08:32:57.326926511 +0000 UTC m=+0.369581650 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13) Feb 23 03:32:57 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:33:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. 
Feb 23 03:33:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:33:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:33:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:33:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:33:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:33:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:33:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:33:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:33:22 localhost podman[90043]: 2026-02-23 08:33:22.035745189 +0000 UTC m=+0.095917627 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, 
container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, release=1766032510, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:33:22 localhost systemd[1]: tmp-crun.hCoZ1G.mount: Deactivated successfully. Feb 23 03:33:22 localhost podman[90044]: 2026-02-23 08:33:22.048469638 +0000 UTC m=+0.109398829 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.5) Feb 23 03:33:22 localhost podman[90044]: 2026-02-23 08:33:22.065676124 +0000 UTC m=+0.126605325 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, 
name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510) Feb 23 03:33:22 localhost podman[90043]: 2026-02-23 08:33:22.071697198 +0000 UTC m=+0.131869646 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, 
config_id=tripleo_step3, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, tcib_managed=true, batch=17.1_20260112.1) Feb 23 03:33:22 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:33:22 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:33:22 localhost podman[90048]: 2026-02-23 08:33:22.108544066 +0000 UTC m=+0.156196351 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.5, managed_by=tripleo_ansible, version=17.1.13, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, release=1766032510, tcib_managed=true) Feb 23 03:33:22 localhost 
podman[90048]: 2026-02-23 08:33:22.129675683 +0000 UTC m=+0.177327908 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:33:22 localhost podman[90045]: 2026-02-23 08:33:22.138580365 +0000 UTC m=+0.197863595 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, url=https://www.redhat.com, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, distribution-scope=public, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 23 03:33:22 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:33:22 localhost podman[90063]: 2026-02-23 08:33:22.222542864 +0000 UTC m=+0.264695610 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, batch=17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, tcib_managed=true, managed_by=tripleo_ansible, release=1766032510, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_id=tripleo_step4) Feb 23 03:33:22 localhost podman[90063]: 2026-02-23 08:33:22.232766137 +0000 UTC m=+0.274918933 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, version=17.1.13, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.buildah.version=1.41.5, managed_by=tripleo_ansible, 
maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, url=https://www.redhat.com) Feb 23 03:33:22 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:33:22 localhost podman[90070]: 2026-02-23 08:33:22.195481447 +0000 UTC m=+0.240407708 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 23 03:33:22 localhost podman[90070]: 2026-02-23 08:33:22.278786875 +0000 UTC m=+0.323713156 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 23 03:33:22 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:33:22 localhost podman[90046]: 2026-02-23 08:33:22.333712996 +0000 UTC m=+0.377348708 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible) Feb 23 03:33:22 localhost podman[90047]: 2026-02-23 08:33:22.376194376 +0000 UTC m=+0.428261886 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, batch=17.1_20260112.1, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.expose-services=, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, build-date=2026-01-12T22:34:43Z, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, tcib_managed=true) Feb 23 03:33:22 localhost podman[90046]: 2026-02-23 08:33:22.381718465 +0000 UTC m=+0.425354247 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, architecture=x86_64, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, release=1766032510, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:33:22 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:33:22 localhost podman[90066]: 2026-02-23 08:33:22.46553867 +0000 UTC m=+0.501543468 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:33:22 localhost podman[90047]: 2026-02-23 08:33:22.490981939 +0000 UTC m=+0.543049449 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, container_name=iscsid, batch=17.1_20260112.1, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git, version=17.1.13, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:33:22 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:33:22 localhost podman[90066]: 2026-02-23 08:33:22.51486347 +0000 UTC m=+0.550868278 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, release=1766032510, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:33:22 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:33:22 localhost podman[90045]: 2026-02-23 08:33:22.570515062 +0000 UTC m=+0.629798292 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, version=17.1.13, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true) Feb 23 03:33:22 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:33:23 localhost systemd[1]: tmp-crun.Is3wIK.mount: Deactivated successfully. Feb 23 03:33:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:33:27 localhost podman[90242]: 2026-02-23 08:33:27.99603605 +0000 UTC m=+0.072750497 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat 
OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git) Feb 23 03:33:28 localhost podman[90242]: 2026-02-23 08:33:28.202417636 +0000 UTC m=+0.279132033 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, batch=17.1_20260112.1, io.openshift.expose-services=, config_id=tripleo_step1, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container) Feb 23 03:33:28 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:33:40 localhost sshd[90294]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:33:45 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:33:45 localhost recover_tripleo_nova_virtqemud[90297]: 62457 Feb 23 03:33:45 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:33:45 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 23 03:33:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:33:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:33:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:33:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:33:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:33:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:33:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:33:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:33:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:33:53 localhost systemd[1]: tmp-crun.RkxdLR.mount: Deactivated successfully. 
Feb 23 03:33:53 localhost podman[90301]: 2026-02-23 08:33:53.519615478 +0000 UTC m=+0.078142163 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true) Feb 23 03:33:53 localhost podman[90330]: 2026-02-23 08:33:53.530827103 +0000 UTC m=+0.076900135 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, name=rhosp-rhel9/openstack-cron, tcib_managed=true, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, release=1766032510, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:33:53 localhost podman[90330]: 2026-02-23 08:33:53.535072354 +0000 UTC m=+0.081145416 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, 
name=logrotate_crond, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, distribution-scope=public, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, 
tcib_managed=true, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vcs-type=git) Feb 23 03:33:53 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:33:53 localhost podman[90299]: 2026-02-23 08:33:53.567568092 +0000 UTC m=+0.129110510 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller) Feb 23 03:33:53 localhost podman[90331]: 2026-02-23 08:33:53.606782468 +0000 UTC m=+0.155742919 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, 
com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.buildah.version=1.41.5) Feb 23 03:33:53 localhost podman[90299]: 2026-02-23 08:33:53.61072943 +0000 UTC m=+0.172271848 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1766032510, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, version=17.1.13, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:33:53 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:33:53 localhost podman[90331]: 2026-02-23 08:33:53.629693202 +0000 UTC m=+0.178653653 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.13, container_name=ceilometer_agent_compute, release=1766032510, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:33:53 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:33:53 localhost podman[90301]: 2026-02-23 08:33:53.651497163 +0000 UTC m=+0.210023858 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.5, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, 
Inc., build-date=2026-01-12T22:56:19Z) Feb 23 03:33:53 localhost podman[90300]: 2026-02-23 08:33:53.611805082 +0000 UTC m=+0.172687750 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, version=17.1.13, maintainer=OpenStack TripleO Team, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container) Feb 23 03:33:53 localhost podman[90298]: 2026-02-23 08:33:53.668827055 +0000 UTC m=+0.231802947 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, batch=17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:33:53 localhost podman[90298]: 2026-02-23 08:33:53.678643227 +0000 UTC m=+0.241619109 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, url=https://www.redhat.com, version=17.1.13, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:33:53 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:33:53 localhost podman[90336]: 2026-02-23 08:33:53.724754265 +0000 UTC m=+0.269682912 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible) Feb 23 03:33:53 localhost podman[90336]: 2026-02-23 08:33:53.74671733 +0000 UTC m=+0.291645957 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., io.buildah.version=1.41.5, version=17.1.13, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:33:53 localhost 
systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:33:53 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:33:53 localhost podman[90310]: 2026-02-23 08:33:53.820914092 +0000 UTC m=+0.378013953 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, version=17.1.13, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Feb 23 03:33:53 localhost podman[90318]: 2026-02-23 08:33:53.831156336 +0000 UTC m=+0.382546352 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13) Feb 23 03:33:53 localhost podman[90318]: 2026-02-23 08:33:53.849577043 +0000 UTC m=+0.400967059 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-type=git, url=https://www.redhat.com, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, version=17.1.13) Feb 23 03:33:53 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:33:53 localhost podman[90310]: 2026-02-23 08:33:53.900268371 +0000 UTC m=+0.457368262 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step3, url=https://www.redhat.com, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, distribution-scope=public, container_name=iscsid, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, release=1766032510, io.openshift.expose-services=) Feb 23 03:33:53 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:33:53 localhost podman[90300]: 2026-02-23 08:33:53.94380926 +0000 UTC m=+0.504691918 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=nova_migration_target) Feb 23 03:33:53 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:33:54 localhost systemd[1]: tmp-crun.mUtH7T.mount: Deactivated successfully. Feb 23 03:34:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:34:01 localhost podman[90506]: 2026-02-23 08:34:01.076354861 +0000 UTC m=+0.052786984 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, 
config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:34:01 localhost podman[90506]: 2026-02-23 08:34:01.27668492 +0000 UTC m=+0.253117033 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, architecture=x86_64, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, version=17.1.13, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:34:01 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:34:11 localhost ovsdb-server[22468]: ovs|00017|reconnect|ERR|tcp:127.0.0.1:45220: no response to inactivity probe after 5.04 seconds, disconnecting Feb 23 03:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.
Feb 23 03:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.
Feb 23 03:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.
Feb 23 03:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.
Feb 23 03:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.
Feb 23 03:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.
Feb 23 03:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.
Feb 23 03:34:24 localhost systemd[1]: tmp-crun.AE78Xw.mount: Deactivated successfully. 
Feb 23 03:34:24 localhost podman[90627]: 2026-02-23 08:34:24.085996275 +0000 UTC m=+0.137666944 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, release=1766032510, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:34:24 localhost podman[90651]: 2026-02-23 08:34:24.110507468 +0000 UTC m=+0.128961465 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, release=1766032510, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1) Feb 23 03:34:24 localhost podman[90627]: 2026-02-23 08:34:24.116638887 +0000 UTC m=+0.168309546 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.buildah.version=1.41.5, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64) Feb 23 03:34:24 localhost systemd[1]: 
8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:34:24 localhost podman[90616]: 2026-02-23 08:34:24.160526386 +0000 UTC m=+0.206070956 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, container_name=nova_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z) Feb 23 03:34:24 localhost podman[90616]: 2026-02-23 08:34:24.179795799 +0000 UTC m=+0.225340399 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, distribution-scope=public, batch=17.1_20260112.1, architecture=x86_64, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, 
konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_id=tripleo_step5) Feb 23 03:34:24 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:34:24 localhost podman[90614]: 2026-02-23 08:34:24.039996021 +0000 UTC m=+0.101908605 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, release=1766032510, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, distribution-scope=public, container_name=ovn_controller, tcib_managed=true, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 23 03:34:24 localhost podman[90640]: 2026-02-23 08:34:24.147805805 +0000 UTC m=+0.185290427 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, version=17.1.13, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:34:24 localhost podman[90615]: 2026-02-23 08:34:24.091884686 +0000 UTC m=+0.150035554 container health_status 
71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, release=1766032510) Feb 23 03:34:24 localhost podman[90613]: 2026-02-23 08:34:24.249513002 +0000 UTC m=+0.313065736 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, version=17.1.13, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step3, 
vcs-type=git, release=1766032510) Feb 23 03:34:24 localhost podman[90613]: 2026-02-23 08:34:24.255880608 +0000 UTC m=+0.319433332 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, config_id=tripleo_step3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, tcib_managed=true, build-date=2026-01-12T22:10:15Z, version=17.1.13, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:34:24 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:34:24 localhost podman[90614]: 2026-02-23 08:34:24.270156417 +0000 UTC m=+0.332069021 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_controller, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat 
OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.buildah.version=1.41.5, version=17.1.13, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller) Feb 23 03:34:24 localhost podman[90640]: 2026-02-23 08:34:24.277180073 +0000 UTC m=+0.314664745 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, version=17.1.13, vcs-type=git, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 23 03:34:24 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:34:24 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:34:24 localhost podman[90655]: 2026-02-23 08:34:24.245379195 +0000 UTC m=+0.272434997 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, vcs-type=git, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, release=1766032510, tcib_managed=true, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Feb 23 03:34:24 localhost podman[90634]: 2026-02-23 08:34:24.06959403 +0000 UTC m=+0.103588885 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container) Feb 23 03:34:24 localhost podman[90634]: 2026-02-23 08:34:24.35578442 +0000 UTC m=+0.389779245 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack 
TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, managed_by=tripleo_ansible, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:34:24 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:34:24 localhost podman[90615]: 2026-02-23 08:34:24.374824474 +0000 UTC m=+0.432975312 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 23 03:34:24 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:34:24 localhost podman[90655]: 2026-02-23 08:34:24.425480301 +0000 UTC m=+0.452536123 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, io.buildah.version=1.41.5, vcs-type=git, batch=17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, container_name=iscsid, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, distribution-scope=public, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z) Feb 23 03:34:24 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:34:24 localhost podman[90651]: 2026-02-23 08:34:24.453913087 +0000 UTC m=+0.472367104 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, vcs-type=git, url=https://www.redhat.com, version=17.1.13, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) Feb 23 03:34:24 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:34:30 localhost sshd[90818]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:34:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:34:31 localhost systemd[1]: tmp-crun.APWQcC.mount: Deactivated successfully. 
Feb 23 03:34:31 localhost podman[90820]: 2026-02-23 08:34:31.983476853 +0000 UTC m=+0.065904026 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, build-date=2026-01-12T22:10:14Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:34:32 localhost podman[90820]: 2026-02-23 08:34:32.185730752 +0000 UTC m=+0.268157905 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat 
OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd) Feb 23 03:34:32 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:34:55 localhost podman[90891]: 2026-02-23 08:34:55.014843547 +0000 UTC m=+0.080683092 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:34:55 localhost podman[90875]: 2026-02-23 08:34:55.06960717 +0000 UTC m=+0.142404018 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, 
release=1766032510, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 23 03:34:55 localhost podman[90879]: 2026-02-23 08:34:55.078273527 +0000 UTC m=+0.138800518 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.buildah.version=1.41.5, distribution-scope=public, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=) Feb 23 03:34:55 localhost podman[90879]: 2026-02-23 08:34:55.113720527 +0000 UTC m=+0.174247528 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-cron-container, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:34:55 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:34:55 localhost podman[90876]: 2026-02-23 08:34:55.125187729 +0000 UTC m=+0.196935215 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.13, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, release=1766032510, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com) Feb 23 03:34:55 localhost podman[90877]: 2026-02-23 08:34:55.107101254 +0000 UTC m=+0.178703356 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, version=17.1.13, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, release=1766032510, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:34:55 localhost podman[90874]: 2026-02-23 08:34:55.169237104 +0000 UTC m=+0.245480679 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 
(image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2026-01-12T22:36:40Z, version=17.1.13, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, vcs-type=git, release=1766032510, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:34:55 localhost podman[90876]: 2026-02-23 08:34:55.185564546 +0000 UTC m=+0.257312002 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, architecture=x86_64, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, io.buildah.version=1.41.5) Feb 23 03:34:55 localhost podman[90888]: 2026-02-23 08:34:55.03902877 +0000 UTC m=+0.103441780 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:34:55 localhost systemd[1]: 
71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:34:55 localhost podman[90891]: 2026-02-23 08:34:55.199375731 +0000 UTC m=+0.265215236 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, 
io.openshift.expose-services=, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z) Feb 23 03:34:55 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:34:55 localhost podman[90888]: 2026-02-23 08:34:55.220949673 +0000 UTC m=+0.285362713 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64) Feb 23 03:34:55 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:34:55 localhost podman[90873]: 2026-02-23 08:34:55.279660488 +0000 UTC m=+0.345812312 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, release=1766032510, config_id=tripleo_step3, container_name=collectd, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., distribution-scope=public) Feb 23 03:34:55 localhost podman[90877]: 2026-02-23 08:34:55.291321127 +0000 UTC m=+0.362923249 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, url=https://www.redhat.com, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, maintainer=OpenStack TripleO Team, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid) Feb 23 03:34:55 localhost podman[90874]: 2026-02-23 08:34:55.291682339 +0000 UTC m=+0.367925914 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 
17.1_20260112.1, io.openshift.expose-services=, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, batch=17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:34:55 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:34:55 localhost podman[90873]: 2026-02-23 08:34:55.312617422 +0000 UTC m=+0.378769246 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vcs-type=git, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, version=17.1.13, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
distribution-scope=public, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team) Feb 23 03:34:55 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:34:55 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:34:55 localhost podman[90875]: 2026-02-23 08:34:55.408350965 +0000 UTC m=+0.481147893 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Feb 23 03:34:55 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:34:55 localhost podman[90878]: 2026-02-23 08:34:55.423657526 +0000 UTC m=+0.491986877 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, config_id=tripleo_step5, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:34:55 localhost podman[90878]: 2026-02-23 08:34:55.455741032 +0000 UTC m=+0.524070433 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, version=17.1.13, vendor=Red Hat, Inc., release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64) Feb 23 03:34:55 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:34:56 localhost systemd[1]: tmp-crun.Re0NNK.mount: Deactivated successfully. Feb 23 03:35:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:35:02 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Feb 23 03:35:02 localhost recover_tripleo_nova_virtqemud[91082]: 62457 Feb 23 03:35:02 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:35:02 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:35:03 localhost podman[91076]: 2026-02-23 08:35:03.015127176 +0000 UTC m=+0.088512922 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, release=1766032510, url=https://www.redhat.com, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr) Feb 23 03:35:03 localhost podman[91076]: 2026-02-23 08:35:03.224888084 +0000 UTC m=+0.298273820 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1766032510, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13) Feb 23 03:35:03 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:35:18 localhost sshd[91148]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. 
Feb 23 03:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:35:26 localhost podman[91207]: 2026-02-23 08:35:26.03161217 +0000 UTC m=+0.081312581 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1766032510, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 23 03:35:26 localhost systemd[1]: tmp-crun.dFQylX.mount: Deactivated successfully. Feb 23 03:35:26 localhost podman[91201]: 2026-02-23 08:35:26.047478547 +0000 UTC m=+0.092616908 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1766032510, version=17.1.13, url=https://www.redhat.com, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
cron, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Feb 23 03:35:26 localhost podman[91196]: 2026-02-23 08:35:26.085587239 +0000 UTC m=+0.144207465 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, release=1766032510, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, url=https://www.redhat.com, version=17.1.13, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vcs-type=git, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:35:26 localhost podman[91190]: 2026-02-23 08:35:26.092809342 +0000 UTC m=+0.141419920 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2026-01-12T22:34:43Z, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, container_name=iscsid, version=17.1.13, 
batch=17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Feb 23 03:35:26 localhost podman[91190]: 2026-02-23 08:35:26.125282359 +0000 UTC m=+0.173892917 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, io.openshift.expose-services=, 
io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13) Feb 23 03:35:26 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:35:26 localhost podman[91202]: 2026-02-23 08:35:26.133863873 +0000 UTC m=+0.186766472 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, io.openshift.expose-services=, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z) Feb 23 03:35:26 localhost podman[91186]: 2026-02-23 08:35:26.126023652 +0000 UTC m=+0.195312275 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64) Feb 23 03:35:26 localhost podman[91188]: 2026-02-23 08:35:26.179044072 +0000 UTC 
m=+0.241550247 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, config_id=tripleo_step4, release=1766032510, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, 
version=17.1.13, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:35:26 localhost podman[91186]: 2026-02-23 08:35:26.204876687 +0000 UTC m=+0.274165310 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
release=1766032510, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Feb 23 03:35:26 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:35:26 localhost podman[91201]: 2026-02-23 08:35:26.231128404 +0000 UTC m=+0.276266785 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, architecture=x86_64, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.buildah.version=1.41.5, container_name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, batch=17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:35:26 localhost podman[91207]: 2026-02-23 08:35:26.231513565 +0000 UTC m=+0.281214026 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 23 03:35:26 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:35:26 localhost podman[91202]: 2026-02-23 08:35:26.255880395 +0000 UTC m=+0.308783064 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.13, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, tcib_managed=true) Feb 23 03:35:26 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:35:26 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:35:26 localhost podman[91187]: 2026-02-23 08:35:26.346023016 +0000 UTC m=+0.413349929 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.13, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, 
com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:35:26 localhost podman[91189]: 2026-02-23 08:35:26.382762075 +0000 UTC m=+0.443462844 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, managed_by=tripleo_ansible) Feb 23 03:35:26 localhost podman[91196]: 2026-02-23 08:35:26.412461079 +0000 UTC m=+0.471081315 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2026-01-12T23:32:04Z, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, url=https://www.redhat.com, release=1766032510, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 23 03:35:26 localhost podman[91189]: 2026-02-23 08:35:26.411772867 +0000 UTC m=+0.472473676 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1766032510, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true) Feb 23 03:35:26 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:35:26 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:35:26 localhost podman[91187]: 2026-02-23 08:35:26.517321882 +0000 UTC m=+0.584648785 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
version=17.1.13, release=1766032510, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 23 03:35:26 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:35:26 localhost podman[91188]: 2026-02-23 08:35:26.538806143 +0000 UTC m=+0.601312288 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, version=17.1.13, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Feb 23 03:35:26 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:35:27 localhost systemd[1]: tmp-crun.bvXQEz.mount: Deactivated successfully. Feb 23 03:35:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:35:34 localhost podman[91396]: 2026-02-23 08:35:34.011054329 +0000 UTC m=+0.080479116 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, architecture=x86_64, config_id=tripleo_step1, io.openshift.expose-services=) Feb 23 03:35:34 localhost podman[91396]: 2026-02-23 08:35:34.228829814 +0000 UTC m=+0.298254521 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 23 03:35:34 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:35:57 localhost systemd[1]: tmp-crun.CfbNqD.mount: Deactivated successfully. Feb 23 03:35:57 localhost podman[91448]: 2026-02-23 08:35:57.075879158 +0000 UTC m=+0.149781635 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, release=1766032510, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, version=17.1.13, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Feb 23 03:35:57 localhost podman[91449]: 2026-02-23 08:35:57.032708711 +0000 UTC m=+0.102702788 container health_status 
393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.13, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, release=1766032510, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, 
com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:35:57 localhost podman[91478]: 2026-02-23 08:35:57.08632444 +0000 UTC m=+0.128721119 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, batch=17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4) Feb 23 03:35:57 localhost podman[91478]: 2026-02-23 08:35:57.094882323 +0000 UTC m=+0.137279012 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:35:57 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:35:57 localhost podman[91486]: 2026-02-23 08:35:57.050627202 +0000 UTC m=+0.086885742 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.openshift.expose-services=, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z) Feb 23 03:35:57 localhost podman[91449]: 2026-02-23 08:35:57.116618511 +0000 UTC m=+0.186612598 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4) Feb 23 03:35:57 localhost podman[91486]: 2026-02-23 08:35:57.137688078 +0000 UTC m=+0.173946598 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, 
distribution-scope=public, release=1766032510, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true) Feb 23 03:35:57 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:35:57 localhost podman[91448]: 2026-02-23 08:35:57.162097339 +0000 UTC m=+0.235999806 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, version=17.1.13, io.openshift.expose-services=, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, batch=17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, build-date=2026-01-12T22:10:15Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510) Feb 23 03:35:57 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:35:57 localhost podman[91467]: 2026-02-23 08:35:57.180116754 +0000 UTC m=+0.232587402 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, release=1766032510, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 23 03:35:57 localhost podman[91489]: 2026-02-23 08:35:57.142448465 +0000 UTC m=+0.178464168 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, 
name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, io.buildah.version=1.41.5, release=1766032510, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
tcib_managed=true) Feb 23 03:35:57 localhost podman[91467]: 2026-02-23 08:35:57.199716146 +0000 UTC m=+0.252186794 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., version=17.1.13, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z) Feb 23 03:35:57 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:35:57 localhost podman[91489]: 2026-02-23 08:35:57.224761446 +0000 UTC m=+0.260777129 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, release=1766032510, container_name=ceilometer_agent_ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git) Feb 23 03:35:57 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:35:57 localhost podman[91450]: 2026-02-23 08:35:57.23726847 +0000 UTC m=+0.303582133 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, version=17.1.13, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:35:57 localhost podman[91461]: 2026-02-23 08:35:57.287407082 +0000 UTC m=+0.344679398 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 iscsid, release=1766032510, batch=17.1_20260112.1, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, 
com.redhat.component=openstack-iscsid-container, version=17.1.13, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 23 03:35:57 localhost podman[91456]: 2026-02-23 08:35:57.338496512 +0000 UTC m=+0.399565254 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, container_name=ovn_metadata_agent, batch=17.1_20260112.1) Feb 23 03:35:57 localhost podman[91461]: 2026-02-23 08:35:57.375688566 +0000 UTC m=+0.432960902 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, container_name=iscsid, 
config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, release=1766032510, vcs-type=git, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.5) Feb 23 03:35:57 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: 
Deactivated successfully. Feb 23 03:35:57 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:35:57 localhost podman[91456]: 2026-02-23 08:35:57.39470295 +0000 UTC m=+0.455771702 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, architecture=x86_64, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:35:57 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:35:57 localhost podman[91450]: 2026-02-23 08:35:57.594647928 +0000 UTC m=+0.660961671 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, version=17.1.13, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, container_name=nova_migration_target, io.openshift.expose-services=, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com) Feb 23 03:35:57 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:35:58 localhost systemd[1]: tmp-crun.GIZGeo.mount: Deactivated successfully. Feb 23 03:36:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:36:05 localhost podman[91655]: 2026-02-23 08:36:05.013750599 +0000 UTC m=+0.080904578 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., version=17.1.13, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, 
url=https://www.redhat.com, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git) Feb 23 03:36:05 localhost podman[91655]: 2026-02-23 08:36:05.203779432 +0000 UTC m=+0.270933401 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z) Feb 23 03:36:05 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:36:06 localhost sshd[91684]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:36:28 localhost systemd[1]: tmp-crun.4iUPAH.mount: Deactivated successfully. Feb 23 03:36:28 localhost podman[91794]: 2026-02-23 08:36:28.022371221 +0000 UTC m=+0.079304629 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, build-date=2026-01-12T23:07:30Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:36:28 localhost systemd[1]: tmp-crun.KSEcTG.mount: Deactivated successfully. 
Feb 23 03:36:28 localhost podman[91763]: 2026-02-23 08:36:28.060139623 +0000 UTC m=+0.131896116 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, maintainer=OpenStack 
TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller) Feb 23 03:36:28 localhost podman[91766]: 2026-02-23 08:36:28.083352576 +0000 UTC m=+0.143506092 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-type=git, vendor=Red Hat, Inc., container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3) Feb 23 03:36:28 localhost podman[91765]: 2026-02-23 08:36:28.036718722 +0000 UTC m=+0.099128748 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 23 
03:36:28 localhost podman[91794]: 2026-02-23 08:36:28.12218115 +0000 UTC m=+0.179114578 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, tcib_managed=true, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4) Feb 23 03:36:28 localhost podman[91763]: 2026-02-23 08:36:28.12411231 +0000 UTC m=+0.195868793 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13) Feb 23 03:36:28 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:36:28 localhost podman[91766]: 2026-02-23 08:36:28.139605096 +0000 UTC m=+0.199758602 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, build-date=2026-01-12T22:34:43Z, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:36:28 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:36:28 localhost podman[91762]: 2026-02-23 08:36:28.071589695 +0000 UTC m=+0.142570055 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, distribution-scope=public, container_name=collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, vendor=Red Hat, Inc.) 
Feb 23 03:36:28 localhost podman[91775]: 2026-02-23 08:36:28.12738994 +0000 UTC m=+0.182471450 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, release=1766032510) Feb 23 03:36:28 localhost podman[91767]: 2026-02-23 08:36:28.185254989 +0000 UTC m=+0.246245922 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, version=17.1.13, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, release=1766032510, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public) Feb 23 03:36:28 localhost podman[91762]: 
2026-02-23 08:36:28.202267912 +0000 UTC m=+0.273248282 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, version=17.1.13, architecture=x86_64, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, release=1766032510, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:36:28 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:36:28 localhost podman[91767]: 2026-02-23 08:36:28.226751365 +0000 UTC m=+0.287742278 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510, architecture=x86_64, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, 
name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 23 03:36:28 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:36:28 localhost podman[91764]: 2026-02-23 08:36:28.240181128 +0000 UTC m=+0.311082765 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, release=1766032510, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 23 03:36:28 localhost podman[91775]: 2026-02-23 08:36:28.255375654 +0000 UTC m=+0.310457254 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, batch=17.1_20260112.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:36:28 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:36:28 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:36:28 localhost podman[91785]: 2026-02-23 08:36:28.299734598 +0000 UTC m=+0.356222792 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, architecture=x86_64, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510) Feb 23 03:36:28 localhost podman[91765]: 2026-02-23 08:36:28.316942747 +0000 UTC m=+0.379352793 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.13, architecture=x86_64, managed_by=tripleo_ansible) Feb 23 03:36:28 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: 
Deactivated successfully. Feb 23 03:36:28 localhost podman[91785]: 2026-02-23 08:36:28.345826156 +0000 UTC m=+0.402314330 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13) Feb 23 03:36:28 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:36:28 localhost podman[91764]: 2026-02-23 08:36:28.587058622 +0000 UTC m=+0.657960259 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, tcib_managed=true) Feb 23 03:36:28 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:36:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:36:35 localhost podman[91968]: 2026-02-23 08:36:35.978768611 +0000 UTC m=+0.055111935 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., release=1766032510, url=https://www.redhat.com, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, config_id=tripleo_step1, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git) Feb 23 03:36:36 localhost podman[91968]: 2026-02-23 08:36:36.162598333 +0000 UTC m=+0.238941697 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, batch=17.1_20260112.1, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=) Feb 23 03:36:36 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:36:54 localhost sshd[91997]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:36:54 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:36:54 localhost recover_tripleo_nova_virtqemud[91999]: 62457 Feb 23 03:36:54 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Feb 23 03:36:54 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 23 03:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.
Feb 23 03:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.
Feb 23 03:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.
Feb 23 03:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.
Feb 23 03:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.
Feb 23 03:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.
Feb 23 03:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.
Feb 23 03:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.
Feb 23 03:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.
Feb 23 03:36:59 localhost systemd[1]: tmp-crun.L2bJ1I.mount: Deactivated successfully.
Feb 23 03:36:59 localhost podman[92033]: 2026-02-23 08:36:59.038872146 +0000 UTC m=+0.077254017 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.buildah.version=1.41.5, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 23 03:36:59 localhost podman[92001]: 2026-02-23 08:36:59.082205749 +0000 UTC m=+0.147631120 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, maintainer=OpenStack TripleO Team) Feb 23 03:36:59 localhost podman[92001]: 2026-02-23 08:36:59.088997817 +0000 UTC m=+0.154423158 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20260112.1, 
architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, 
version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd) Feb 23 03:36:59 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:36:59 localhost podman[92008]: 2026-02-23 08:36:59.132560126 +0000 UTC m=+0.190775046 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1766032510, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, tcib_managed=true, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:36:59 localhost podman[92008]: 2026-02-23 08:36:59.141653516 +0000 UTC m=+0.199868466 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, 
batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 23 03:36:59 
localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:36:59 localhost podman[92033]: 2026-02-23 08:36:59.163685103 +0000 UTC m=+0.202066944 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, version=17.1.13, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:36:59 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:36:59 localhost podman[92002]: 2026-02-23 08:36:59.249273574 +0000 UTC m=+0.311792026 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, release=1766032510, build-date=2026-01-12T22:36:40Z) Feb 23 03:36:59 localhost podman[92003]: 2026-02-23 08:36:59.29921916 +0000 UTC m=+0.361996250 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, distribution-scope=public, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z) Feb 23 03:36:59 localhost podman[92002]: 2026-02-23 08:36:59.350325441 +0000 UTC m=+0.412843903 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, release=1766032510, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, tcib_managed=true) Feb 23 03:36:59 localhost podman[92012]: 2026-02-23 08:36:59.360809313 +0000 UTC m=+0.406608471 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, version=17.1.13, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, vendor=Red Hat, Inc., vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5) Feb 23 03:36:59 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:36:59 localhost podman[92034]: 2026-02-23 08:36:59.41470077 +0000 UTC m=+0.455437102 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:36:59 localhost podman[92026]: 2026-02-23 08:36:59.468683999 +0000 UTC m=+0.505585454 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=logrotate_crond, 
description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, managed_by=tripleo_ansible, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-type=git) Feb 23 03:36:59 localhost podman[92026]: 2026-02-23 08:36:59.476719967 +0000 UTC m=+0.513621452 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4) Feb 23 03:36:59 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:36:59 localhost podman[92004]: 2026-02-23 08:36:59.515943312 +0000 UTC m=+0.573465311 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.5, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, version=17.1.13, architecture=x86_64, io.openshift.expose-services=) Feb 23 03:36:59 localhost podman[92034]: 2026-02-23 08:36:59.548366539 +0000 UTC m=+0.589102841 container exec_died 
ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:36:59 localhost podman[92012]: 2026-02-23 08:36:59.559199073 +0000 UTC m=+0.604998211 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_id=tripleo_step5, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510) Feb 23 03:36:59 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:36:59 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:36:59 localhost podman[92004]: 2026-02-23 08:36:59.573667768 +0000 UTC m=+0.631189726 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 23 03:36:59 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:36:59 localhost podman[92003]: 2026-02-23 08:36:59.677477419 +0000 UTC m=+0.740254469 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, release=1766032510, tcib_managed=true, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:36:59 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:37:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:37:06 localhost podman[92212]: 2026-02-23 08:37:06.983393951 +0000 UTC m=+0.061152382 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, vcs-type=git, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc.) Feb 23 03:37:07 localhost podman[92212]: 2026-02-23 08:37:07.15186209 +0000 UTC m=+0.229620431 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, architecture=x86_64, managed_by=tripleo_ansible, container_name=metrics_qdr, config_id=tripleo_step1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1766032510, tcib_managed=true) Feb 23 03:37:07 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:37:30 localhost systemd[1]: tmp-crun.1oC9YB.mount: Deactivated successfully. Feb 23 03:37:30 localhost podman[92329]: 2026-02-23 08:37:30.062327616 +0000 UTC m=+0.104305158 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, batch=17.1_20260112.1, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, vendor=Red 
Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, version=17.1.13, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:37:30 localhost podman[92321]: 2026-02-23 08:37:30.040800035 +0000 UTC m=+0.092679061 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, release=1766032510, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, 
description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.41.5, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=) Feb 23 03:37:30 localhost podman[92328]: 2026-02-23 08:37:30.102762129 +0000 UTC m=+0.142785040 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.13, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com) Feb 23 03:37:30 localhost podman[92329]: 2026-02-23 08:37:30.147897907 +0000 UTC m=+0.189875509 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
io.buildah.version=1.41.5, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, url=https://www.redhat.com, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
com.redhat.component=openstack-cron-container, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc.) Feb 23 03:37:30 localhost podman[92328]: 2026-02-23 08:37:30.1587323 +0000 UTC m=+0.198755211 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, tcib_managed=true, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:37:30 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:37:30 localhost podman[92317]: 2026-02-23 08:37:30.161288819 +0000 UTC m=+0.213514756 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, container_name=collectd, batch=17.1_20260112.1, version=17.1.13, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:37:30 localhost podman[92317]: 2026-02-23 08:37:30.174779943 +0000 UTC m=+0.227005890 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, batch=17.1_20260112.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com) Feb 23 03:37:30 localhost podman[92318]: 2026-02-23 08:37:30.204486256 +0000 UTC m=+0.261881961 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, 
health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, release=1766032510, vcs-type=git, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5) Feb 23 03:37:30 localhost 
systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:37:30 localhost podman[92321]: 2026-02-23 08:37:30.2215119 +0000 UTC m=+0.273390956 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, release=1766032510) Feb 23 03:37:30 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:37:30 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:37:30 localhost podman[92318]: 2026-02-23 08:37:30.255900447 +0000 UTC m=+0.313296192 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.13, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64) Feb 23 03:37:30 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:37:30 localhost podman[92319]: 2026-02-23 08:37:30.309156354 +0000 UTC m=+0.351869538 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:37:30 localhost podman[92320]: 2026-02-23 08:37:30.361305458 +0000 UTC m=+0.412051889 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20260112.1, io.openshift.expose-services=, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, 
maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, architecture=x86_64) Feb 23 03:37:30 localhost podman[92342]: 2026-02-23 08:37:30.413496183 +0000 UTC m=+0.454335150 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-type=git, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 23 03:37:30 localhost podman[92349]: 2026-02-23 08:37:30.451692076 +0000 UTC m=+0.491743878 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 23 03:37:30 localhost podman[92320]: 2026-02-23 08:37:30.464972975 +0000 UTC m=+0.515719446 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, 
name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, release=1766032510, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:37:30 localhost podman[92342]: 2026-02-23 08:37:30.476558991 +0000 UTC m=+0.517398018 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.buildah.version=1.41.5, distribution-scope=public, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ceilometer_agent_compute) Feb 23 03:37:30 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:37:30 localhost podman[92349]: 2026-02-23 08:37:30.485793735 +0000 UTC m=+0.525845537 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:37:30 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:37:30 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:37:30 localhost podman[92319]: 2026-02-23 08:37:30.670752252 +0000 UTC m=+0.713465426 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:37:30 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:37:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:37:38 localhost systemd[1]: tmp-crun.VJfd8a.mount: Deactivated successfully. Feb 23 03:37:38 localhost podman[92526]: 2026-02-23 08:37:38.021701118 +0000 UTC m=+0.091486224 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, version=17.1.13) Feb 23 03:37:38 localhost podman[92526]: 2026-02-23 08:37:38.235649505 +0000 UTC m=+0.305434551 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.5, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:37:38 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:37:43 localhost sshd[92555]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:37:58 localhost sshd[92557]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:38:01 localhost systemd[1]: tmp-crun.PL9Tp2.mount: Deactivated successfully. 
Feb 23 03:38:01 localhost podman[92561]: 2026-02-23 08:38:01.029119912 +0000 UTC m=+0.090180663 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, vcs-type=git, batch=17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, container_name=ovn_metadata_agent) Feb 23 03:38:01 localhost systemd[1]: tmp-crun.I3Oiza.mount: Deactivated successfully. 
Feb 23 03:38:01 localhost podman[92593]: 2026-02-23 08:38:01.085870217 +0000 UTC m=+0.128061808 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, batch=17.1_20260112.1) Feb 23 03:38:01 localhost podman[92561]: 2026-02-23 08:38:01.086365343 +0000 UTC m=+0.147426114 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, architecture=x86_64, io.buildah.version=1.41.5, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team) Feb 23 03:38:01 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:38:01 localhost podman[92559]: 2026-02-23 08:38:01.069268897 +0000 UTC m=+0.137859249 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, vcs-type=git, version=17.1.13, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 23 03:38:01 localhost podman[92581]: 2026-02-23 08:38:01.126344051 +0000 UTC m=+0.177308981 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, managed_by=tripleo_ansible, distribution-scope=public) Feb 23 03:38:01 localhost podman[92581]: 2026-02-23 08:38:01.135686978 +0000 UTC m=+0.186651908 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, name=rhosp-rhel9/openstack-cron, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:38:01 localhost podman[92559]: 2026-02-23 08:38:01.152720433 +0000 UTC m=+0.221310795 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:38:01 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:38:01 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:38:01 localhost podman[92591]: 2026-02-23 08:38:01.199362256 +0000 UTC m=+0.241551067 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, version=17.1.13, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 23 03:38:01 localhost podman[92558]: 2026-02-23 08:38:01.178856216 +0000 UTC m=+0.250388709 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, config_id=tripleo_step3, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, version=17.1.13, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:38:01 localhost podman[92575]: 2026-02-23 08:38:01.232173375 +0000 UTC m=+0.285942551 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, vcs-type=git, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) 
Feb 23 03:38:01 localhost podman[92575]: 2026-02-23 08:38:01.250811938 +0000 UTC m=+0.304581194 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.5, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, architecture=x86_64, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13) Feb 23 03:38:01 localhost podman[92558]: 2026-02-23 08:38:01.257295297 +0000 UTC m=+0.328827800 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_id=tripleo_step3, release=1766032510, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, 
konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:38:01 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:38:01 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:38:01 localhost podman[92591]: 2026-02-23 08:38:01.266795349 +0000 UTC m=+0.308984260 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, version=17.1.13, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 23 03:38:01 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:38:01 localhost podman[92593]: 2026-02-23 08:38:01.319768807 +0000 UTC m=+0.361960408 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Feb 23 03:38:01 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:38:01 localhost podman[92568]: 2026-02-23 08:38:01.390877484 +0000 UTC m=+0.448246651 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2026-01-12T22:34:43Z, container_name=iscsid, release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, architecture=x86_64, config_id=tripleo_step3, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com) Feb 23 03:38:01 localhost podman[92560]: 2026-02-23 08:38:01.440418446 +0000 UTC m=+0.504062236 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target) Feb 23 03:38:01 localhost podman[92568]: 2026-02-23 08:38:01.477061453 +0000 UTC m=+0.534430540 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, managed_by=tripleo_ansible, release=1766032510, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, 
version=17.1.13, batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public) Feb 23 03:38:01 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:38:01 localhost podman[92560]: 2026-02-23 08:38:01.812739882 +0000 UTC m=+0.876383642 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20260112.1, config_id=tripleo_step4, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, release=1766032510) Feb 23 03:38:01 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:38:05 localhost sshd[92763]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:38:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:38:09 localhost systemd[1]: tmp-crun.SHHEP0.mount: Deactivated successfully. Feb 23 03:38:09 localhost podman[92764]: 2026-02-23 08:38:09.00688096 +0000 UTC m=+0.082049593 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.13, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, container_name=metrics_qdr, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:38:09 localhost podman[92764]: 2026-02-23 08:38:09.193140717 +0000 UTC m=+0.268309330 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, container_name=metrics_qdr, io.buildah.version=1.41.5, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:38:09 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:38:30 localhost sshd[92901]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:38:30 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:38:30 localhost recover_tripleo_nova_virtqemud[92904]: 62457 Feb 23 03:38:30 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Feb 23 03:38:30 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:38:32 localhost podman[92905]: 2026-02-23 08:38:32.042623129 +0000 UTC m=+0.105443233 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=1766032510, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, batch=17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:38:32 localhost podman[92938]: 2026-02-23 08:38:32.110310099 +0000 UTC m=+0.154911543 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 23 03:38:32 localhost podman[92910]: 2026-02-23 08:38:32.154371434 +0000 UTC m=+0.210115490 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.5, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, vcs-type=git, tcib_managed=true, release=1766032510, batch=17.1_20260112.1, 
managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:38:32 localhost podman[92938]: 2026-02-23 08:38:32.16073827 +0000 UTC m=+0.205339704 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team) Feb 23 03:38:32 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:38:32 localhost podman[92910]: 2026-02-23 08:38:32.189339629 +0000 UTC m=+0.245083645 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, release=1766032510, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, distribution-scope=public, batch=17.1_20260112.1, architecture=x86_64) Feb 23 03:38:32 localhost podman[92907]: 2026-02-23 08:38:32.199899083 +0000 UTC m=+0.262214591 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, release=1766032510, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, 
batch=17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 23 03:38:32 localhost podman[92908]: 2026-02-23 08:38:32.212357387 +0000 UTC m=+0.268422303 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510) Feb 23 03:38:32 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:38:32 localhost podman[92927]: 2026-02-23 08:38:32.265209232 +0000 UTC m=+0.306455633 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20260112.1, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, vcs-type=git, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public) Feb 23 03:38:32 localhost podman[92927]: 2026-02-23 08:38:32.304196759 +0000 UTC m=+0.345443180 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, vendor=Red Hat, Inc., release=1766032510, com.redhat.component=openstack-cron-container, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true) Feb 23 03:38:32 localhost podman[92939]: 2026-02-23 08:38:32.307219293 +0000 UTC m=+0.346698779 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, version=17.1.13) Feb 23 03:38:32 localhost podman[92906]: 2026-02-23 08:38:32.317664714 +0000 UTC m=+0.381390276 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_controller) Feb 23 03:38:32 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:38:32 localhost podman[92908]: 2026-02-23 08:38:32.329207518 +0000 UTC m=+0.385272454 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, release=1766032510, managed_by=tripleo_ansible, tcib_managed=true) Feb 23 03:38:32 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:38:32 localhost podman[92906]: 2026-02-23 08:38:32.374915944 +0000 UTC m=+0.438641496 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.5, release=1766032510, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Feb 23 03:38:32 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:38:32 localhost podman[92939]: 2026-02-23 08:38:32.398817848 +0000 UTC m=+0.438297374 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1766032510, version=17.1.13, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=) Feb 23 03:38:32 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:38:32 localhost podman[92926]: 2026-02-23 08:38:32.469784051 +0000 UTC m=+0.513683873 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, release=1766032510, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, version=17.1.13, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, container_name=nova_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:38:32 localhost podman[92905]: 2026-02-23 08:38:32.48407806 +0000 UTC m=+0.546898154 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-collectd, release=1766032510, architecture=x86_64, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, tcib_managed=true, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:38:32 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:38:32 localhost podman[92926]: 2026-02-23 08:38:32.499846834 +0000 UTC m=+0.543746696 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1766032510, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:38:32 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:38:32 localhost podman[92907]: 2026-02-23 08:38:32.554754433 +0000 UTC m=+0.617069891 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, container_name=nova_migration_target, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 23 03:38:32 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:38:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:38:40 localhost podman[93127]: 2026-02-23 08:38:40.017079745 +0000 UTC m=+0.086555422 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 
17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2026-01-12T22:10:14Z) Feb 23 03:38:40 localhost podman[93127]: 2026-02-23 08:38:40.236132649 +0000 UTC m=+0.305608286 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, distribution-scope=public, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 
qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:38:40 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:38:48 localhost sshd[93156]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:39:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 03:39:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.1 total, 600.0 interval
Cumulative writes: 5036 writes, 22K keys, 5036 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 5036 writes, 634 syncs, 7.94 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 03:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. 
Feb 23 03:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:39:03 localhost systemd[1]: tmp-crun.rgMcEX.mount: Deactivated successfully. Feb 23 03:39:03 localhost podman[93158]: 2026-02-23 08:39:03.047118219 +0000 UTC m=+0.110506769 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:39:03 localhost podman[93166]: 2026-02-23 08:39:03.078932027 +0000 UTC m=+0.134851997 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, release=1766032510, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, managed_by=tripleo_ansible, container_name=iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1) Feb 23 03:39:03 localhost podman[93187]: 2026-02-23 08:39:03.060029625 +0000 UTC m=+0.105213074 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, distribution-scope=public, release=1766032510, 
config_id=tripleo_step4, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true) Feb 23 03:39:03 localhost podman[93160]: 2026-02-23 08:39:03.142989666 +0000 UTC m=+0.200111563 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, release=1766032510, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 23 03:39:03 localhost podman[93157]: 2026-02-23 08:39:03.09269421 +0000 UTC m=+0.159857005 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.13, container_name=collectd, 
name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64) Feb 23 03:39:03 localhost podman[93187]: 2026-02-23 08:39:03.144662197 +0000 UTC m=+0.189845616 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, container_name=ceilometer_agent_compute, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 23 03:39:03 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:39:03 localhost podman[93166]: 2026-02-23 08:39:03.164763546 +0000 UTC m=+0.220683506 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, distribution-scope=public, container_name=iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1766032510, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:39:03 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:39:03 localhost podman[93172]: 2026-02-23 08:39:03.204962451 +0000 UTC m=+0.250692748 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, tcib_managed=true, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, version=17.1.13, io.openshift.expose-services=) Feb 23 03:39:03 localhost podman[93178]: 2026-02-23 08:39:03.115085078 +0000 UTC m=+0.161600579 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git) Feb 23 03:39:03 localhost podman[93158]: 2026-02-23 08:39:03.217039042 +0000 UTC m=+0.280427592 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., release=1766032510, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller) Feb 23 03:39:03 localhost podman[93160]: 2026-02-23 08:39:03.222892312 +0000 UTC m=+0.280014249 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 
17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 23 03:39:03 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:39:03 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. Feb 23 03:39:03 localhost podman[93178]: 2026-02-23 08:39:03.299861829 +0000 UTC m=+0.346377300 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, vendor=Red Hat, Inc., 
com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:39:03 localhost podman[93190]: 2026-02-23 08:39:03.308534745 +0000 UTC m=+0.348510644 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., version=17.1.13, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true) Feb 23 03:39:03 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:39:03 localhost podman[93157]: 2026-02-23 08:39:03.32855356 +0000 UTC m=+0.395716425 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.13, architecture=x86_64, batch=17.1_20260112.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, release=1766032510, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true) Feb 23 03:39:03 localhost podman[93190]: 2026-02-23 08:39:03.338793766 +0000 UTC m=+0.378769695 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, vcs-type=git, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, architecture=x86_64) Feb 23 03:39:03 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:39:03 localhost podman[93159]: 2026-02-23 08:39:03.350051712 +0000 UTC m=+0.412003887 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:39:03 localhost podman[93172]: 2026-02-23 08:39:03.378036552 +0000 UTC m=+0.423766899 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, version=17.1.13, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 
nova-compute) Feb 23 03:39:03 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:39:03 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:39:03 localhost podman[93159]: 2026-02-23 08:39:03.703354582 +0000 UTC m=+0.765306838 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, version=17.1.13, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target) Feb 23 03:39:03 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. 
Feb 23 03:39:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 03:39:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 5650 writes, 24K keys, 5650 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5650 writes, 811 syncs, 6.97 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 03:39:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:39:11 localhost podman[93359]: 2026-02-23 08:39:11.005067656 +0000 UTC m=+0.079822794 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, 
vcs-type=git, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, tcib_managed=true) Feb 23 03:39:11 localhost podman[93359]: 2026-02-23 08:39:11.201999521 +0000 UTC m=+0.276754729 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.5, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, 
io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:39:11 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:39:17 localhost sshd[93387]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:39:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:39:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:39:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:39:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:39:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:39:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:39:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:39:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:39:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:39:36 localhost podman[93451]: 2026-02-23 08:39:36.211724632 +0000 UTC m=+0.077245725 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, release=1766032510, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc.) Feb 23 03:39:36 localhost podman[93452]: 2026-02-23 08:39:36.262764061 +0000 UTC m=+0.124293992 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1766032510, build-date=2026-01-12T23:32:04Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:39:36 localhost podman[93465]: 2026-02-23 08:39:36.300537422 +0000 UTC m=+0.162552928 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, release=1766032510, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 23 03:39:36 localhost podman[93465]: 2026-02-23 08:39:36.304590236 +0000 UTC m=+0.166605772 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, 
version=17.1.13, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, vcs-type=git, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.buildah.version=1.41.5, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z) Feb 23 03:39:36 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:39:36 localhost podman[93451]: 2026-02-23 08:39:36.32976125 +0000 UTC m=+0.195282343 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com) Feb 23 03:39:36 localhost podman[93477]: 2026-02-23 08:39:36.349365453 +0000 UTC m=+0.208538771 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, tcib_managed=true, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:39:36 localhost podman[93450]: 2026-02-23 08:39:36.250393611 +0000 UTC m=+0.123303582 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, container_name=collectd, 
batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true) Feb 23 03:39:36 localhost podman[93455]: 2026-02-23 08:39:36.415270799 +0000 UTC m=+0.275004505 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., release=1766032510, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20260112.1, container_name=nova_compute, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:39:36 localhost podman[93453]: 2026-02-23 08:39:36.448016976 +0000 UTC m=+0.316169150 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2026-01-12T22:56:19Z, 
managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, io.openshift.expose-services=, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 23 03:39:36 localhost podman[93455]: 2026-02-23 08:39:36.46276894 +0000 UTC m=+0.322502626 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, architecture=x86_64, config_id=tripleo_step5, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510, version=17.1.13, io.buildah.version=1.41.5, url=https://www.redhat.com, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:39:36 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:39:36 localhost podman[93450]: 2026-02-23 08:39:36.492368319 +0000 UTC m=+0.365278300 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, container_name=collectd, maintainer=OpenStack TripleO Team, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:39:36 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Deactivated successfully. 
Feb 23 03:39:36 localhost podman[93477]: 2026-02-23 08:39:36.499750556 +0000 UTC m=+0.358923874 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, architecture=x86_64, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, batch=17.1_20260112.1) Feb 23 03:39:36 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:39:36 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:39:36 localhost podman[93467]: 2026-02-23 08:39:36.463594985 +0000 UTC m=+0.323512416 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, tcib_managed=true, build-date=2026-01-12T23:07:47Z, release=1766032510, 
architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 23 03:39:36 localhost podman[93454]: 2026-02-23 08:39:36.561899887 +0000 UTC m=+0.429974489 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, 
name=iscsid, health_status=healthy, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2026-01-12T22:34:43Z, 
maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=iscsid, release=1766032510, version=17.1.13, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64) Feb 23 03:39:36 localhost podman[93454]: 2026-02-23 08:39:36.572689788 +0000 UTC m=+0.440764400 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, batch=17.1_20260112.1) Feb 23 03:39:36 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:39:36 localhost podman[93453]: 2026-02-23 08:39:36.59646206 +0000 UTC m=+0.464614264 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.5, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:39:36 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:39:36 localhost podman[93452]: 2026-02-23 08:39:36.619647362 +0000 UTC m=+0.481177263 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.5, version=17.1.13, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, release=1766032510, container_name=nova_migration_target, io.openshift.expose-services=) Feb 23 03:39:36 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:39:36 localhost podman[93467]: 2026-02-23 08:39:36.647660703 +0000 UTC m=+0.507578164 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vendor=Red Hat, Inc., vcs-type=git, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, 
konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5) Feb 23 03:39:36 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:39:36 localhost podman[93723]: Feb 23 03:39:36 localhost podman[93723]: 2026-02-23 08:39:36.986142099 +0000 UTC m=+0.080939069 container create 53ef7eb4f9dd2cc4118ccddb2c1d7ca83c445e7687ac91d02a3208abcba1db8a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_beaver, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, ceph=True, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, release=1770267347) Feb 23 03:39:37 localhost systemd[1]: Started libpod-conmon-53ef7eb4f9dd2cc4118ccddb2c1d7ca83c445e7687ac91d02a3208abcba1db8a.scope. Feb 23 03:39:37 localhost systemd[1]: Started libcrun container. 
Feb 23 03:39:37 localhost podman[93723]: 2026-02-23 08:39:36.954224337 +0000 UTC m=+0.049021347 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 03:39:37 localhost podman[93723]: 2026-02-23 08:39:37.057405579 +0000 UTC m=+0.152202559 container init 53ef7eb4f9dd2cc4118ccddb2c1d7ca83c445e7687ac91d02a3208abcba1db8a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_beaver, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, maintainer=Guillaume Abrioux , version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, name=rhceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7) Feb 23 03:39:37 localhost podman[93723]: 2026-02-23 08:39:37.065462017 +0000 UTC m=+0.160258987 container start 53ef7eb4f9dd2cc4118ccddb2c1d7ca83c445e7687ac91d02a3208abcba1db8a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_beaver, maintainer=Guillaume Abrioux , RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, version=7, release=1770267347, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.42.2, architecture=x86_64, name=rhceph, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 23 03:39:37 localhost podman[93723]: 2026-02-23 08:39:37.066425536 +0000 UTC m=+0.161222556 container attach 53ef7eb4f9dd2cc4118ccddb2c1d7ca83c445e7687ac91d02a3208abcba1db8a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_beaver, distribution-scope=public, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume 
Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-type=git, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_BRANCH=main, name=rhceph) Feb 23 03:39:37 localhost elated_beaver[93738]: 167 167 Feb 23 03:39:37 localhost systemd[1]: libpod-53ef7eb4f9dd2cc4118ccddb2c1d7ca83c445e7687ac91d02a3208abcba1db8a.scope: Deactivated successfully. Feb 23 03:39:37 localhost podman[93723]: 2026-02-23 08:39:37.0710991 +0000 UTC m=+0.165896100 container died 53ef7eb4f9dd2cc4118ccddb2c1d7ca83c445e7687ac91d02a3208abcba1db8a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_beaver, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, vcs-type=git, architecture=x86_64, ceph=True) Feb 23 03:39:37 localhost 
podman[93744]: 2026-02-23 08:39:37.165708989 +0000 UTC m=+0.077860245 container remove 53ef7eb4f9dd2cc4118ccddb2c1d7ca83c445e7687ac91d02a3208abcba1db8a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_beaver, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, ceph=True, version=7, build-date=2026-02-09T10:25:24Z, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 23 03:39:37 localhost systemd[1]: libpod-conmon-53ef7eb4f9dd2cc4118ccddb2c1d7ca83c445e7687ac91d02a3208abcba1db8a.scope: Deactivated successfully. 
Feb 23 03:39:37 localhost podman[93765]: Feb 23 03:39:37 localhost podman[93765]: 2026-02-23 08:39:37.367664047 +0000 UTC m=+0.071151169 container create 6594984e8b9512962834629f790d001bd249d8754d09918573247244af84484f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_mahavira, maintainer=Guillaume Abrioux , ceph=True, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_BRANCH=main, vcs-type=git, version=7, com.redhat.component=rhceph-container) Feb 23 03:39:37 localhost systemd[1]: Started libpod-conmon-6594984e8b9512962834629f790d001bd249d8754d09918573247244af84484f.scope. Feb 23 03:39:37 localhost systemd[1]: Started libcrun container. 
Feb 23 03:39:37 localhost podman[93765]: 2026-02-23 08:39:37.329138153 +0000 UTC m=+0.032625325 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 03:39:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167464c919287ab4d87880f4a4269a72c5d46093dde8ed8f7ba1fa9ca20dd7a9/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 23 03:39:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167464c919287ab4d87880f4a4269a72c5d46093dde8ed8f7ba1fa9ca20dd7a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 23 03:39:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167464c919287ab4d87880f4a4269a72c5d46093dde8ed8f7ba1fa9ca20dd7a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 23 03:39:37 localhost podman[93765]: 2026-02-23 08:39:37.440923389 +0000 UTC m=+0.144410511 container init 6594984e8b9512962834629f790d001bd249d8754d09918573247244af84484f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_mahavira, io.buildah.version=1.42.2, release=1770267347, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, 
io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7) Feb 23 03:39:37 localhost podman[93765]: 2026-02-23 08:39:37.451373131 +0000 UTC m=+0.154860233 container start 6594984e8b9512962834629f790d001bd249d8754d09918573247244af84484f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_mahavira, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , release=1770267347, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 23 03:39:37 localhost podman[93765]: 2026-02-23 08:39:37.451568317 +0000 UTC m=+0.155055479 container attach 6594984e8b9512962834629f790d001bd249d8754d09918573247244af84484f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_mahavira, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, ceph=True, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, release=1770267347, RELEASE=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 23 03:39:38 localhost elastic_mahavira[93780]: [
Feb 23 03:39:38 localhost elastic_mahavira[93780]: {
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "available": false,
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "ceph_device": false,
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "device_id": "QEMU_DVD-ROM_QM00001",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "lsm_data": {},
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "lvs": [],
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "path": "/dev/sr0",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "rejected_reasons": [
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "Has a FileSystem",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "Insufficient space (<5GB)"
Feb 23 03:39:38 localhost elastic_mahavira[93780]: ],
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "sys_api": {
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "actuators": null,
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "device_nodes": "sr0",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "human_readable_size": "482.00 KB",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "id_bus": "ata",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "model": "QEMU DVD-ROM",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "nr_requests": "2",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "partitions": {},
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "path": "/dev/sr0",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "removable": "1",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "rev": "2.5+",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "ro": "0",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "rotational": "1",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "sas_address": "",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "sas_device_handle": "",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "scheduler_mode": "mq-deadline",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "sectors": 0,
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "sectorsize": "2048",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "size": 493568.0,
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "support_discard": "0",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "type": "disk",
Feb 23 03:39:38 localhost elastic_mahavira[93780]: "vendor": "QEMU"
Feb 23 03:39:38 localhost elastic_mahavira[93780]: }
Feb 23 03:39:38 localhost elastic_mahavira[93780]: }
Feb 23 03:39:38 localhost elastic_mahavira[93780]: ]
Feb 23 03:39:38 localhost systemd[1]: libpod-6594984e8b9512962834629f790d001bd249d8754d09918573247244af84484f.scope: Deactivated successfully.
Feb 23 03:39:38 localhost systemd[1]: libpod-6594984e8b9512962834629f790d001bd249d8754d09918573247244af84484f.scope: Consumed 1.034s CPU time.
Feb 23 03:39:38 localhost podman[93765]: 2026-02-23 08:39:38.454643042 +0000 UTC m=+1.158130174 container died 6594984e8b9512962834629f790d001bd249d8754d09918573247244af84484f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_mahavira, CEPH_POINT_RELEASE=, version=7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, release=1770267347, RELEASE=main, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, ceph=True, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux ) Feb 23 03:39:38 localhost systemd[1]: var-lib-containers-storage-overlay-167464c919287ab4d87880f4a4269a72c5d46093dde8ed8f7ba1fa9ca20dd7a9-merged.mount: Deactivated successfully. 
Feb 23 03:39:38 localhost podman[95813]: 2026-02-23 08:39:38.544132294 +0000 UTC m=+0.075292206 container remove 6594984e8b9512962834629f790d001bd249d8754d09918573247244af84484f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_mahavira, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.buildah.version=1.42.2, release=1770267347, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z) Feb 23 03:39:38 localhost systemd[1]: libpod-conmon-6594984e8b9512962834629f790d001bd249d8754d09918573247244af84484f.scope: Deactivated successfully. Feb 23 03:39:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:39:42 localhost podman[95842]: 2026-02-23 08:39:42.010472223 +0000 UTC m=+0.082095714 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.13, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z) Feb 23 03:39:42 localhost podman[95842]: 2026-02-23 08:39:42.189138056 +0000 UTC m=+0.260761617 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.5, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 
'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, config_id=tripleo_step1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container) Feb 23 03:39:42 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:40:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:7f:2b:8f MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.103 DST=192.168.122.107 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=5668 DPT=36386 SEQ=0 ACK=87912462 WINDOW=0 RES=0x00 ACK RST URGP=0 Feb 23 03:40:05 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Feb 23 03:40:05 localhost recover_tripleo_nova_virtqemud[95872]: 62457 Feb 23 03:40:05 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:40:05 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:40:06 localhost sshd[95874]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:40:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:40:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:40:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:40:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:40:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:40:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:40:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:40:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:40:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:40:06 localhost podman[95876]: 2026-02-23 08:40:06.802849897 +0000 UTC m=+0.089008057 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.openshift.expose-services=, container_name=collectd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510) Feb 23 03:40:06 localhost systemd[1]: tmp-crun.zFdEy0.mount: Deactivated successfully. Feb 23 03:40:06 localhost podman[95897]: 2026-02-23 08:40:06.813457194 +0000 UTC m=+0.079926679 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.13) Feb 23 03:40:06 localhost podman[95876]: 2026-02-23 08:40:06.814845816 +0000 UTC m=+0.101003976 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible) Feb 23 03:40:06 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:40:06 localhost systemd[1]: tmp-crun.De45nP.mount: Deactivated successfully. Feb 23 03:40:06 localhost podman[95886]: 2026-02-23 08:40:06.855789305 +0000 UTC m=+0.126727217 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.5, release=1766032510, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 23 03:40:06 localhost podman[95880]: 2026-02-23 08:40:06.869527667 +0000 UTC m=+0.140278983 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, version=17.1.13, batch=17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:40:06 localhost podman[95897]: 2026-02-23 08:40:06.892358569 +0000 UTC m=+0.158828044 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, release=1766032510, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond) Feb 23 03:40:06 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:40:06 localhost podman[95905]: 2026-02-23 08:40:06.902362867 +0000 UTC m=+0.167232932 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, architecture=x86_64, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:40:06 localhost podman[95880]: 2026-02-23 08:40:06.905653668 +0000 UTC m=+0.176404994 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:40:06 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. 
Feb 23 03:40:06 localhost podman[95905]: 2026-02-23 08:40:06.945658397 +0000 UTC m=+0.210528432 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi) Feb 23 03:40:06 localhost podman[95878]: 2026-02-23 08:40:06.952969223 +0000 UTC m=+0.233483050 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5) Feb 23 03:40:06 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:40:07 localhost podman[95901]: 2026-02-23 08:40:07.009768358 +0000 UTC m=+0.278698398 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container) Feb 23 03:40:07 localhost podman[95901]: 2026-02-23 08:40:07.028742392 +0000 UTC m=+0.297672452 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, 
vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, distribution-scope=public, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 23 03:40:07 localhost podman[95886]: 2026-02-23 08:40:07.040030269 +0000 UTC m=+0.310968151 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, 
com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, url=https://www.redhat.com, release=1766032510, 
build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, container_name=iscsid) Feb 23 03:40:07 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:40:07 localhost podman[95892]: 2026-02-23 08:40:07.06609801 +0000 UTC m=+0.331293305 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5) Feb 23 03:40:07 localhost podman[95877]: 2026-02-23 08:40:07.111573858 +0000 UTC m=+0.394036254 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, 
health_status=healthy, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, distribution-scope=public, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Feb 23 03:40:07 localhost 
podman[95892]: 2026-02-23 08:40:07.146047208 +0000 UTC m=+0.411242503 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.5, release=1766032510, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=nova_compute, batch=17.1_20260112.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git) Feb 23 03:40:07 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:40:07 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:40:07 localhost podman[95877]: 2026-02-23 08:40:07.198372686 +0000 UTC m=+0.480835122 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_controller, vcs-type=git, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Feb 23 03:40:07 localhost podman[95877]: unhealthy Feb 23 03:40:07 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:40:07 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:40:07 localhost podman[95878]: 2026-02-23 08:40:07.277826689 +0000 UTC m=+0.558340516 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.buildah.version=1.41.5, version=17.1.13, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:40:07 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:40:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:40:12 localhost podman[96080]: 2026-02-23 08:40:12.996363955 +0000 UTC m=+0.075612635 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, release=1766032510, version=17.1.13) Feb 23 03:40:13 localhost podman[96080]: 2026-02-23 08:40:13.163368639 +0000 UTC m=+0.242617309 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, release=1766032510, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd) Feb 23 03:40:13 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:40:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:40:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. 
Feb 23 03:40:36 localhost podman[96110]: 2026-02-23 08:40:36.991922321 +0000 UTC m=+0.070137957 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-cron-container, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, managed_by=tripleo_ansible, version=17.1.13, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:40:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:40:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:40:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. 
Feb 23 03:40:37 localhost podman[96110]: 2026-02-23 08:40:37.072721394 +0000 UTC m=+0.150936950 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, tcib_managed=true, 
konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team) Feb 23 03:40:37 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:40:37 localhost systemd[1]: tmp-crun.OzmGA2.mount: Deactivated successfully. Feb 23 03:40:37 localhost podman[96135]: 2026-02-23 08:40:37.094845754 +0000 UTC m=+0.078688030 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, container_name=ovn_metadata_agent) Feb 23 03:40:37 localhost podman[96109]: 2026-02-23 08:40:37.051697708 +0000 UTC m=+0.130077109 container health_status 
1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, container_name=collectd, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510) Feb 23 03:40:37 localhost podman[96135]: 2026-02-23 08:40:37.131754549 +0000 UTC m=+0.115596765 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, distribution-scope=public, tcib_managed=true, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 23 03:40:37 localhost podman[96109]: 2026-02-23 08:40:37.131916354 +0000 UTC m=+0.210295765 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, container_name=collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack 
Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true) Feb 23 03:40:37 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Deactivated successfully. Feb 23 03:40:37 localhost podman[96161]: 2026-02-23 08:40:37.140999393 +0000 UTC m=+0.068176656 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.5, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:40:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:40:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. 
Feb 23 03:40:37 localhost podman[96136]: 2026-02-23 08:40:37.190621139 +0000 UTC m=+0.169400549 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, release=1766032510, version=17.1.13) Feb 23 03:40:37 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:40:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:40:37 localhost podman[96208]: 2026-02-23 08:40:37.236065516 +0000 UTC m=+0.063724570 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, container_name=iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 23 03:40:37 localhost podman[96208]: 2026-02-23 08:40:37.246693093 +0000 UTC m=+0.074352137 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-iscsid-container, 
release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, summary=Red Hat 
OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=iscsid) Feb 23 03:40:37 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:40:37 localhost podman[96246]: 2026-02-23 08:40:37.337880975 +0000 UTC m=+0.111504879 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, 
vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, vendor=Red Hat, Inc., container_name=ovn_controller, batch=17.1_20260112.1, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:40:37 localhost podman[96246]: 2026-02-23 08:40:37.352105643 +0000 UTC m=+0.125729517 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:40:37 localhost podman[96246]: unhealthy Feb 23 03:40:37 localhost podman[96161]: 2026-02-23 08:40:37.36437765 +0000 UTC m=+0.291554983 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.13, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible) Feb 23 03:40:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:40:37 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:40:37 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:40:37 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:40:37 localhost podman[96136]: 2026-02-23 08:40:37.415709548 +0000 UTC m=+0.394489038 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vcs-type=git) Feb 23 03:40:37 localhost podman[96209]: 2026-02-23 08:40:37.433745982 +0000 UTC m=+0.258293101 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, tcib_managed=true, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, vendor=Red Hat, Inc., container_name=nova_compute, architecture=x86_64, 
com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, maintainer=OpenStack TripleO Team, release=1766032510) Feb 23 03:40:37 localhost podman[96209]: 2026-02-23 08:40:37.45871751 +0000 UTC m=+0.283264679 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, architecture=x86_64, config_id=tripleo_step5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, version=17.1.13, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Feb 23 03:40:37 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:40:37 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:40:37 localhost podman[96280]: 2026-02-23 08:40:37.548690626 +0000 UTC m=+0.173604398 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, batch=17.1_20260112.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:40:37 localhost podman[96280]: 2026-02-23 08:40:37.935814997 +0000 UTC m=+0.560728769 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, container_name=nova_migration_target, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.13, batch=17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:40:38 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:40:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:40:44 localhost podman[96446]: 2026-02-23 08:40:43.999908986 +0000 UTC m=+0.072299334 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, version=17.1.13, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:40:44 localhost podman[96446]: 2026-02-23 08:40:44.196713105 +0000 UTC m=+0.269103373 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, batch=17.1_20260112.1) Feb 23 03:40:44 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 03:40:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:7f:2b:8f MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.103 DST=192.168.122.107 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=5668 DPT=45044 SEQ=0 ACK=1054100263 WINDOW=0 RES=0x00 ACK RST URGP=0 Feb 23 03:40:56 localhost sshd[96474]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:41:08 localhost systemd[1]: tmp-crun.KJGEAa.mount: Deactivated successfully. Feb 23 03:41:08 localhost systemd[1]: tmp-crun.l2RiaM.mount: Deactivated successfully. 
Feb 23 03:41:08 localhost podman[96477]: 2026-02-23 08:41:08.034511003 +0000 UTC m=+0.099094138 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, vcs-type=git, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:41:08 localhost podman[96497]: 2026-02-23 08:41:08.092103763 +0000 UTC m=+0.139398466 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, architecture=x86_64, vcs-type=git, build-date=2026-01-12T22:10:15Z) Feb 23 03:41:08 localhost podman[96477]: 2026-02-23 08:41:08.118728022 +0000 UTC m=+0.183311157 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, tcib_managed=true, version=17.1.13, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc.) Feb 23 03:41:08 localhost podman[96477]: unhealthy Feb 23 03:41:08 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:41:08 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:41:08 localhost podman[96497]: 2026-02-23 08:41:08.177413686 +0000 UTC m=+0.224708369 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, url=https://www.redhat.com, version=17.1.13, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:41:08 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:41:08 localhost podman[96490]: 2026-02-23 08:41:08.191787308 +0000 UTC m=+0.235796300 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1766032510, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, tcib_managed=true, version=17.1.13, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:41:08 localhost podman[96502]: 2026-02-23 08:41:08.056486129 +0000 UTC m=+0.094274970 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, config_id=tripleo_step4, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:41:08 localhost podman[96507]: 2026-02-23 08:41:08.16162499 +0000 UTC m=+0.198938347 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-type=git, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, io.buildah.version=1.41.5, url=https://www.redhat.com) Feb 23 03:41:08 localhost podman[96490]: 2026-02-23 08:41:08.215832866 +0000 UTC m=+0.259841888 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1766032510, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:41:08 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:41:08 localhost podman[96478]: 2026-02-23 08:41:08.084861631 +0000 UTC m=+0.145956278 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z) Feb 23 03:41:08 localhost podman[96476]: 2026-02-23 08:41:08.136674603 +0000 UTC m=+0.205560550 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, io.buildah.version=1.41.5, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, 
version=17.1.13) Feb 23 03:41:08 localhost podman[96507]: 2026-02-23 08:41:08.245810088 +0000 UTC m=+0.283123445 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, vcs-type=git, release=1766032510, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, 
konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi) Feb 23 03:41:08 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:41:08 localhost podman[96502]: 2026-02-23 08:41:08.2881725 +0000 UTC m=+0.325961301 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=) Feb 23 03:41:08 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:41:08 localhost podman[96478]: 2026-02-23 08:41:08.31579533 +0000 UTC m=+0.376889987 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, batch=17.1_20260112.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ovn_metadata_agent, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:41:08 localhost podman[96478]: unhealthy Feb 23 03:41:08 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:41:08 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:41:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:41:08 localhost podman[96476]: 2026-02-23 08:41:08.371718619 +0000 UTC m=+0.440604586 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=1766032510, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z) Feb 23 03:41:08 localhost podman[96484]: 2026-02-23 08:41:08.289187192 +0000 UTC m=+0.346994098 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.5, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, tcib_managed=true, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, 
url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 23 03:41:08 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:41:08 localhost podman[96484]: 2026-02-23 08:41:08.429797144 +0000 UTC m=+0.487604030 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, container_name=iscsid, distribution-scope=public, vcs-type=git, release=1766032510, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-iscsid, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=) Feb 23 03:41:08 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:41:08 localhost podman[96647]: 2026-02-23 08:41:08.441094282 +0000 UTC m=+0.094560038 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, tcib_managed=true, architecture=x86_64, version=17.1.13, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, io.buildah.version=1.41.5, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:41:08 localhost podman[96647]: 2026-02-23 08:41:08.834556707 +0000 UTC m=+0.488022563 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, 
config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5) Feb 23 03:41:08 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:41:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:41:15 localhost podman[96673]: 2026-02-23 08:41:15.004700626 +0000 UTC m=+0.077905767 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, 
name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20260112.1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, distribution-scope=public) Feb 23 03:41:15 localhost podman[96673]: 2026-02-23 08:41:15.202079693 +0000 UTC m=+0.275284834 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, release=1766032510, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:41:15 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:41:35 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Feb 23 03:41:35 localhost recover_tripleo_nova_virtqemud[96703]: 62457 Feb 23 03:41:35 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:41:35 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:41:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:41:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:41:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:41:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:41:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:41:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:41:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:41:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:41:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:41:39 localhost podman[96707]: 2026-02-23 08:41:39.04803264 +0000 UTC m=+0.103927125 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, io.buildah.version=1.41.5, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:41:39 localhost podman[96732]: 2026-02-23 08:41:39.101073581 +0000 UTC m=+0.142398878 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, tcib_managed=true, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, distribution-scope=public, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:41:39 localhost podman[96732]: 2026-02-23 08:41:39.106687754 +0000 UTC m=+0.148013061 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, 
architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, url=https://www.redhat.com, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.5) Feb 23 03:41:39 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:41:39 localhost podman[96704]: 2026-02-23 08:41:39.085483192 +0000 UTC m=+0.151612901 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, container_name=collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, tcib_managed=true, version=17.1.13) Feb 23 03:41:39 localhost podman[96719]: 2026-02-23 08:41:39.148799068 +0000 UTC m=+0.197651036 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, container_name=iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vendor=Red Hat, Inc.) Feb 23 03:41:39 localhost podman[96707]: 2026-02-23 08:41:39.165245514 +0000 UTC m=+0.221140009 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, io.k8s.description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:41:39 localhost podman[96707]: unhealthy Feb 23 03:41:39 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:41:39 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. 
Feb 23 03:41:39 localhost podman[96725]: 2026-02-23 08:41:39.211059883 +0000 UTC m=+0.248889343 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13) Feb 23 03:41:39 localhost podman[96704]: 2026-02-23 08:41:39.219553854 +0000 UTC m=+0.285683593 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.13, container_name=collectd, release=1766032510, architecture=x86_64, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 23 03:41:39 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:41:39 localhost podman[96748]: 2026-02-23 08:41:39.260635816 +0000 UTC m=+0.288919862 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:41:39 localhost podman[96725]: 2026-02-23 08:41:39.270089887 +0000 UTC m=+0.307919317 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.openshift.expose-services=, release=1766032510, config_id=tripleo_step5, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute) Feb 23 03:41:39 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:41:39 localhost podman[96706]: 2026-02-23 08:41:39.311628025 +0000 UTC m=+0.369217842 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, release=1766032510, vcs-type=git, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true) Feb 23 03:41:39 localhost podman[96741]: 2026-02-23 08:41:39.37393462 +0000 UTC m=+0.409294053 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:41:39 localhost podman[96748]: 2026-02-23 08:41:39.385884137 +0000 UTC m=+0.414168213 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:41:39 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:41:39 localhost podman[96741]: 2026-02-23 08:41:39.406834801 +0000 UTC m=+0.442194224 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, version=17.1.13, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:41:39 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:41:39 localhost podman[96719]: 2026-02-23 08:41:39.437993549 +0000 UTC m=+0.486845517 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_id=tripleo_step3, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Feb 23 03:41:39 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:41:39 localhost podman[96705]: 2026-02-23 08:41:39.13875464 +0000 UTC m=+0.204453346 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.13, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, url=https://www.redhat.com, 
batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:41:39 localhost podman[96705]: 2026-02-23 08:41:39.524771927 +0000 UTC m=+0.590470653 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, release=1766032510, 
vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:41:39 localhost podman[96705]: unhealthy Feb 23 03:41:39 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:41:39 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:41:39 localhost podman[96706]: 2026-02-23 08:41:39.699883339 +0000 UTC m=+0.757473136 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4) Feb 23 03:41:39 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:41:43 localhost sshd[96964]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:41:45 localhost sshd[96981]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:41:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:41:46 localhost podman[96983]: 2026-02-23 08:41:46.006762471 +0000 UTC m=+0.080589789 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-type=git, tcib_managed=true, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Feb 23 03:41:46 localhost podman[96983]: 2026-02-23 08:41:46.223952107 +0000 UTC m=+0.297779365 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, config_id=tripleo_step1, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, version=17.1.13, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git) Feb 23 03:41:46 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:42:01 localhost sshd[97012]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:42:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:42:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:42:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:42:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:42:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:42:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:42:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:42:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:42:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:42:10 localhost podman[97017]: 2026-02-23 08:42:10.031058418 +0000 UTC m=+0.093137405 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, release=1766032510, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 23 03:42:10 localhost podman[97015]: 2026-02-23 08:42:10.043271275 +0000 UTC m=+0.109705385 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, io.buildah.version=1.41.5, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20260112.1) Feb 23 03:42:10 localhost podman[97015]: 2026-02-23 08:42:10.053662223 +0000 UTC m=+0.120096363 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
release=1766032510, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:42:10 localhost podman[97015]: unhealthy Feb 23 03:42:10 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:42:10 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:42:10 localhost podman[97017]: 2026-02-23 08:42:10.093670394 +0000 UTC m=+0.155749401 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1766032510, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 23 03:42:10 localhost podman[97017]: unhealthy Feb 23 03:42:10 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:42:10 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. 
Feb 23 03:42:10 localhost podman[97038]: 2026-02-23 08:42:10.100041169 +0000 UTC m=+0.152064976 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, release=1766032510, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:42:10 localhost podman[97046]: 2026-02-23 08:42:10.15339737 +0000 UTC m=+0.195730119 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.5, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, tcib_managed=true, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 23 03:42:10 localhost podman[97014]: 2026-02-23 08:42:10.203980305 +0000 UTC m=+0.272135178 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-collectd-container, tcib_managed=true, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd) Feb 23 03:42:10 localhost podman[97046]: 2026-02-23 08:42:10.20483358 +0000 UTC m=+0.247166389 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:47Z, vcs-type=git, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:42:10 localhost podman[97038]: 2026-02-23 08:42:10.23699622 +0000 UTC m=+0.289020037 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-type=git, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, 
summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_id=tripleo_step5, tcib_managed=true, release=1766032510, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Feb 23 03:42:10 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:42:10 localhost podman[97026]: 2026-02-23 08:42:10.255954683 +0000 UTC m=+0.312375506 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, distribution-scope=public, version=17.1.13, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 23 03:42:10 localhost podman[97026]: 2026-02-23 08:42:10.264744743 +0000 UTC m=+0.321165596 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, io.buildah.version=1.41.5, tcib_managed=true, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, version=17.1.13, url=https://www.redhat.com, config_id=tripleo_step3, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510) Feb 23 03:42:10 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:42:10 localhost podman[97014]: 2026-02-23 08:42:10.29133432 +0000 UTC m=+0.359489193 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5) Feb 23 03:42:10 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:42:10 localhost podman[97040]: 2026-02-23 08:42:10.307836837 +0000 UTC m=+0.346974848 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, version=17.1.13, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, 
batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 23 03:42:10 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:42:10 localhost podman[97016]: 2026-02-23 08:42:10.133958602 +0000 UTC m=+0.197238055 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Feb 23 03:42:10 localhost podman[97053]: 2026-02-23 08:42:10.354738769 +0000 UTC m=+0.395885261 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, version=17.1.13, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4) Feb 23 03:42:10 localhost podman[97053]: 2026-02-23 08:42:10.384813414 +0000 UTC m=+0.425959896 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git) Feb 23 03:42:10 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:42:10 localhost podman[97040]: 2026-02-23 08:42:10.441858157 +0000 UTC m=+0.480996148 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, version=17.1.13, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 23 03:42:10 localhost podman[97016]: 2026-02-23 08:42:10.449842763 +0000 UTC m=+0.513122256 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.buildah.version=1.41.5, container_name=nova_migration_target, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1766032510, summary=Red Hat OpenStack Platform 17.1 
nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible) Feb 23 03:42:10 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:42:10 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:42:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:42:17 localhost podman[97212]: 2026-02-23 08:42:17.019397951 +0000 UTC m=+0.088054967 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, container_name=metrics_qdr, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z) Feb 23 03:42:17 localhost podman[97212]: 2026-02-23 08:42:17.223718812 +0000 UTC m=+0.292375858 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red 
Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:42:17 
localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:42:31 localhost sshd[97241]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:42:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:42:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:42:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:42:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:42:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:42:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:42:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:42:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:42:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:42:41 localhost systemd[1]: tmp-crun.sf0t2d.mount: Deactivated successfully. 
Feb 23 03:42:41 localhost podman[97264]: 2026-02-23 08:42:41.034675113 +0000 UTC m=+0.072676745 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, version=17.1.13, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 23 03:42:41 localhost systemd[1]: tmp-crun.qJoaEc.mount: Deactivated successfully. Feb 23 03:42:41 localhost podman[97264]: 2026-02-23 08:42:41.084465323 +0000 UTC m=+0.122466985 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, version=17.1.13, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20260112.1) Feb 23 03:42:41 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:42:41 localhost podman[97245]: 2026-02-23 08:42:41.133707017 +0000 UTC m=+0.181824350 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, container_name=ovn_controller, config_id=tripleo_step4, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, url=https://www.redhat.com, 
io.openshift.expose-services=, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:42:41 localhost podman[97249]: 2026-02-23 08:42:41.088879139 +0000 UTC m=+0.129724318 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, version=17.1.13, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:42:41 localhost podman[97245]: 2026-02-23 08:42:41.141747815 +0000 UTC m=+0.189865168 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, url=https://www.redhat.com, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5) Feb 23 03:42:41 localhost podman[97245]: unhealthy Feb 23 03:42:41 localhost 
systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:42:41 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:42:41 localhost podman[97250]: 2026-02-23 08:42:41.055352519 +0000 UTC m=+0.093305209 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, 
konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:42:41 localhost podman[97244]: 2026-02-23 08:42:41.140718392 +0000 UTC m=+0.188328540 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, version=17.1.13, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com) Feb 23 03:42:41 localhost podman[97248]: 2026-02-23 08:42:41.201890963 +0000 UTC m=+0.239996228 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, 
name=iscsid, health_status=healthy, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, tcib_managed=true, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3) Feb 23 03:42:41 localhost podman[97249]: 2026-02-23 08:42:41.217309578 +0000 UTC m=+0.258154737 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, batch=17.1_20260112.1, version=17.1.13, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.5, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:42:41 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:42:41 localhost podman[97250]: 2026-02-23 08:42:41.23463271 +0000 UTC m=+0.272585410 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, config_id=tripleo_step4, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, tcib_managed=true, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:42:41 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:42:41 localhost podman[97244]: 2026-02-23 08:42:41.270231795 +0000 UTC m=+0.317841933 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, container_name=collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:42:41 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:42:41 localhost podman[97248]: 2026-02-23 08:42:41.282246864 +0000 UTC m=+0.320352119 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, 
config_id=tripleo_step3, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid) Feb 23 03:42:41 localhost podman[97246]: 2026-02-23 08:42:41.30457309 +0000 UTC m=+0.348390641 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, build-date=2026-01-12T23:32:04Z, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, managed_by=tripleo_ansible) Feb 23 03:42:41 localhost podman[97247]: 2026-02-23 08:42:41.072049242 +0000 UTC m=+0.117395740 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, vendor=Red Hat, Inc.) Feb 23 03:42:41 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:42:41 localhost podman[97266]: 2026-02-23 08:42:41.397920879 +0000 UTC m=+0.435129917 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.13, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team) Feb 23 03:42:41 localhost podman[97247]: 2026-02-23 08:42:41.403543622 +0000 UTC m=+0.448890150 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, version=17.1.13, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:42:41 localhost podman[97247]: unhealthy Feb 23 03:42:41 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:42:41 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:42:41 localhost podman[97266]: 2026-02-23 08:42:41.438370233 +0000 UTC m=+0.475579281 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, 
io.openshift.expose-services=, tcib_managed=true, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team) Feb 23 03:42:41 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:42:41 localhost podman[97246]: 2026-02-23 08:42:41.646926834 +0000 UTC m=+0.690744395 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, 
io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:42:41 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:42:42 localhost sshd[97440]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:42:44 localhost podman[97545]: 2026-02-23 08:42:44.6199959 +0000 UTC m=+0.086738097 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-type=git, name=rhceph, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, 
vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 23 03:42:44 localhost podman[97545]: 2026-02-23 08:42:44.717214459 +0000 UTC m=+0.183956676 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.openshift.tags=rhceph ceph, release=1770267347, name=rhceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, ceph=True, RELEASE=main, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 23 03:42:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:42:48 localhost podman[97686]: 2026-02-23 08:42:48.010592413 +0000 UTC m=+0.082814538 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.buildah.version=1.41.5, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, io.openshift.expose-services=, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible) Feb 23 03:42:48 localhost podman[97686]: 2026-02-23 08:42:48.222395333 +0000 UTC m=+0.294617438 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, tcib_managed=true, config_id=tripleo_step1, vcs-type=git, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, architecture=x86_64, container_name=metrics_qdr, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:42:48 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:43:05 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:43:05 localhost recover_tripleo_nova_virtqemud[97717]: 62457 Feb 23 03:43:05 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:43:05 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:43:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. 
Feb 23 03:43:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:43:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:43:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:43:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:43:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:43:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:43:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:43:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:43:12 localhost systemd[1]: tmp-crun.nFYzDJ.mount: Deactivated successfully. 
Feb 23 03:43:12 localhost podman[97729]: 2026-02-23 08:43:12.029089244 +0000 UTC m=+0.090957017 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, version=17.1.13, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, container_name=nova_compute, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, architecture=x86_64, config_id=tripleo_step5, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git) Feb 23 03:43:12 localhost podman[97721]: 2026-02-23 08:43:12.040191445 +0000 UTC m=+0.101563833 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, build-date=2026-01-12T22:56:19Z, tcib_managed=true, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git) Feb 23 03:43:12 localhost podman[97729]: 2026-02-23 08:43:12.079600327 +0000 UTC m=+0.141468130 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, batch=17.1_20260112.1, tcib_managed=true) Feb 23 03:43:12 localhost podman[97757]: 2026-02-23 08:43:12.089681687 +0000 UTC m=+0.133489995 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, maintainer=OpenStack TripleO Team, version=17.1.13, io.buildah.version=1.41.5, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:43:12 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:43:12 localhost podman[97721]: 2026-02-23 08:43:12.124079594 +0000 UTC m=+0.185451992 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, release=1766032510, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64) Feb 23 03:43:12 localhost podman[97721]: unhealthy Feb 23 03:43:12 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:43:12 localhost systemd[1]: 
71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:43:12 localhost podman[97718]: 2026-02-23 08:43:12.133329929 +0000 UTC m=+0.203324272 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, architecture=x86_64, vcs-type=git, version=17.1.13, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:43:12 localhost podman[97735]: 2026-02-23 08:43:12.084800796 +0000 UTC m=+0.141449819 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, version=17.1.13, name=rhosp-rhel9/openstack-cron) Feb 23 03:43:12 localhost podman[97718]: 2026-02-23 08:43:12.140798328 +0000 UTC m=+0.210792681 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, 
maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd) Feb 23 03:43:12 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:43:12 localhost podman[97736]: 2026-02-23 08:43:12.19617551 +0000 UTC m=+0.246684014 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, version=17.1.13, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, 
name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:43:12 localhost podman[97722]: 2026-02-23 08:43:12.234809758 +0000 UTC m=+0.298454156 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, version=17.1.13, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-type=git, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true) Feb 23 03:43:12 localhost podman[97722]: 2026-02-23 08:43:12.247837179 +0000 UTC m=+0.311481567 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, release=1766032510, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:43:12 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:43:12 localhost podman[97719]: 2026-02-23 08:43:12.289810359 +0000 UTC m=+0.359086180 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.5, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller) Feb 23 03:43:12 localhost podman[97719]: 2026-02-23 08:43:12.30578532 +0000 UTC m=+0.375061151 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, tcib_managed=true, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, container_name=ovn_controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Feb 23 03:43:12 localhost podman[97719]: unhealthy Feb 23 03:43:12 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:43:12 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:43:12 localhost podman[97720]: 2026-02-23 08:43:12.344707296 +0000 UTC m=+0.400772031 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 23 03:43:12 localhost podman[97757]: 2026-02-23 08:43:12.365031371 +0000 UTC m=+0.408839719 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi) Feb 23 03:43:12 localhost podman[97735]: 2026-02-23 08:43:12.365375852 +0000 UTC m=+0.422024845 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, release=1766032510, version=17.1.13, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Feb 23 03:43:12 localhost podman[97736]: 2026-02-23 
08:43:12.36988129 +0000 UTC m=+0.420389794 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.13, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:43:12 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:43:12 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:43:12 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:43:12 localhost podman[97720]: 2026-02-23 08:43:12.654895862 +0000 UTC m=+0.710960607 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, version=17.1.13, release=1766032510, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:43:12 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:43:13 localhost systemd[1]: tmp-crun.9auK14.mount: Deactivated successfully. Feb 23 03:43:17 localhost sshd[97912]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:43:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:43:18 localhost systemd[1]: tmp-crun.mBEnvQ.mount: Deactivated successfully. 
Feb 23 03:43:18 localhost podman[97914]: 2026-02-23 08:43:18.997966256 +0000 UTC m=+0.074840972 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, 
tcib_managed=true, release=1766032510, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:43:19 localhost podman[97914]: 2026-02-23 08:43:19.216667429 +0000 UTC m=+0.293542095 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1766032510, container_name=metrics_qdr, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, 
vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:43:19 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:43:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:43:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:43:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:43:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:43:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:43:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:43:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:43:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:43:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:43:43 localhost podman[97945]: 2026-02-23 08:43:43.081805375 +0000 UTC m=+0.143030628 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, io.buildah.version=1.41.5, tcib_managed=true, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
url=https://www.redhat.com, batch=17.1_20260112.1, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 23 03:43:43 localhost podman[97946]: 2026-02-23 08:43:43.038370421 +0000 UTC m=+0.099155411 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, batch=17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:43:43 localhost podman[97944]: 2026-02-23 08:43:43.095750524 +0000 UTC m=+0.160489875 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.5, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, container_name=ovn_controller, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13) Feb 23 03:43:43 localhost podman[97944]: 2026-02-23 08:43:43.10114403 +0000 UTC m=+0.165883411 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, batch=17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:43:43 localhost podman[97944]: unhealthy Feb 23 03:43:43 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:43:43 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:43:43 localhost podman[97952]: 2026-02-23 08:43:43.138822629 +0000 UTC m=+0.184781202 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, io.buildah.version=1.41.5, config_id=tripleo_step3, version=17.1.13, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:43:43 localhost podman[97952]: 2026-02-23 08:43:43.148717022 +0000 UTC m=+0.194675625 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510) Feb 23 03:43:43 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:43:43 localhost podman[97959]: 2026-02-23 08:43:43.056987712 +0000 UTC m=+0.106393252 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., container_name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, tcib_managed=true, vcs-type=git) Feb 23 03:43:43 localhost podman[97976]: 2026-02-23 08:43:43.201527876 +0000 UTC m=+0.237940106 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5) Feb 23 03:43:43 localhost podman[97943]: 2026-02-23 08:43:43.185835953 +0000 UTC m=+0.252636167 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-type=git, release=1766032510, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:43:43 localhost podman[97959]: 2026-02-23 08:43:43.239644668 +0000 UTC m=+0.289050168 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, version=17.1.13, architecture=x86_64, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:43:43 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:43:43 localhost podman[97971]: 2026-02-23 08:43:43.248803359 +0000 UTC m=+0.297171806 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, version=17.1.13, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, container_name=logrotate_crond) Feb 23 03:43:43 localhost podman[97971]: 2026-02-23 08:43:43.257798055 +0000 UTC m=+0.306166542 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, tcib_managed=true, batch=17.1_20260112.1, io.openshift.expose-services=, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, 
io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 23 03:43:43 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:43:43 localhost podman[97943]: 2026-02-23 08:43:43.269730262 +0000 UTC m=+0.336530456 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public) Feb 23 03:43:43 localhost podman[97976]: 2026-02-23 08:43:43.276764079 +0000 UTC m=+0.313176299 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1766032510, vcs-type=git, version=17.1.13, io.buildah.version=1.41.5, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64) Feb 23 03:43:43 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:43:43 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:43:43 localhost podman[97946]: 2026-02-23 08:43:43.328320103 +0000 UTC m=+0.389105163 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, batch=17.1_20260112.1, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:43:43 localhost podman[97946]: unhealthy Feb 23 03:43:43 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:43:43 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. 
Feb 23 03:43:43 localhost podman[97975]: 2026-02-23 08:43:43.413660257 +0000 UTC m=+0.457978920 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1766032510, io.openshift.expose-services=, distribution-scope=public, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, architecture=x86_64) Feb 23 03:43:43 localhost podman[97945]: 2026-02-23 08:43:43.444633119 +0000 UTC m=+0.505858382 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=nova_migration_target, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z) Feb 23 03:43:43 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. 
Feb 23 03:43:43 localhost podman[97975]: 2026-02-23 08:43:43.495339378 +0000 UTC m=+0.539658101 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, build-date=2026-01-12T23:07:47Z, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc.) Feb 23 03:43:43 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:43:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:43:50 localhost podman[98220]: 2026-02-23 08:43:50.006901211 +0000 UTC m=+0.082330371 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, 
config_id=tripleo_step1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=) Feb 23 03:43:50 localhost podman[98220]: 2026-02-23 08:43:50.193766766 +0000 UTC m=+0.269195866 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, url=https://www.redhat.com, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd) Feb 23 03:43:50 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:44:04 localhost sshd[98250]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:44:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:44:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:44:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:44:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:44:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:44:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:44:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:44:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:44:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:44:14 localhost podman[98262]: 2026-02-23 08:44:14.030546992 +0000 UTC m=+0.090419380 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1766032510, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, batch=17.1_20260112.1, io.openshift.expose-services=, 
version=17.1.13, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:44:14 localhost podman[98292]: 2026-02-23 08:44:14.048725301 +0000 UTC m=+0.092647529 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 23 03:44:14 localhost podman[98262]: 2026-02-23 08:44:14.055326574 +0000 UTC m=+0.115198942 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.5, vcs-type=git, tcib_managed=true, batch=17.1_20260112.1, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, container_name=nova_compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute) Feb 23 03:44:14 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:44:14 localhost podman[98255]: 2026-02-23 08:44:14.090448893 +0000 UTC m=+0.154410117 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, tcib_managed=true, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, version=17.1.13, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible) Feb 23 03:44:14 localhost podman[98292]: 2026-02-23 08:44:14.098995046 +0000 UTC m=+0.142917304 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, distribution-scope=public, architecture=x86_64, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:44:14 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:44:14 localhost podman[98264]: 2026-02-23 08:44:14.143844795 +0000 UTC m=+0.197210163 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 23 03:44:14 localhost podman[98255]: 2026-02-23 08:44:14.153851483 +0000 UTC m=+0.217812697 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, 
build-date=2026-01-12T22:56:19Z, version=17.1.13) Feb 23 03:44:14 localhost podman[98255]: unhealthy Feb 23 03:44:14 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:44:14 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:44:14 localhost podman[98252]: 2026-02-23 08:44:14.190911692 +0000 UTC m=+0.263409979 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, batch=17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=collectd, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:44:14 localhost podman[98253]: 2026-02-23 08:44:14.201501747 +0000 UTC m=+0.268881156 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vcs-type=git, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.5, architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20260112.1) Feb 23 03:44:14 localhost podman[98264]: 2026-02-23 08:44:14.224783543 +0000 UTC m=+0.278148901 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, container_name=logrotate_crond, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.buildah.version=1.41.5, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.openshift.expose-services=, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:44:14 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:44:14 localhost podman[98280]: 2026-02-23 08:44:14.234854033 +0000 UTC m=+0.286268671 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64) Feb 23 03:44:14 localhost podman[98252]: 2026-02-23 08:44:14.254098985 +0000 UTC m=+0.326597272 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, io.buildah.version=1.41.5, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, version=17.1.13, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:44:14 localhost systemd[1]: 
1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:44:14 localhost podman[98280]: 2026-02-23 08:44:14.281687692 +0000 UTC m=+0.333102380 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.5, version=17.1.13, container_name=ceilometer_agent_compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container) Feb 23 03:44:14 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:44:14 localhost podman[98254]: 2026-02-23 08:44:14.292468263 +0000 UTC m=+0.350502695 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 23 03:44:14 localhost podman[98261]: 2026-02-23 08:44:14.297836488 +0000 UTC m=+0.359033407 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.13, container_name=iscsid, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z) Feb 23 03:44:14 localhost podman[98261]: 2026-02-23 08:44:14.308819376 +0000 UTC m=+0.370016295 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, 
io.buildah.version=1.41.5, vcs-type=git, architecture=x86_64, container_name=iscsid, maintainer=OpenStack TripleO Team) Feb 23 03:44:14 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:44:14 localhost podman[98253]: 2026-02-23 08:44:14.365154658 +0000 UTC m=+0.432534047 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 23 03:44:14 localhost podman[98253]: unhealthy Feb 23 03:44:14 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:44:14 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:44:14 localhost podman[98254]: 2026-02-23 08:44:14.658935819 +0000 UTC m=+0.716970281 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, batch=17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 
nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:44:14 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:44:15 localhost systemd[1]: tmp-crun.RiZvoe.mount: Deactivated successfully. Feb 23 03:44:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:44:20 localhost podman[98451]: 2026-02-23 08:44:20.989747017 +0000 UTC m=+0.066631240 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, release=1766032510, io.openshift.expose-services=, container_name=metrics_qdr, batch=17.1_20260112.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.13, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:44:21 localhost podman[98451]: 2026-02-23 08:44:21.186918628 +0000 UTC m=+0.263802861 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, version=17.1.13, build-date=2026-01-12T22:10:14Z) Feb 23 03:44:21 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:44:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:44:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:44:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:44:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:44:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:44:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:44:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:44:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:44:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:44:44 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:44:44 localhost recover_tripleo_nova_virtqemud[98552]: 62457 Feb 23 03:44:45 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:44:45 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 23 03:44:45 localhost podman[98511]: 2026-02-23 08:44:45.095480498 +0000 UTC m=+0.132314948 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, vcs-type=git, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, build-date=2026-01-12T23:07:47Z) Feb 23 03:44:45 localhost podman[98488]: 2026-02-23 08:44:45.053802277 +0000 UTC m=+0.112513100 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5) Feb 23 03:44:45 localhost podman[98511]: 2026-02-23 08:44:45.114770251 +0000 UTC m=+0.151604761 
container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, io.buildah.version=1.41.5, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute) Feb 23 03:44:45 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:44:45 localhost podman[98503]: 2026-02-23 08:44:45.073489512 +0000 UTC m=+0.126614043 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, tcib_managed=true, config_id=tripleo_step4) Feb 23 03:44:45 localhost podman[98488]: 2026-02-23 08:44:45.13782849 +0000 UTC m=+0.196539343 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, 
distribution-scope=public, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, version=17.1.13, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 23 03:44:45 localhost podman[98488]: unhealthy Feb 23 03:44:45 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:44:45 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:44:45 localhost podman[98495]: 2026-02-23 08:44:45.196758452 +0000 UTC m=+0.249236053 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, version=17.1.13, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.buildah.version=1.41.5) Feb 23 03:44:45 localhost podman[98480]: 2026-02-23 08:44:45.244664174 +0000 UTC m=+0.312113285 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, name=rhosp-rhel9/openstack-collectd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, distribution-scope=public) Feb 23 03:44:45 localhost podman[98480]: 2026-02-23 08:44:45.254483207 +0000 UTC m=+0.321932378 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, config_id=tripleo_step3, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., container_name=collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-collectd-container) Feb 23 03:44:45 localhost podman[98503]: 2026-02-23 
08:44:45.26404951 +0000 UTC m=+0.317174051 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, container_name=logrotate_crond, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
architecture=x86_64, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-type=git, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron) Feb 23 03:44:45 localhost podman[98495]: 2026-02-23 08:44:45.277256817 +0000 UTC m=+0.329734498 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.5, managed_by=tripleo_ansible, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Feb 23 03:44:45 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:44:45 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:44:45 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:44:45 localhost podman[98481]: 2026-02-23 08:44:45.305144074 +0000 UTC m=+0.370717888 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:44:45 localhost podman[98481]: 2026-02-23 08:44:45.321818906 +0000 UTC m=+0.387392780 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, container_name=ovn_controller, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller) Feb 23 03:44:45 localhost podman[98481]: unhealthy Feb 23 03:44:45 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:44:45 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:44:45 localhost podman[98522]: 2026-02-23 08:44:45.337916711 +0000 UTC m=+0.376524696 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1766032510, tcib_managed=true, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:44:45 localhost podman[98482]: 2026-02-23 08:44:45.305318319 +0000 UTC m=+0.364750804 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, version=17.1.13, architecture=x86_64, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:44:45 localhost podman[98522]: 2026-02-23 08:44:45.367911903 +0000 UTC m=+0.406519948 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1766032510, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, io.openshift.expose-services=) Feb 23 03:44:45 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:44:45 localhost podman[98490]: 2026-02-23 08:44:45.451046909 +0000 UTC m=+0.509094481 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, container_name=iscsid, io.buildah.version=1.41.5, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid) Feb 23 03:44:45 localhost podman[98490]: 2026-02-23 08:44:45.458798058 +0000 UTC m=+0.516845700 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, architecture=x86_64, release=1766032510, vcs-type=git, config_id=tripleo_step3, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Feb 23 03:44:45 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:44:45 localhost podman[98482]: 2026-02-23 08:44:45.66673962 +0000 UTC m=+0.726172065 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, batch=17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com) Feb 23 03:44:45 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:44:46 localhost systemd[1]: tmp-crun.tGA9ML.mount: Deactivated successfully. Feb 23 03:44:49 localhost sshd[98739]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:44:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:44:52 localhost systemd[1]: tmp-crun.k1fGVE.mount: Deactivated successfully. 
Feb 23 03:44:52 localhost podman[98756]: 2026-02-23 08:44:52.011800905 +0000 UTC m=+0.084136277 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, batch=17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Feb 23 03:44:52 localhost podman[98756]: 2026-02-23 08:44:52.229852058 +0000 UTC m=+0.302187470 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, batch=17.1_20260112.1) Feb 23 03:44:52 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:45:07 localhost sshd[98788]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:45:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:45:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:45:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:45:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:45:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:45:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:45:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:45:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:45:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:45:16 localhost podman[98809]: 2026-02-23 08:45:16.032967589 +0000 UTC m=+0.088558204 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.expose-services=, version=17.1.13) Feb 23 03:45:16 localhost podman[98809]: 2026-02-23 08:45:16.05871377 +0000 UTC m=+0.114304355 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2026-01-12T23:32:04Z, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, version=17.1.13, vcs-type=git, config_id=tripleo_step5, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510) Feb 23 03:45:16 localhost podman[98810]: 2026-02-23 08:45:16.136578164 +0000 UTC m=+0.180367976 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, distribution-scope=public) Feb 23 03:45:16 localhost podman[98810]: 2026-02-23 08:45:16.148274664 +0000 UTC m=+0.192064516 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, distribution-scope=public, url=https://www.redhat.com, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, 
description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Feb 23 03:45:16 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated 
successfully. Feb 23 03:45:16 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:45:16 localhost podman[98789]: 2026-02-23 08:45:16.070861593 +0000 UTC m=+0.144969147 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.5, vendor=Red Hat, Inc., config_id=tripleo_step3, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, release=1766032510) Feb 23 03:45:16 localhost podman[98829]: 2026-02-23 08:45:16.238905729 +0000 UTC m=+0.285303842 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, managed_by=tripleo_ansible, 
maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5) Feb 23 03:45:16 localhost podman[98797]: 2026-02-23 08:45:16.100164725 +0000 UTC m=+0.156461891 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:45:16 localhost podman[98789]: 2026-02-23 08:45:16.257950385 +0000 UTC m=+0.332058009 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1766032510, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, tcib_managed=true, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:45:16 localhost podman[98797]: 2026-02-23 08:45:16.283778849 +0000 UTC m=+0.340076105 
container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git) Feb 23 03:45:16 localhost podman[98797]: unhealthy Feb 23 03:45:16 localhost podman[98798]: 2026-02-23 08:45:16.294456107 +0000 UTC m=+0.348526186 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.buildah.version=1.41.5, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, tcib_managed=true, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:45:16 localhost podman[98829]: 2026-02-23 08:45:16.295928622 +0000 UTC m=+0.342326735 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, 
name=ceilometer_agent_ipmi, release=1766032510, version=17.1.13, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 23 03:45:16 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:45:16 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:45:16 localhost podman[98798]: 2026-02-23 08:45:16.330590408 +0000 UTC m=+0.384660477 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, batch=17.1_20260112.1, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid) Feb 23 03:45:16 localhost podman[98790]: 2026-02-23 08:45:16.341875115 +0000 UTC m=+0.407349904 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1766032510, build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, 
cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Feb 23 03:45:16 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:45:16 localhost podman[98790]: 2026-02-23 08:45:16.356774033 +0000 UTC m=+0.422248762 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, container_name=ovn_controller, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, 
release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z) Feb 23 03:45:16 localhost podman[98790]: unhealthy Feb 23 03:45:16 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:45:16 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:45:16 localhost podman[98794]: 2026-02-23 08:45:16.343877086 +0000 UTC m=+0.407604601 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, architecture=x86_64, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4) Feb 23 03:45:16 localhost podman[98818]: 2026-02-23 08:45:16.054581243 +0000 UTC m=+0.095190358 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.5, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.13, vcs-type=git, container_name=ceilometer_agent_compute, release=1766032510, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:45:16 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:45:16 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:45:16 localhost podman[98818]: 2026-02-23 08:45:16.499225132 +0000 UTC m=+0.539834267 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com) Feb 23 03:45:16 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:45:16 localhost podman[98794]: 2026-02-23 08:45:16.722823616 +0000 UTC m=+0.786551141 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, architecture=x86_64) Feb 23 03:45:16 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:45:17 localhost systemd[1]: tmp-crun.Gr6xCl.mount: Deactivated successfully. Feb 23 03:45:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:45:22 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:45:22 localhost recover_tripleo_nova_virtqemud[98996]: 62457 Feb 23 03:45:22 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:45:22 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 23 03:45:22 localhost podman[98989]: 2026-02-23 08:45:22.988418928 +0000 UTC m=+0.070297263 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team) Feb 23 03:45:23 localhost podman[98989]: 2026-02-23 08:45:23.182852056 +0000 UTC m=+0.264730391 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.openshift.expose-services=, version=17.1.13, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.5) Feb 23 03:45:23 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:45:39 localhost sshd[99021]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:45:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:45:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:45:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:45:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:45:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:45:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:45:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:45:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:45:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:45:47 localhost systemd[1]: tmp-crun.9wAkB5.mount: Deactivated successfully. 
Feb 23 03:45:47 localhost podman[99036]: 2026-02-23 08:45:47.034090247 +0000 UTC m=+0.088196533 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, distribution-scope=public, version=17.1.13, build-date=2026-01-12T22:56:19Z, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, url=https://www.redhat.com, release=1766032510, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4) Feb 23 03:45:47 localhost podman[99069]: 2026-02-23 08:45:47.097860157 +0000 UTC m=+0.135769775 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, release=1766032510, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, vendor=Red Hat, Inc.) 
Feb 23 03:45:47 localhost podman[99063]: 2026-02-23 08:45:47.077565303 +0000 UTC m=+0.121521727 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:45:47 localhost podman[99053]: 2026-02-23 08:45:47.138668841 +0000 UTC m=+0.182808650 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:45:47 localhost podman[99063]: 2026-02-23 08:45:47.160618446 +0000 UTC m=+0.204574810 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
config_id=tripleo_step4, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:45:47 localhost podman[99040]: 2026-02-23 08:45:47.063565832 +0000 UTC m=+0.119608567 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20260112.1, version=17.1.13, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, 
distribution-scope=public, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vcs-type=git, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:45:47 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:45:47 localhost podman[99046]: 2026-02-23 08:45:47.1676001 +0000 UTC m=+0.219706344 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, batch=17.1_20260112.1, version=17.1.13, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 23 03:45:47 localhost podman[99023]: 2026-02-23 08:45:47.022412448 +0000 UTC m=+0.092802254 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, container_name=collectd, tcib_managed=true, release=1766032510, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:45:47 localhost podman[99036]: 2026-02-23 08:45:47.180757255 +0000 UTC m=+0.234863531 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1766032510, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, 
io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, 
version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 23 03:45:47 localhost podman[99036]: unhealthy Feb 23 03:45:47 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:45:47 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:45:47 localhost podman[99046]: 2026-02-23 08:45:47.189165754 +0000 UTC m=+0.241272008 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, 
com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, version=17.1.13, distribution-scope=public, 
cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:45:47 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:45:47 localhost podman[99040]: 2026-02-23 08:45:47.196665064 +0000 UTC m=+0.252707789 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, vcs-type=git, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, url=https://www.redhat.com, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, batch=17.1_20260112.1) Feb 23 03:45:47 localhost podman[99023]: 2026-02-23 08:45:47.205685891 +0000 UTC m=+0.276075717 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, release=1766032510, 
build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team) Feb 23 03:45:47 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:45:47 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:45:47 localhost podman[99069]: 2026-02-23 08:45:47.247517267 +0000 UTC m=+0.285426895 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, release=1766032510, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Feb 23 03:45:47 localhost podman[99053]: 2026-02-23 08:45:47.247910609 +0000 UTC m=+0.292050388 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 
'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, io.openshift.expose-services=) Feb 23 03:45:47 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:45:47 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:45:47 localhost podman[99029]: 2026-02-23 08:45:47.386731957 +0000 UTC m=+0.448035664 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, config_id=tripleo_step4, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:45:47 localhost podman[99024]: 2026-02-23 08:45:47.421590929 +0000 UTC m=+0.490106778 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:45:47 localhost podman[99024]: 2026-02-23 08:45:47.444205114 +0000 UTC m=+0.512721003 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, container_name=ovn_controller, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:45:47 localhost podman[99024]: unhealthy Feb 23 03:45:47 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:45:47 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:45:47 localhost podman[99029]: 2026-02-23 08:45:47.760793666 +0000 UTC m=+0.822097323 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, 
batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, container_name=nova_migration_target, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z) Feb 23 03:45:47 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:45:51 localhost sshd[99279]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:45:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:45:54 localhost podman[99296]: 2026-02-23 08:45:54.025862624 +0000 UTC m=+0.096384465 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vendor=Red Hat, Inc., io.buildah.version=1.41.5, tcib_managed=true, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public) Feb 23 03:45:54 localhost podman[99296]: 2026-02-23 08:45:54.204885707 +0000 UTC m=+0.275407568 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, tcib_managed=true, architecture=x86_64, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5) Feb 23 03:45:54 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:46:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:46:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:46:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:46:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:46:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:46:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:46:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:46:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:46:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:46:18 localhost systemd[1]: tmp-crun.qt35yI.mount: Deactivated successfully. Feb 23 03:46:18 localhost podman[99348]: 2026-02-23 08:46:18.04448628 +0000 UTC m=+0.094202166 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, release=1766032510, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, 
distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, url=https://www.redhat.com) Feb 23 03:46:18 localhost podman[99334]: 2026-02-23 08:46:18.044581103 +0000 UTC m=+0.097606521 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20260112.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 23 03:46:18 localhost podman[99348]: 2026-02-23 08:46:18.105382992 +0000 UTC m=+0.155098888 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, container_name=logrotate_crond, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public) Feb 23 03:46:18 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:46:18 localhost podman[99328]: 2026-02-23 08:46:18.07633617 +0000 UTC m=+0.138438788 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, version=17.1.13, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:46:18 localhost podman[99340]: 2026-02-23 08:46:18.096997454 +0000 UTC m=+0.145148853 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1766032510, io.buildah.version=1.41.5, container_name=iscsid, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:46:18 localhost podman[99349]: 2026-02-23 08:46:18.158583718 +0000 UTC m=+0.198747521 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1766032510, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, build-date=2026-01-12T23:07:47Z, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, 
name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:46:18 localhost podman[99356]: 2026-02-23 08:46:18.140201273 +0000 UTC m=+0.184629747 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, architecture=x86_64, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510) Feb 23 03:46:18 localhost podman[99327]: 2026-02-23 08:46:18.194891864 +0000 UTC m=+0.261023265 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.expose-services=, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, container_name=ovn_controller, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Feb 23 03:46:18 localhost podman[99327]: 2026-02-23 08:46:18.206652165 +0000 UTC m=+0.272783566 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Feb 23 03:46:18 localhost podman[99327]: unhealthy Feb 23 03:46:18 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:46:18 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:46:18 localhost podman[99349]: 2026-02-23 08:46:18.2155635 +0000 UTC m=+0.255727303 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z) Feb 23 03:46:18 localhost podman[99356]: 2026-02-23 08:46:18.223764881 +0000 UTC m=+0.268193335 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, release=1766032510, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.buildah.version=1.41.5, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:46:18 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:46:18 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:46:18 localhost podman[99347]: 2026-02-23 08:46:18.249574425 +0000 UTC m=+0.299857239 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, container_name=nova_compute, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:46:18 localhost podman[99326]: 2026-02-23 08:46:18.209167843 +0000 UTC m=+0.276100869 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_id=tripleo_step3, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true) Feb 23 03:46:18 localhost podman[99347]: 2026-02-23 08:46:18.265656309 +0000 UTC m=+0.315939133 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, release=1766032510, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, distribution-scope=public) Feb 23 03:46:18 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:46:18 localhost podman[99334]: 2026-02-23 08:46:18.282490767 +0000 UTC m=+0.335516155 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-type=git, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.expose-services=) Feb 23 03:46:18 localhost podman[99334]: unhealthy Feb 23 03:46:18 localhost podman[99326]: 2026-02-23 08:46:18.29169275 +0000 UTC m=+0.358625746 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, distribution-scope=public, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, 
com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:46:18 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:46:18 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:46:18 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:46:18 localhost podman[99340]: 2026-02-23 08:46:18.334132595 +0000 UTC m=+0.382283944 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, container_name=iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.5, batch=17.1_20260112.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid) Feb 23 03:46:18 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:46:18 localhost podman[99328]: 2026-02-23 08:46:18.393783138 +0000 UTC m=+0.455885756 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., version=17.1.13, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:46:18 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:46:20 localhost kernel: DROPPING: IN=vlan20 OUT= MACSRC=1a:e1:bf:c8:40:06 MACDST=76:31:9b:68:66:c0 MACPROTO=0800 SRC=172.17.0.105 DST=172.17.0.107 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6642 DPT=55490 SEQ=0 ACK=1359834780 WINDOW=0 RES=0x00 ACK RST URGP=0 Feb 23 03:46:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:46:24 localhost systemd[1]: tmp-crun.lACrCB.mount: Deactivated successfully. 
Feb 23 03:46:25 localhost podman[99521]: 2026-02-23 08:46:25.002780948 +0000 UTC m=+0.078564576 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, 
io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, architecture=x86_64, release=1766032510, tcib_managed=true, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:46:25 localhost podman[99521]: 2026-02-23 08:46:25.166715918 +0000 UTC m=+0.242499556 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, release=1766032510, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team) Feb 23 03:46:25 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:46:28 localhost sshd[99550]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:46:33 localhost systemd[1]: session-28.scope: Deactivated successfully. Feb 23 03:46:33 localhost systemd[1]: session-28.scope: Consumed 6min 54.995s CPU time. Feb 23 03:46:33 localhost systemd-logind[759]: Session 28 logged out. Waiting for processes to exit. Feb 23 03:46:33 localhost systemd-logind[759]: Removed session 28. 
Feb 23 03:46:43 localhost systemd[1]: Stopping User Manager for UID 1003... Feb 23 03:46:43 localhost systemd[36097]: Activating special unit Exit the Session... Feb 23 03:46:43 localhost systemd[36097]: Removed slice User Background Tasks Slice. Feb 23 03:46:43 localhost systemd[36097]: Stopped target Main User Target. Feb 23 03:46:43 localhost systemd[36097]: Stopped target Basic System. Feb 23 03:46:43 localhost systemd[36097]: Stopped target Paths. Feb 23 03:46:43 localhost systemd[36097]: Stopped target Sockets. Feb 23 03:46:43 localhost systemd[36097]: Stopped target Timers. Feb 23 03:46:43 localhost systemd[36097]: Stopped Mark boot as successful after the user session has run 2 minutes. Feb 23 03:46:43 localhost systemd[36097]: Stopped Daily Cleanup of User's Temporary Directories. Feb 23 03:46:43 localhost systemd[36097]: Closed D-Bus User Message Bus Socket. Feb 23 03:46:43 localhost systemd[36097]: Stopped Create User's Volatile Files and Directories. Feb 23 03:46:43 localhost systemd[36097]: Removed slice User Application Slice. Feb 23 03:46:43 localhost systemd[36097]: Reached target Shutdown. Feb 23 03:46:43 localhost systemd[36097]: Finished Exit the Session. Feb 23 03:46:43 localhost systemd[36097]: Reached target Exit the Session. Feb 23 03:46:43 localhost systemd[1]: user@1003.service: Deactivated successfully. Feb 23 03:46:43 localhost systemd[1]: Stopped User Manager for UID 1003. Feb 23 03:46:43 localhost systemd[1]: user@1003.service: Consumed 2.923s CPU time, read 0B from disk, written 7.0K to disk. Feb 23 03:46:43 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Feb 23 03:46:43 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Feb 23 03:46:43 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Feb 23 03:46:43 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Feb 23 03:46:43 localhost systemd[1]: Removed slice User Slice of UID 1003. 
Feb 23 03:46:43 localhost systemd[1]: user-1003.slice: Consumed 6min 57.951s CPU time. Feb 23 03:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:46:49 localhost systemd[1]: tmp-crun.9pn95M.mount: Deactivated successfully. Feb 23 03:46:49 localhost systemd[1]: tmp-crun.SCMSYn.mount: Deactivated successfully. 
Feb 23 03:46:49 localhost podman[99554]: 2026-02-23 08:46:49.038184928 +0000 UTC m=+0.106583867 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, container_name=ovn_controller, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, distribution-scope=public, tcib_managed=true, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:46:49 localhost podman[99567]: 2026-02-23 08:46:49.08852134 +0000 UTC m=+0.138737818 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, tcib_managed=true, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, distribution-scope=public) Feb 23 03:46:49 localhost podman[99598]: 2026-02-23 08:46:49.116698319 +0000 UTC m=+0.151483012 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, batch=17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, version=17.1.13) Feb 23 03:46:49 localhost podman[99567]: 2026-02-23 08:46:49.121796616 +0000 UTC m=+0.172013074 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, distribution-scope=public, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack 
Platform 17.1 iscsid, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:46:49 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:46:49 localhost podman[99598]: 2026-02-23 08:46:49.161985705 +0000 UTC m=+0.196770378 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git) Feb 23 03:46:49 localhost podman[99573]: 2026-02-23 08:46:49.018094849 +0000 UTC m=+0.070909667 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.expose-services=, release=1766032510, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, vcs-type=git, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, tcib_managed=true, 
managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, 
konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com) Feb 23 03:46:49 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:46:49 localhost podman[99573]: 2026-02-23 08:46:49.201103821 +0000 UTC m=+0.253918739 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, vcs-type=git, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:46:49 localhost podman[99594]: 2026-02-23 08:46:49.206187128 +0000 UTC m=+0.241137946 container health_status 
a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, vcs-type=git, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, 
managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container) Feb 23 03:46:49 localhost podman[99583]: 2026-02-23 08:46:49.21046173 +0000 UTC m=+0.253230589 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13) Feb 23 03:46:49 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:46:49 localhost podman[99583]: 2026-02-23 08:46:49.219622293 +0000 UTC m=+0.262391172 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, 
maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:46:49 localhost podman[99553]: 2026-02-23 08:46:49.067529724 +0000 UTC m=+0.137393158 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.13, architecture=x86_64, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, container_name=collectd, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:46:49 localhost podman[99566]: 2026-02-23 08:46:49.24582699 +0000 UTC m=+0.303256531 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20260112.1, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, architecture=x86_64, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Feb 23 03:46:49 localhost podman[99566]: 2026-02-23 08:46:49.257656665 +0000 UTC m=+0.315086216 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, tcib_managed=true, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com) Feb 23 03:46:49 localhost podman[99566]: unhealthy Feb 23 03:46:49 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:46:49 localhost systemd[1]: 
71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:46:49 localhost podman[99594]: 2026-02-23 08:46:49.279821438 +0000 UTC m=+0.314772276 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 23 03:46:49 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:46:49 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:46:49 localhost podman[99553]: 2026-02-23 08:46:49.297204025 +0000 UTC m=+0.367067489 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, release=1766032510, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:46:49 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:46:49 localhost podman[99554]: 2026-02-23 08:46:49.32171826 +0000 UTC m=+0.390117259 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, version=17.1.13, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z) Feb 23 03:46:49 localhost podman[99554]: unhealthy Feb 23 03:46:49 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:46:49 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:46:49 localhost podman[99555]: 2026-02-23 08:46:49.412251532 +0000 UTC m=+0.473446448 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.5, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, release=1766032510, batch=17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Feb 23 03:46:49 localhost podman[99555]: 2026-02-23 08:46:49.78194317 +0000 UTC m=+0.843138076 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, release=1766032510, vcs-type=git, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64) Feb 23 03:46:49 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:46:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:46:56 localhost podman[99825]: 2026-02-23 08:46:56.007294861 +0000 UTC m=+0.082911168 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, version=17.1.13, architecture=x86_64, io.buildah.version=1.41.5, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510) Feb 23 03:46:56 localhost podman[99825]: 2026-02-23 08:46:56.208847076 +0000 UTC m=+0.284463343 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Feb 23 03:46:56 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:47:05 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:47:05 localhost recover_tripleo_nova_virtqemud[99856]: 62457 Feb 23 03:47:05 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:47:05 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 23 03:47:18 localhost sshd[99857]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:47:20 localhost systemd[1]: tmp-crun.cwA4ji.mount: Deactivated successfully. 
Feb 23 03:47:20 localhost podman[99880]: 2026-02-23 08:47:20.109624516 +0000 UTC m=+0.153844684 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.13, managed_by=tripleo_ansible, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, release=1766032510, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.5, config_id=tripleo_step5) Feb 23 03:47:20 localhost podman[99891]: 2026-02-23 08:47:20.064445272 +0000 UTC m=+0.107319229 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_id=tripleo_step4, 
io.buildah.version=1.41.5, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, container_name=ceilometer_agent_ipmi) Feb 23 03:47:20 localhost podman[99862]: 2026-02-23 08:47:20.042110824 +0000 UTC m=+0.101912693 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, architecture=x86_64) Feb 23 03:47:20 localhost podman[99891]: 2026-02-23 08:47:20.147765531 +0000 UTC m=+0.190639508 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, io.buildah.version=1.41.5) Feb 23 03:47:20 localhost podman[99885]: 2026-02-23 08:47:20.153270051 +0000 UTC m=+0.203076682 container health_status 
8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.13, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.5, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
io.openshift.expose-services=, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Feb 23 03:47:20 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:47:20 localhost podman[99885]: 2026-02-23 08:47:20.165692915 +0000 UTC m=+0.215499526 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, 
build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, architecture=x86_64, container_name=logrotate_crond, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., url=https://www.redhat.com) Feb 23 03:47:20 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:47:20 localhost podman[99859]: 2026-02-23 08:47:20.086515933 +0000 UTC m=+0.154668850 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510) Feb 23 03:47:20 localhost podman[99873]: 2026-02-23 08:47:20.212051573 +0000 UTC m=+0.259652226 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, container_name=iscsid, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com) Feb 23 03:47:20 localhost podman[99861]: 2026-02-23 08:47:20.266497443 +0000 UTC m=+0.330477771 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, 
url=https://www.redhat.com, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target) Feb 23 03:47:20 localhost podman[99880]: 2026-02-23 08:47:20.288861282 +0000 UTC m=+0.333081510 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.13, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, distribution-scope=public, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1) Feb 23 03:47:20 localhost podman[99873]: 2026-02-23 08:47:20.295634621 +0000 UTC m=+0.343235244 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, 
config_id=tripleo_step3, url=https://www.redhat.com, architecture=x86_64, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, release=1766032510, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:47:20 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:47:20 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:47:20 localhost podman[99887]: 2026-02-23 08:47:20.311403957 +0000 UTC m=+0.354291815 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, io.buildah.version=1.41.5, release=1766032510, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:47:20 localhost podman[99862]: 2026-02-23 08:47:20.329294258 +0000 UTC m=+0.389096177 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, 
distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=ovn_metadata_agent, release=1766032510, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:47:20 localhost podman[99862]: unhealthy Feb 23 03:47:20 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:47:20 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:47:20 localhost podman[99887]: 2026-02-23 08:47:20.370888011 +0000 UTC m=+0.413775899 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-type=git, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z) Feb 23 03:47:20 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:47:20 localhost podman[99860]: 2026-02-23 08:47:20.416531298 +0000 UTC m=+0.482669572 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=ovn_controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ovn-controller, version=17.1.13, vendor=Red Hat, Inc., distribution-scope=public, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.expose-services=) Feb 23 03:47:20 localhost podman[99859]: 2026-02-23 08:47:20.421203102 +0000 UTC m=+0.489356079 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.buildah.version=1.41.5, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, tcib_managed=true, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:47:20 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:47:20 localhost podman[99860]: 2026-02-23 08:47:20.463571559 +0000 UTC m=+0.529709823 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, 
managed_by=tripleo_ansible, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller) Feb 23 03:47:20 localhost podman[99860]: unhealthy Feb 23 03:47:20 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:47:20 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:47:20 localhost podman[99861]: 2026-02-23 08:47:20.660876341 +0000 UTC m=+0.724856679 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Feb 23 03:47:20 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:47:21 localhost systemd[1]: tmp-crun.X9bxKu.mount: Deactivated successfully. Feb 23 03:47:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:47:27 localhost podman[100050]: 2026-02-23 08:47:27.00840013 +0000 UTC m=+0.080412241 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, vcs-type=git, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, release=1766032510, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:47:27 localhost podman[100050]: 2026-02-23 08:47:27.188909316 +0000 UTC m=+0.260921487 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.13, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 23 03:47:27 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:47:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:47:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:47:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:47:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:47:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:47:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:47:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:47:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:47:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:47:51 localhost podman[100116]: 2026-02-23 08:47:51.092452583 +0000 UTC m=+0.082819644 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1766032510, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, distribution-scope=public, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13) Feb 23 03:47:51 localhost podman[100116]: 2026-02-23 08:47:51.118717114 +0000 UTC m=+0.109084195 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.13, 
architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:47:51 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:47:51 localhost podman[100100]: 2026-02-23 08:47:51.137120191 +0000 UTC m=+0.141013249 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, build-date=2026-01-12T23:32:04Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, 
com.redhat.component=openstack-nova-compute-container) Feb 23 03:47:51 localhost podman[100100]: 2026-02-23 08:47:51.156855419 +0000 UTC m=+0.160748427 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, tcib_managed=true, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, release=1766032510, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:47:51 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:47:51 localhost podman[100080]: 2026-02-23 08:47:51.071349523 +0000 UTC m=+0.092592796 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, tcib_managed=true, vcs-type=git, config_id=tripleo_step3, container_name=collectd, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, batch=17.1_20260112.1, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:47:51 localhost podman[100080]: 2026-02-23 08:47:51.204945831 +0000 UTC m=+0.226189155 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-collectd-container, version=17.1.13, release=1766032510, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd) Feb 23 03:47:51 localhost systemd[1]: tmp-crun.gyxo9g.mount: Deactivated successfully. 
Feb 23 03:47:51 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:47:51 localhost podman[100107]: 2026-02-23 08:47:51.222125112 +0000 UTC m=+0.212774361 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, release=1766032510, io.buildah.version=1.41.5, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:47:51 localhost podman[100081]: 2026-02-23 08:47:51.25224297 +0000 UTC m=+0.265819947 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, 
config_id=tripleo_step4, container_name=ovn_controller, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, build-date=2026-01-12T22:36:40Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13) Feb 23 03:47:51 localhost podman[100107]: 2026-02-23 08:47:51.257784021 +0000 UTC m=+0.248433270 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 
cron, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:47:51 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:47:51 localhost podman[100081]: 2026-02-23 08:47:51.295103162 +0000 UTC m=+0.308680109 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, io.buildah.version=1.41.5, container_name=ovn_controller, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:47:51 localhost podman[100081]: unhealthy Feb 23 03:47:51 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:47:51 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:47:51 localhost podman[100120]: 2026-02-23 08:47:51.335698353 +0000 UTC m=+0.316160069 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:47:51 localhost podman[100087]: 2026-02-23 08:47:51.298037843 +0000 UTC m=+0.307404200 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, build-date=2026-01-12T23:32:04Z, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1766032510, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:47:51 localhost podman[100120]: 
2026-02-23 08:47:51.389708908 +0000 UTC m=+0.370170694 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20260112.1, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, 
distribution-scope=public, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 23 03:47:51 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:47:51 localhost podman[100088]: 2026-02-23 08:47:51.460473241 +0000 UTC m=+0.456902268 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.5, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., distribution-scope=public, build-date=2026-01-12T22:56:19Z, vcs-type=git, container_name=ovn_metadata_agent, architecture=x86_64, tcib_managed=true) Feb 23 03:47:51 localhost podman[100091]: 2026-02-23 08:47:51.463928057 +0000 UTC m=+0.459322103 container health_status 
828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, build-date=2026-01-12T22:34:43Z, vcs-type=git, io.buildah.version=1.41.5, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true) Feb 23 03:47:51 localhost podman[100091]: 2026-02-23 08:47:51.473977827 +0000 UTC m=+0.469371893 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, build-date=2026-01-12T22:34:43Z, tcib_managed=true, url=https://www.redhat.com, version=17.1.13) Feb 23 03:47:51 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:47:51 localhost podman[100088]: 2026-02-23 08:47:51.49776888 +0000 UTC m=+0.494197847 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, tcib_managed=true, 
name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, url=https://www.redhat.com, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1) Feb 23 03:47:51 localhost podman[100088]: unhealthy Feb 23 03:47:51 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:47:51 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. 
Feb 23 03:47:51 localhost podman[100087]: 2026-02-23 08:47:51.68392807 +0000 UTC m=+0.693294427 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:47:51 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:47:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:47:58 localhost podman[100349]: 2026-02-23 08:47:58.011194443 +0000 UTC m=+0.081635688 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, release=1766032510, version=17.1.13, distribution-scope=public, vcs-type=git, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:47:58 localhost podman[100349]: 2026-02-23 08:47:58.191984828 +0000 UTC m=+0.262426133 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step1, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13) Feb 23 03:47:58 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:48:05 localhost sshd[100379]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:48:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:48:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:48:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:48:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:48:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:48:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:48:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:48:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:48:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:48:22 localhost podman[100382]: 2026-02-23 08:48:22.166781415 +0000 UTC m=+0.229255360 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, version=17.1.13, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, url=https://www.redhat.com, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510) Feb 23 03:48:22 localhost podman[100384]: 2026-02-23 08:48:22.069789014 +0000 UTC m=+0.128592146 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4) Feb 23 03:48:22 localhost podman[100383]: 2026-02-23 08:48:22.091689089 +0000 UTC m=+0.150334176 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, 
io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, architecture=x86_64, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, 
managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4) Feb 23 03:48:22 localhost podman[100384]: 2026-02-23 08:48:22.202808906 +0000 UTC m=+0.261612028 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.openshift.expose-services=, tcib_managed=true, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:48:22 localhost podman[100384]: unhealthy Feb 23 03:48:22 localhost podman[100381]: 2026-02-23 08:48:22.209504232 +0000 UTC m=+0.274331090 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, container_name=collectd, 
distribution-scope=public, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 23 03:48:22 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:48:22 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:48:22 localhost podman[100381]: 2026-02-23 08:48:22.220762549 +0000 UTC m=+0.285589457 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-type=git, version=17.1.13, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1766032510) Feb 23 03:48:22 localhost podman[100394]: 2026-02-23 08:48:22.2593882 +0000 UTC m=+0.312327501 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, version=17.1.13, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Feb 23 03:48:22 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:48:22 localhost podman[100398]: 2026-02-23 08:48:22.322787314 +0000 UTC m=+0.367444210 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-type=git, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team) Feb 23 03:48:22 localhost podman[100408]: 2026-02-23 08:48:22.134980914 +0000 UTC m=+0.177949827 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:48:22 localhost podman[100394]: 2026-02-23 08:48:22.338921282 +0000 UTC m=+0.391860513 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, vcs-type=git, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, tcib_managed=true, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.13, io.openshift.expose-services=, container_name=nova_compute, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:48:22 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:48:22 localhost podman[100398]: 2026-02-23 08:48:22.359783886 +0000 UTC m=+0.404440792 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=logrotate_crond, distribution-scope=public, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:48:22 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:48:22 localhost podman[100382]: 2026-02-23 08:48:22.387471409 +0000 UTC m=+0.449945404 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, batch=17.1_20260112.1, config_id=tripleo_step4, container_name=ovn_controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Feb 23 03:48:22 localhost podman[100415]: 2026-02-23 08:48:22.406092653 +0000 UTC m=+0.442452203 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, distribution-scope=public, version=17.1.13, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:48:22 localhost podman[100408]: 2026-02-23 08:48:22.414776501 +0000 UTC m=+0.457745384 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, architecture=x86_64, version=17.1.13, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, release=1766032510, distribution-scope=public) Feb 23 03:48:22 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:48:22 localhost podman[100415]: 2026-02-23 08:48:22.435743577 +0000 UTC m=+0.472103137 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, version=17.1.13, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 23 03:48:22 localhost podman[100382]: unhealthy Feb 23 03:48:22 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:48:22 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:48:22 localhost podman[100383]: 2026-02-23 08:48:22.454742343 +0000 UTC m=+0.513387430 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., batch=17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:48:22 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:48:22 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:48:22 localhost podman[100385]: 2026-02-23 08:48:22.414464751 +0000 UTC m=+0.454895756 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, 
url=https://www.redhat.com, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 23 03:48:22 localhost podman[100385]: 2026-02-23 08:48:22.675917703 +0000 UTC m=+0.716348758 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, 
io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:48:22 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:48:23 localhost systemd[1]: tmp-crun.chzGkV.mount: Deactivated successfully. Feb 23 03:48:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:48:28 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:48:28 localhost recover_tripleo_nova_virtqemud[100587]: 62457 Feb 23 03:48:28 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:48:28 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 23 03:48:28 localhost podman[100581]: 2026-02-23 08:48:28.986893783 +0000 UTC m=+0.062362584 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step1, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, 
io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 23 03:48:29 localhost podman[100581]: 2026-02-23 08:48:29.179815321 +0000 UTC m=+0.255284132 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 
'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:48:29 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:48:42 localhost sshd[100613]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:48:52 localhost sshd[100615]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:48:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:48:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:48:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:48:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:48:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:48:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:48:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:48:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:48:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:48:53 localhost podman[100618]: 2026-02-23 08:48:53.042454548 +0000 UTC m=+0.106969209 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, distribution-scope=public, architecture=x86_64, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_id=tripleo_step4, container_name=ovn_controller, url=https://www.redhat.com, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1) Feb 23 03:48:53 localhost podman[100618]: 2026-02-23 08:48:53.054561602 +0000 UTC m=+0.119076293 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, url=https://www.redhat.com, distribution-scope=public, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, build-date=2026-01-12T22:36:40Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:48:53 localhost podman[100618]: unhealthy Feb 23 03:48:53 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:48:53 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:48:53 localhost podman[100642]: 2026-02-23 08:48:53.092969345 +0000 UTC m=+0.135420556 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, 
architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.13, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:48:53 localhost podman[100620]: 2026-02-23 08:48:53.102044876 +0000 UTC m=+0.145310262 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, tcib_managed=true, version=17.1.13, build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, release=1766032510) Feb 23 03:48:53 localhost podman[100642]: 2026-02-23 08:48:53.114780508 +0000 UTC m=+0.157231679 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:48:53 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:48:53 localhost podman[100620]: 2026-02-23 08:48:53.14174233 +0000 UTC m=+0.185007726 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent) Feb 23 03:48:53 localhost podman[100620]: unhealthy Feb 23 03:48:53 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:48:53 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. 
Feb 23 03:48:53 localhost podman[100617]: 2026-02-23 08:48:53.15182566 +0000 UTC m=+0.221898503 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd) Feb 23 03:48:53 localhost podman[100617]: 2026-02-23 08:48:53.162647334 +0000 UTC m=+0.232720197 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, version=17.1.13, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat 
OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Feb 23 03:48:53 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:48:53 localhost podman[100619]: 2026-02-23 08:48:53.20793969 +0000 UTC m=+0.264135055 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.13, build-date=2026-01-12T23:32:04Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:48:53 localhost podman[100644]: 2026-02-23 08:48:53.251947877 +0000 UTC m=+0.282327826 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible) Feb 23 03:48:53 localhost podman[100644]: 2026-02-23 08:48:53.274775051 +0000 UTC m=+0.305154990 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.5, version=17.1.13, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, managed_by=tripleo_ansible, 
container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:48:53 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:48:53 localhost podman[100638]: 2026-02-23 08:48:53.353835348 +0000 UTC m=+0.404278555 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 
cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, architecture=x86_64, distribution-scope=public) Feb 23 03:48:53 localhost podman[100638]: 2026-02-23 08:48:53.387771195 +0000 UTC m=+0.438214442 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1766032510, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:48:53 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:48:53 localhost podman[100629]: 2026-02-23 08:48:53.449976553 +0000 UTC m=+0.501031399 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1766032510, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, distribution-scope=public, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., architecture=x86_64) Feb 23 03:48:53 localhost podman[100629]: 2026-02-23 08:48:53.471329551 +0000 UTC m=+0.522384407 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, distribution-scope=public, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20260112.1, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute) Feb 23 03:48:53 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:48:53 localhost podman[100627]: 2026-02-23 08:48:53.515381959 +0000 UTC m=+0.558296604 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:48:53 localhost podman[100627]: 2026-02-23 08:48:53.524715727 +0000 UTC m=+0.567630412 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T22:34:43Z, release=1766032510, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, batch=17.1_20260112.1, 
name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git) Feb 23 03:48:53 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:48:53 localhost podman[100619]: 2026-02-23 08:48:53.554708022 +0000 UTC m=+0.610903367 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, version=17.1.13) Feb 23 03:48:53 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:48:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:49:00 localhost podman[100890]: 2026-02-23 08:49:00.009838886 +0000 UTC m=+0.084483205 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, 
maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:49:00 localhost podman[100890]: 2026-02-23 08:49:00.223733982 +0000 UTC m=+0.298378231 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, tcib_managed=true, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:49:00 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 03:49:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 03:49:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 5036 writes, 22K keys, 5036 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5036 writes, 634 syncs, 7.94 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 03:49:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 03:49:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 5650 writes, 24K keys, 5650 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5650 writes, 811 syncs, 6.97 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 03:49:06 localhost sshd[100919]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:49:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:49:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:49:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:49:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:49:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:49:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:49:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:49:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:49:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:49:24 localhost podman[100921]: 2026-02-23 08:49:24.022532777 +0000 UTC m=+0.089676936 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, architecture=x86_64) Feb 23 03:49:24 localhost systemd[1]: tmp-crun.YzXGCd.mount: Deactivated successfully. Feb 23 03:49:24 localhost podman[100927]: 2026-02-23 08:49:24.08553184 +0000 UTC m=+0.134491058 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, version=17.1.13, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, tcib_managed=true, build-date=2026-01-12T23:32:04Z, 
com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1766032510) Feb 23 03:49:24 localhost podman[100921]: 2026-02-23 08:49:24.088652396 +0000 UTC m=+0.155796555 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vendor=Red Hat, Inc., container_name=collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container) Feb 23 03:49:24 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:49:24 localhost podman[100950]: 2026-02-23 08:49:24.140623888 +0000 UTC m=+0.191042291 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.13, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public) Feb 23 03:49:24 localhost podman[100927]: 2026-02-23 08:49:24.134805228 +0000 UTC m=+0.183764436 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, release=1766032510, vendor=Red Hat, Inc.) 
Feb 23 03:49:24 localhost podman[100941]: 2026-02-23 08:49:24.040688047 +0000 UTC m=+0.091107400 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, io.buildah.version=1.41.5, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, vcs-type=git, distribution-scope=public) Feb 23 03:49:24 localhost podman[100944]: 2026-02-23 08:49:24.151355109 +0000 UTC m=+0.196856250 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, release=1766032510, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, vcs-type=git, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:49:24 localhost podman[100941]: 2026-02-23 08:49:24.173218513 +0000 UTC m=+0.223637826 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, 
io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, tcib_managed=true) Feb 
23 03:49:24 localhost podman[100925]: 2026-02-23 08:49:24.18026695 +0000 UTC m=+0.234368626 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, release=1766032510, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64) Feb 23 03:49:24 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:49:24 localhost podman[100950]: 2026-02-23 08:49:24.184766329 +0000 UTC m=+0.235184712 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:49:24 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:49:24 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:49:24 localhost podman[100923]: 2026-02-23 08:49:24.059560878 +0000 UTC m=+0.126308435 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.13, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, vcs-type=git, url=https://www.redhat.com, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible) Feb 23 03:49:24 localhost podman[100925]: 2026-02-23 08:49:24.216756885 +0000 UTC m=+0.270858561 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64) Feb 23 03:49:24 localhost podman[100944]: 2026-02-23 08:49:24.223742361 +0000 UTC m=+0.269243502 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, 
io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, vcs-type=git, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4) Feb 23 03:49:24 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:49:24 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:49:24 localhost podman[100922]: 2026-02-23 08:49:24.072729065 +0000 UTC m=+0.138050657 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, release=1766032510, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, version=17.1.13, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Feb 23 03:49:24 localhost podman[100924]: 2026-02-23 08:49:24.287859298 +0000 UTC m=+0.347403122 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5) Feb 23 03:49:24 localhost podman[100924]: 2026-02-23 08:49:24.300711374 +0000 UTC m=+0.360255168 container exec_died 
71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, batch=17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, version=17.1.13, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=) Feb 23 03:49:24 localhost podman[100924]: unhealthy Feb 23 03:49:24 localhost podman[100922]: 2026-02-23 08:49:24.308734271 +0000 UTC m=+0.374055873 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, release=1766032510, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., version=17.1.13, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public) Feb 23 03:49:24 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:49:24 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:49:24 localhost podman[100922]: unhealthy Feb 23 03:49:24 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:49:24 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:49:24 localhost podman[100923]: 2026-02-23 08:49:24.452901787 +0000 UTC m=+0.519649434 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, architecture=x86_64, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1766032510) Feb 23 03:49:24 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:49:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:49:31 localhost podman[101118]: 2026-02-23 08:49:31.015614648 +0000 UTC m=+0.089047246 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, release=1766032510, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd) Feb 23 03:49:31 localhost podman[101118]: 2026-02-23 08:49:31.192720139 +0000 UTC m=+0.266152657 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, vcs-type=git, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:49:31 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:49:37 localhost sshd[101148]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:49:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:49:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:49:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:49:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:49:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:49:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:49:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:49:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:49:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:49:55 localhost podman[101171]: 2026-02-23 08:49:55.074907307 +0000 UTC m=+0.097589500 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc.) 
Feb 23 03:49:55 localhost podman[101160]: 2026-02-23 08:49:55.110138993 +0000 UTC m=+0.140455671 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:49:55 localhost podman[101152]: 2026-02-23 08:49:55.116707256 +0000 UTC m=+0.152476332 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.13, io.buildah.version=1.41.5, architecture=x86_64, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, release=1766032510) Feb 23 03:49:55 localhost podman[101183]: 2026-02-23 08:49:55.12366758 +0000 UTC m=+0.138978565 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, container_name=logrotate_crond, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, com.redhat.component=openstack-cron-container, version=17.1.13, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true) Feb 23 03:49:55 localhost podman[101150]: 2026-02-23 
08:49:55.160130745 +0000 UTC m=+0.203499066 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, version=17.1.13, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-type=git, managed_by=tripleo_ansible) Feb 23 03:49:55 localhost podman[101171]: 2026-02-23 08:49:55.165057817 +0000 UTC m=+0.187740000 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step5, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc.) Feb 23 03:49:55 localhost podman[101186]: 2026-02-23 08:49:55.172182816 +0000 UTC m=+0.183473518 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1766032510, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:49:55 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:49:55 localhost podman[101160]: 2026-02-23 08:49:55.193160413 +0000 UTC m=+0.223477121 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, architecture=x86_64, version=17.1.13, distribution-scope=public, release=1766032510, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:49:55 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:49:55 localhost podman[101151]: 2026-02-23 08:49:55.214507752 +0000 UTC m=+0.252452146 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.5, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, container_name=ovn_controller, tcib_managed=true, release=1766032510, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:49:55 localhost podman[101150]: 2026-02-23 08:49:55.219837885 +0000 UTC m=+0.263206216 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, architecture=x86_64, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, release=1766032510, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:49:55 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:49:55 localhost podman[101154]: 2026-02-23 08:49:55.234689644 +0000 UTC m=+0.268545661 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team) Feb 23 03:49:55 localhost podman[101186]: 2026-02-23 08:49:55.273041156 +0000 UTC m=+0.284331948 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20260112.1, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:49:55 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:49:55 localhost podman[101151]: 2026-02-23 08:49:55.297890442 +0000 UTC m=+0.335834856 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.13, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:49:55 localhost podman[101151]: unhealthy Feb 23 03:49:55 localhost podman[101183]: 2026-02-23 08:49:55.310055248 +0000 UTC m=+0.325366303 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, version=17.1.13, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:49:55 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:49:55 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:49:55 localhost podman[101185]: 2026-02-23 08:49:55.276508343 +0000 UTC m=+0.293397447 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4) Feb 23 03:49:55 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:49:55 localhost podman[101185]: 2026-02-23 08:49:55.359958135 +0000 UTC m=+0.376847219 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, distribution-scope=public, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, 
com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible) Feb 23 03:49:55 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:49:55 localhost podman[101154]: 2026-02-23 08:49:55.379219129 +0000 UTC m=+0.413075146 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.41.5, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, batch=17.1_20260112.1) Feb 23 03:49:55 localhost podman[101154]: unhealthy Feb 23 03:49:55 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:49:55 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. 
Feb 23 03:49:55 localhost podman[101152]: 2026-02-23 08:49:55.458021979 +0000 UTC m=+0.493791125 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, url=https://www.redhat.com, version=17.1.13, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vcs-type=git, distribution-scope=public, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Feb 23 03:49:55 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:50:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:50:02 localhost podman[101425]: 2026-02-23 08:50:02.013225249 +0000 UTC m=+0.090183641 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.13, tcib_managed=true, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z) Feb 23 03:50:02 localhost podman[101425]: 2026-02-23 08:50:02.199897465 +0000 UTC m=+0.276855887 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.buildah.version=1.41.5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, 
konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, tcib_managed=true, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:50:02 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:50:25 localhost sshd[101454]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:50:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:50:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:50:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:50:25 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:50:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:50:25 localhost recover_tripleo_nova_virtqemud[101476]: 62457 Feb 23 03:50:25 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:50:25 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:50:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:50:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:50:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. 
Feb 23 03:50:25 localhost podman[101457]: 2026-02-23 08:50:25.408530581 +0000 UTC m=+0.105238866 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Feb 23 03:50:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:50:25 localhost podman[101500]: 2026-02-23 08:50:25.424311527 +0000 UTC m=+0.066615494 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, url=https://www.redhat.com, release=1766032510, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, managed_by=tripleo_ansible) Feb 23 03:50:25 localhost podman[101500]: 2026-02-23 08:50:25.4311893 +0000 UTC m=+0.073493247 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vcs-type=git, architecture=x86_64, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:50:25 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:50:25 localhost podman[101457]: 2026-02-23 08:50:25.485848475 +0000 UTC m=+0.182556730 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, container_name=nova_compute, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 23 03:50:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:50:25 localhost podman[101499]: 2026-02-23 08:50:25.495497393 +0000 UTC m=+0.138752470 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.13, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, release=1766032510, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack 
Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, vendor=Red Hat, Inc.) Feb 23 03:50:25 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:50:25 localhost podman[101456]: 2026-02-23 08:50:25.501287601 +0000 UTC m=+0.198888423 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, version=17.1.13, container_name=iscsid) Feb 23 03:50:25 localhost podman[101456]: 2026-02-23 08:50:25.53531086 +0000 UTC m=+0.232911682 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., release=1766032510, io.buildah.version=1.41.5, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git) Feb 23 03:50:25 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:50:25 localhost podman[101530]: 2026-02-23 08:50:25.551227921 +0000 UTC m=+0.134646122 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., version=17.1.13, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team) Feb 23 03:50:25 localhost podman[101533]: 2026-02-23 08:50:25.535165816 +0000 UTC m=+0.117468593 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, version=17.1.13, release=1766032510, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:50:25 localhost podman[101455]: 2026-02-23 08:50:25.599663264 +0000 UTC m=+0.297336398 container 
health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, batch=17.1_20260112.1, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, tcib_managed=true, architecture=x86_64, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z) Feb 23 03:50:25 localhost podman[101533]: 2026-02-23 08:50:25.615445901 +0000 UTC m=+0.197748688 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, container_name=ovn_metadata_agent, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, 
vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, io.buildah.version=1.41.5, version=17.1.13, managed_by=tripleo_ansible) Feb 23 03:50:25 localhost podman[101533]: unhealthy Feb 23 03:50:25 localhost 
podman[101530]: 2026-02-23 08:50:25.623685285 +0000 UTC m=+0.207103536 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, version=17.1.13, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:50:25 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:50:25 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:50:25 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:50:25 localhost podman[101571]: 2026-02-23 08:50:25.669352813 +0000 UTC m=+0.170594961 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, url=https://www.redhat.com, container_name=nova_migration_target, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red 
Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Feb 23 03:50:25 localhost podman[101455]: 2026-02-23 08:50:25.686986366 +0000 UTC m=+0.384659540 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd) Feb 23 03:50:25 localhost podman[101475]: 2026-02-23 08:50:25.700662398 +0000 UTC m=+0.388972324 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, 
build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:50:25 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:50:25 localhost podman[101499]: 2026-02-23 08:50:25.730805478 +0000 UTC m=+0.374060655 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.buildah.version=1.41.5, release=1766032510, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-type=git, batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.) Feb 23 03:50:25 localhost podman[101499]: unhealthy Feb 23 03:50:25 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:50:25 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:50:25 localhost podman[101475]: 2026-02-23 08:50:25.749862815 +0000 UTC m=+0.438172751 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 23 03:50:25 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:50:26 localhost podman[101571]: 2026-02-23 08:50:26.007807598 +0000 UTC m=+0.509049686 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, container_name=nova_migration_target, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.13, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Feb 23 03:50:26 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:50:26 localhost systemd[1]: tmp-crun.HcEdXw.mount: Deactivated successfully. Feb 23 03:50:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:50:33 localhost systemd[1]: tmp-crun.CfBql9.mount: Deactivated successfully. 
Feb 23 03:50:33 localhost podman[101655]: 2026-02-23 08:50:33.004863582 +0000 UTC m=+0.080354888 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1766032510, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:50:33 localhost podman[101655]: 2026-02-23 08:50:33.19485247 +0000 UTC m=+0.270343846 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 
'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_id=tripleo_step1, tcib_managed=true) Feb 23 03:50:33 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:50:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:50:56 localhost podman[101682]: 2026-02-23 08:50:56.0413871 +0000 UTC m=+0.105196615 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:50:56 localhost podman[101691]: 2026-02-23 08:50:56.102221475 +0000 UTC m=+0.152740169 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-type=git, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:50:56 localhost podman[101682]: 2026-02-23 08:50:56.122753049 +0000 UTC m=+0.186562524 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.5, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, tcib_managed=true, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
vendor=Red Hat, Inc.) Feb 23 03:50:56 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:50:56 localhost podman[101704]: 2026-02-23 08:50:56.168124058 +0000 UTC m=+0.214736592 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 
17.1 ceilometer-compute, tcib_managed=true, release=1766032510, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, vcs-type=git, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container) Feb 23 03:50:56 localhost podman[101683]: 2026-02-23 08:50:56.066952148 +0000 UTC m=+0.126963486 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, vcs-type=git, batch=17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, com.redhat.component=openstack-ovn-controller-container) Feb 23 03:50:56 localhost podman[101691]: 2026-02-23 08:50:56.173527044 +0000 UTC m=+0.224045738 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, 
url=https://www.redhat.com, io.buildah.version=1.41.5, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 23 03:50:56 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:50:56 localhost podman[101684]: 2026-02-23 08:50:56.135728098 +0000 UTC m=+0.193368263 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent) Feb 23 03:50:56 localhost podman[101685]: 2026-02-23 08:50:56.08646978 +0000 UTC m=+0.136530371 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, batch=17.1_20260112.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, 
distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:50:56 localhost podman[101685]: 2026-02-23 08:50:56.217502899 +0000 UTC m=+0.267563530 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.5, container_name=iscsid, build-date=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, tcib_managed=true, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container) Feb 23 03:50:56 localhost podman[101704]: 2026-02-23 08:50:56.253775518 +0000 UTC m=+0.300388072 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, 
tcib_managed=true, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:50:56 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated 
successfully. Feb 23 03:50:56 localhost podman[101706]: 2026-02-23 08:50:56.263642613 +0000 UTC m=+0.301858169 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, batch=17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 23 03:50:56 localhost podman[101684]: 2026-02-23 08:50:56.268832483 +0000 UTC m=+0.326472668 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1766032510, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, tcib_managed=true) Feb 23 03:50:56 localhost podman[101776]: 2026-02-23 08:50:56.218847221 +0000 UTC m=+0.151519872 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, 
name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5) Feb 23 03:50:56 localhost podman[101706]: 2026-02-23 08:50:56.285821637 +0000 UTC m=+0.324037233 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., version=17.1.13, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 23 03:50:56 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:50:56 localhost podman[101683]: 2026-02-23 08:50:56.305756951 +0000 UTC m=+0.365768249 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, tcib_managed=true, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, url=https://www.redhat.com, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:50:56 localhost podman[101683]: unhealthy Feb 23 03:50:56 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:50:56 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:50:56 localhost podman[101684]: unhealthy Feb 23 03:50:56 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:50:56 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:50:56 localhost podman[101696]: 2026-02-23 08:50:56.256009217 +0000 UTC m=+0.306675766 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, io.buildah.version=1.41.5, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:50:56 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:50:56 localhost podman[101696]: 2026-02-23 08:50:56.388818932 +0000 UTC m=+0.439485501 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, container_name=logrotate_crond, batch=17.1_20260112.1, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:50:56 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:50:56 localhost podman[101776]: 2026-02-23 08:50:56.577889351 +0000 UTC m=+0.510562032 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=nova_migration_target, vcs-type=git, version=17.1.13, config_id=tripleo_step4, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 23 03:50:56 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:51:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:51:04 localhost podman[101997]: 2026-02-23 08:51:04.007625115 +0000 UTC m=+0.079937605 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.buildah.version=1.41.5, url=https://www.redhat.com) Feb 23 03:51:04 localhost podman[101997]: 2026-02-23 08:51:04.197017385 +0000 UTC m=+0.269329885 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.5) Feb 23 03:51:04 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:51:13 localhost sshd[102027]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:51:27 localhost podman[102030]: 2026-02-23 08:51:27.025668982 +0000 UTC m=+0.094598248 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, build-date=2026-01-12T22:36:40Z, distribution-scope=public, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:51:27 localhost podman[102048]: 2026-02-23 08:51:27.044634707 +0000 UTC m=+0.102041008 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.5, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13) Feb 23 03:51:27 localhost podman[102049]: 2026-02-23 08:51:27.103553714 +0000 UTC m=+0.143526957 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, config_id=tripleo_step5, distribution-scope=public, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, url=https://www.redhat.com, release=1766032510, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 
17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=) Feb 23 03:51:27 localhost podman[102037]: 2026-02-23 08:51:27.061373833 +0000 UTC m=+0.120589999 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, io.buildah.version=1.41.5) Feb 23 03:51:27 localhost podman[102037]: 2026-02-23 08:51:27.145867118 +0000 UTC m=+0.205083304 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T22:56:19Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:51:27 localhost podman[102037]: unhealthy Feb 23 03:51:27 localhost podman[102049]: 2026-02-23 08:51:27.154154204 +0000 UTC m=+0.194127447 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, config_id=tripleo_step5, managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git) Feb 23 03:51:27 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process 
exited, code=exited, status=1/FAILURE Feb 23 03:51:27 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:51:27 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:51:27 localhost podman[102048]: 2026-02-23 08:51:27.178443132 +0000 UTC m=+0.235849433 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, container_name=iscsid, url=https://www.redhat.com, release=1766032510, io.openshift.expose-services=, 
vcs-type=git, distribution-scope=public, build-date=2026-01-12T22:34:43Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.13, config_id=tripleo_step3, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 23 03:51:27 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:51:27 localhost podman[102029]: 2026-02-23 08:51:27.148701046 +0000 UTC m=+0.221153520 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, batch=17.1_20260112.1, container_name=collectd, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.13, config_id=tripleo_step3, vcs-type=git) Feb 23 03:51:27 localhost podman[102030]: 2026-02-23 08:51:27.205909509 +0000 UTC m=+0.274838795 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, io.buildah.version=1.41.5, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., vcs-type=git, 
name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20260112.1, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team) Feb 23 03:51:27 localhost podman[102030]: unhealthy Feb 23 03:51:27 localhost podman[102060]: 2026-02-23 08:51:27.217633 +0000 UTC m=+0.260915565 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, release=1766032510, architecture=x86_64, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., 
tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, version=17.1.13) Feb 23 03:51:27 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:51:27 localhost systemd[1]: 
393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:51:27 localhost podman[102060]: 2026-02-23 08:51:27.226845065 +0000 UTC m=+0.270127580 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, release=1766032510, vcs-type=git, version=17.1.13, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public) Feb 23 03:51:27 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:51:27 localhost podman[102031]: 2026-02-23 08:51:27.078654906 +0000 UTC m=+0.148816339 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, vcs-type=git, com.redhat.component=openstack-nova-compute-container) Feb 23 03:51:27 localhost podman[102062]: 2026-02-23 08:51:27.284340637 +0000 UTC m=+0.320870733 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, release=1766032510, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 23 03:51:27 localhost podman[102029]: 2026-02-23 08:51:27.28766415 +0000 UTC m=+0.360116634 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.13, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.buildah.version=1.41.5, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3) Feb 23 03:51:27 localhost podman[102061]: 2026-02-23 08:51:27.296474912 +0000 UTC m=+0.341528541 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, config_id=tripleo_step4, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, distribution-scope=public, version=17.1.13, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 
17.1 ceilometer-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:51:27 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: 
Deactivated successfully. Feb 23 03:51:27 localhost podman[102062]: 2026-02-23 08:51:27.339861509 +0000 UTC m=+0.376391615 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, 
config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, version=17.1.13, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:51:27 localhost podman[102061]: 2026-02-23 08:51:27.346975939 +0000 UTC m=+0.392029548 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, version=17.1.13, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vendor=Red Hat, Inc., release=1766032510, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:51:27 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:51:27 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:51:27 localhost podman[102031]: 2026-02-23 08:51:27.431873786 +0000 UTC m=+0.502035199 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, container_name=nova_migration_target, url=https://www.redhat.com, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 
17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Feb 23 03:51:27 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:51:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:51:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:51:34 localhost recover_tripleo_nova_virtqemud[102227]: 62457 Feb 23 03:51:34 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:51:34 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 23 03:51:35 localhost podman[102220]: 2026-02-23 08:51:35.019786218 +0000 UTC m=+0.093767442 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:51:35 localhost podman[102220]: 2026-02-23 08:51:35.245863218 +0000 UTC m=+0.319844442 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, release=1766032510, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public) Feb 23 03:51:35 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:51:50 localhost sshd[102251]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:51:51 localhost sshd[102253]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:51:58 localhost systemd[1]: tmp-crun.l8gC5Y.mount: Deactivated successfully. 
Feb 23 03:51:58 localhost podman[102255]: 2026-02-23 08:51:58.032571482 +0000 UTC m=+0.101935364 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, tcib_managed=true, version=17.1.13, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:51:58 localhost podman[102255]: 2026-02-23 08:51:58.06526073 +0000 UTC m=+0.134624642 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5) Feb 23 03:51:58 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:51:58 localhost podman[102256]: 2026-02-23 08:51:58.102028364 +0000 UTC m=+0.167278349 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, 
konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:51:58 localhost podman[102258]: 2026-02-23 08:51:58.105536631 +0000 UTC m=+0.161636513 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, vcs-type=git, batch=17.1_20260112.1, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, config_id=tripleo_step4, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 23 03:51:58 localhost podman[102289]: 2026-02-23 08:51:58.082256074 +0000 UTC m=+0.128735090 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, distribution-scope=public, version=17.1.13, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, 
Inc., container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=) Feb 23 03:51:58 localhost podman[102258]: 2026-02-23 08:51:58.140014815 +0000 UTC m=+0.196114717 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, 
config_id=tripleo_step4, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, tcib_managed=true, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:51:58 localhost podman[102258]: unhealthy Feb 23 03:51:58 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:51:58 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. 
Feb 23 03:51:58 localhost podman[102256]: 2026-02-23 08:51:58.165773719 +0000 UTC m=+0.231023694 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-type=git, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:51:58 localhost podman[102256]: unhealthy Feb 23 03:51:58 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:51:58 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:51:58 localhost podman[102270]: 2026-02-23 08:51:58.265837175 +0000 UTC m=+0.309238426 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, release=1766032510, 
name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc.) 
Feb 23 03:51:58 localhost podman[102289]: 2026-02-23 08:51:58.268790206 +0000 UTC m=+0.315269192 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 23 03:51:58 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:51:58 localhost podman[102277]: 2026-02-23 08:51:58.180494143 +0000 UTC m=+0.228291389 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible) Feb 23 03:51:58 localhost podman[102270]: 2026-02-23 08:51:58.290852105 +0000 UTC m=+0.334253386 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO 
Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, tcib_managed=true, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:51:58 localhost podman[102264]: 2026-02-23 08:51:58.141468569 +0000 UTC m=+0.194105755 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Feb 23 03:51:58 localhost podman[102278]: 2026-02-23 08:51:58.250583184 +0000 UTC m=+0.300023001 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, build-date=2026-01-12T23:07:47Z, 
org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13) Feb 23 03:51:58 localhost podman[102277]: 2026-02-23 08:51:58.310981166 +0000 UTC m=+0.358778402 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, vcs-type=git, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, tcib_managed=true, build-date=2026-01-12T22:10:15Z, distribution-scope=public) Feb 23 03:51:58 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:51:58 localhost podman[102278]: 2026-02-23 08:51:58.329809926 +0000 UTC m=+0.379249673 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:51:58 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:51:58 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:51:58 localhost podman[102257]: 2026-02-23 08:51:58.358929365 +0000 UTC m=+0.423300293 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute) Feb 23 03:51:58 localhost podman[102264]: 2026-02-23 08:51:58.371162022 +0000 UTC m=+0.423799208 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, batch=17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, release=1766032510, container_name=iscsid, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Feb 23 03:51:58 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:51:58 localhost podman[102257]: 2026-02-23 08:51:58.727088626 +0000 UTC m=+0.791459584 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1766032510, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4) Feb 23 03:51:58 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:52:01 localhost sshd[102451]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:52:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:52:06 localhost systemd[1]: tmp-crun.wou5KY.mount: Deactivated successfully. 
Feb 23 03:52:06 localhost podman[102529]: 2026-02-23 08:52:06.013797741 +0000 UTC m=+0.088546601 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, url=https://www.redhat.com, config_id=tripleo_step1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, batch=17.1_20260112.1, release=1766032510) Feb 23 03:52:06 localhost podman[102529]: 2026-02-23 08:52:06.184876365 +0000 UTC m=+0.259625265 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., release=1766032510, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public) Feb 23 03:52:06 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:52:09 localhost sshd[102559]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:52:25 localhost systemd[1]: Starting dnf makecache... Feb 23 03:52:26 localhost dnf[102561]: Updating Subscription Management repositories. Feb 23 03:52:27 localhost dnf[102561]: Metadata cache refreshed recently. Feb 23 03:52:28 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Feb 23 03:52:28 localhost systemd[1]: Finished dnf makecache. 
Feb 23 03:52:28 localhost systemd[1]: dnf-makecache.service: Consumed 2.095s CPU time. Feb 23 03:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:52:29 localhost podman[102570]: 2026-02-23 08:52:29.026603694 +0000 UTC m=+0.088315483 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, url=https://www.redhat.com, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z) Feb 23 03:52:29 localhost podman[102562]: 2026-02-23 08:52:29.079566127 +0000 UTC m=+0.153483543 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, release=1766032510, version=17.1.13, com.redhat.component=openstack-collectd-container, vcs-type=git, description=Red Hat 
OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:52:29 localhost 
podman[102572]: 2026-02-23 08:52:29.0896903 +0000 UTC m=+0.148597493 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, name=rhosp-rhel9/openstack-iscsid, version=17.1.13) Feb 23 03:52:29 localhost podman[102591]: 2026-02-23 08:52:29.120765978 +0000 UTC m=+0.170594771 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., vcs-type=git, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13) Feb 23 03:52:29 localhost podman[102572]: 2026-02-23 08:52:29.12571925 +0000 UTC m=+0.184626413 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, architecture=x86_64, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid) Feb 23 03:52:29 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:52:29 localhost podman[102591]: 2026-02-23 08:52:29.140017961 +0000 UTC m=+0.189846764 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, release=1766032510, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.13) Feb 23 03:52:29 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:52:29 localhost podman[102586]: 2026-02-23 08:52:29.130493207 +0000 UTC m=+0.174967745 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, architecture=x86_64, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=) Feb 23 03:52:29 localhost podman[102593]: 2026-02-23 08:52:29.193698346 +0000 UTC m=+0.239202796 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:52:29 localhost podman[102586]: 2026-02-23 08:52:29.213976411 +0000 UTC m=+0.258451019 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., release=1766032510, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, vcs-type=git) Feb 23 03:52:29 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:52:29 localhost podman[102562]: 2026-02-23 08:52:29.26613936 +0000 UTC m=+0.340056786 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, batch=17.1_20260112.1, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, version=17.1.13, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 23 03:52:29 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:52:29 localhost podman[102563]: 2026-02-23 08:52:29.306167394 +0000 UTC m=+0.375098416 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, 
cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:52:29 localhost podman[102563]: 2026-02-23 08:52:29.317990528 +0000 UTC m=+0.386921540 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Feb 23 03:52:29 localhost podman[102593]: 2026-02-23 08:52:29.320152405 +0000 UTC m=+0.365656855 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, io.openshift.expose-services=, vcs-type=git, 
architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, release=1766032510, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z) Feb 23 03:52:29 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:52:29 localhost podman[102577]: 2026-02-23 08:52:29.399934064 +0000 UTC m=+0.442549315 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, batch=17.1_20260112.1, release=1766032510, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, config_id=tripleo_step5, tcib_managed=true, io.buildah.version=1.41.5, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=nova_compute, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 
'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 23 03:52:29 localhost podman[102564]: 2026-02-23 08:52:29.063468611 +0000 UTC m=+0.129174844 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20260112.1, distribution-scope=public, version=17.1.13, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, build-date=2026-01-12T23:32:04Z, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., 
name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git) Feb 23 03:52:29 localhost podman[102563]: unhealthy Feb 23 03:52:29 localhost podman[102577]: 2026-02-23 08:52:29.427737902 +0000 UTC m=+0.470353163 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step5) Feb 23 03:52:29 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:52:29 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:52:29 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:52:29 localhost podman[102564]: 2026-02-23 08:52:29.460462741 +0000 UTC m=+0.526168974 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, tcib_managed=true, release=1766032510, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 
17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:52:29 localhost podman[102570]: 2026-02-23 08:52:29.472624646 +0000 UTC m=+0.534336445 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:52:29 localhost systemd[1]: 
4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:52:29 localhost podman[102570]: unhealthy Feb 23 03:52:29 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:52:29 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:52:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:52:37 localhost podman[102749]: 2026-02-23 08:52:37.012610041 +0000 UTC m=+0.087583331 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, version=17.1.13, vendor=Red Hat, Inc., config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, container_name=metrics_qdr) Feb 23 03:52:37 localhost podman[102749]: 2026-02-23 08:52:37.201247126 +0000 UTC m=+0.276220426 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, config_id=tripleo_step1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, version=17.1.13, io.openshift.expose-services=, batch=17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
container_name=metrics_qdr, vcs-type=git) Feb 23 03:52:37 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:52:50 localhost sshd[102778]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:53:00 localhost podman[102781]: 2026-02-23 08:53:00.047087614 +0000 UTC m=+0.113588933 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, release=1766032510, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Feb 23 03:53:00 localhost podman[102781]: 2026-02-23 08:53:00.082793155 +0000 UTC m=+0.149294474 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, container_name=ovn_controller, vcs-type=git, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, release=1766032510, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller) Feb 23 03:53:00 localhost podman[102781]: unhealthy Feb 23 03:53:00 localhost podman[102793]: 2026-02-23 08:53:00.088754408 +0000 UTC m=+0.142151844 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, batch=17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step4) Feb 23 03:53:00 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:53:00 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:53:00 localhost podman[102793]: 2026-02-23 08:53:00.10080199 +0000 UTC m=+0.154199416 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, batch=17.1_20260112.1, version=17.1.13, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64) Feb 23 03:53:00 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:53:00 localhost podman[102803]: 2026-02-23 08:53:00.149820291 +0000 UTC m=+0.206466467 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 23 03:53:00 localhost podman[102780]: 2026-02-23 08:53:00.193102975 +0000 UTC m=+0.263159715 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, tcib_managed=true, konflux.additional-tags=17.1.13 
17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, version=17.1.13) Feb 23 03:53:00 localhost podman[102780]: 2026-02-23 08:53:00.202701192 +0000 UTC m=+0.272757882 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., release=1766032510, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 23 03:53:00 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:53:00 localhost podman[102810]: 2026-02-23 08:53:00.252625162 +0000 UTC m=+0.305145190 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible) Feb 23 03:53:00 localhost podman[102803]: 2026-02-23 08:53:00.276328402 +0000 UTC m=+0.332974638 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1766032510, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container) Feb 23 03:53:00 localhost podman[102785]: 2026-02-23 08:53:00.234556104 +0000 UTC m=+0.285614097 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, 
build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, url=https://www.redhat.com) Feb 23 03:53:00 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:53:00 localhost podman[102783]: 2026-02-23 08:53:00.336097815 +0000 UTC m=+0.402588404 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:53:00 localhost podman[102784]: 2026-02-23 08:53:00.351699116 +0000 UTC m=+0.406351920 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, tcib_managed=true, container_name=iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid) Feb 23 03:53:00 localhost podman[102784]: 2026-02-23 08:53:00.363671885 +0000 UTC m=+0.418324679 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:53:00 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:53:00 localhost podman[102783]: 2026-02-23 08:53:00.378279846 +0000 UTC m=+0.444770425 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, release=1766032510, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 23 03:53:00 localhost podman[102810]: 2026-02-23 08:53:00.379139142 +0000 UTC m=+0.431659220 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, 
com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, distribution-scope=public, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.41.5) Feb 23 03:53:00 localhost podman[102783]: unhealthy Feb 23 03:53:00 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:53:00 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:53:00 localhost podman[102782]: 2026-02-23 08:53:00.437883464 +0000 UTC m=+0.501887706 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, architecture=x86_64, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:53:00 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:53:00 localhost podman[102785]: 2026-02-23 08:53:00.471019716 +0000 UTC m=+0.522077709 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Feb 23 03:53:00 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:53:00 localhost podman[102782]: 2026-02-23 08:53:00.820098768 +0000 UTC m=+0.884103050 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, 
url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T23:32:04Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container) Feb 23 03:53:00 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:53:05 localhost podman[103075]: 2026-02-23 08:53:05.284330041 +0000 UTC m=+0.083973240 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_BRANCH=main, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, version=7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, 
com.redhat.component=rhceph-container, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:53:05 localhost podman[103075]: 2026-02-23 08:53:05.387802391 +0000 UTC m=+0.187445560 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-type=git, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, architecture=x86_64, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347) Feb 23 03:53:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:53:09 localhost kernel: DROPPING: IN=vlan20 OUT= MACSRC=8a:da:b9:72:ad:79 MACDST=76:31:9b:68:66:c0 MACPROTO=0800 SRC=172.17.0.103 DST=172.17.0.107 LEN=40 TOS=0x00 PREC=0xC0 TTL=64 ID=0 DF PROTO=TCP SPT=6642 DPT=44110 SEQ=0 ACK=429685524 WINDOW=0 RES=0x00 ACK RST URGP=0 Feb 23 03:53:09 localhost systemd[1]: tmp-crun.TvKS6H.mount: Deactivated successfully. Feb 23 03:53:09 localhost podman[103208]: 2026-02-23 08:53:09.651349886 +0000 UTC m=+1.732876510 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:53:10 localhost podman[103208]: 2026-02-23 08:53:10.170891664 +0000 UTC m=+2.252418278 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:53:10 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. 
Feb 23 03:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:53:31 localhost podman[103252]: 2026-02-23 08:53:31.030934565 +0000 UTC m=+0.101331385 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, batch=17.1_20260112.1) Feb 23 03:53:31 localhost podman[103260]: 2026-02-23 08:53:31.09208274 +0000 UTC m=+0.153617387 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 23 03:53:31 localhost 
podman[103296]: 2026-02-23 08:53:31.046874707 +0000 UTC m=+0.081149273 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, tcib_managed=true, io.buildah.version=1.41.5, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vcs-type=git, build-date=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=) Feb 23 03:53:31 localhost podman[103252]: 2026-02-23 08:53:31.110631433 +0000 UTC m=+0.181028263 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.buildah.version=1.41.5, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z) Feb 23 03:53:31 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:53:31 localhost podman[103262]: 2026-02-23 08:53:31.146338714 +0000 UTC m=+0.201423352 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, config_id=tripleo_step3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid) Feb 23 03:53:31 localhost podman[103271]: 2026-02-23 08:53:31.151313566 +0000 UTC m=+0.202711240 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute) Feb 23 03:53:31 
localhost podman[103260]: 2026-02-23 08:53:31.178358211 +0000 UTC m=+0.239892848 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1766032510, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:53:31 localhost podman[103260]: unhealthy Feb 23 03:53:31 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:53:31 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. 
Feb 23 03:53:31 localhost podman[103253]: 2026-02-23 08:53:31.191484976 +0000 UTC m=+0.256995375 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, architecture=x86_64, container_name=ovn_controller, managed_by=tripleo_ansible, io.buildah.version=1.41.5, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 23 03:53:31 localhost podman[103262]: 2026-02-23 08:53:31.204110835 +0000 UTC m=+0.259195523 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:53:31 localhost podman[103289]: 2026-02-23 08:53:31.063073856 +0000 UTC m=+0.101891812 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., batch=17.1_20260112.1, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, 
tcib_managed=true, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z) Feb 23 03:53:31 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:53:31 localhost podman[103289]: 2026-02-23 08:53:31.243840139 +0000 UTC m=+0.282658095 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 23 03:53:31 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:53:31 localhost podman[103279]: 2026-02-23 08:53:31.254098156 +0000 UTC m=+0.301616031 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, container_name=logrotate_crond, io.buildah.version=1.41.5, vcs-type=git, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron) Feb 23 03:53:31 localhost podman[103271]: 2026-02-23 08:53:31.267731746 +0000 UTC m=+0.319129490 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, container_name=nova_compute, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, version=17.1.13, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, batch=17.1_20260112.1) Feb 23 03:53:31 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:53:31 localhost podman[103254]: 2026-02-23 08:53:31.310182725 +0000 UTC m=+0.370803953 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:53:31 localhost podman[103296]: 2026-02-23 08:53:31.32880848 +0000 UTC m=+0.363083016 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, konflux.additional-tags=17.1.13 
17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, version=17.1.13, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red 
Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:53:31 localhost podman[103253]: 2026-02-23 08:53:31.330897824 +0000 UTC m=+0.396408223 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, io.buildah.version=1.41.5, container_name=ovn_controller, url=https://www.redhat.com) Feb 23 03:53:31 localhost podman[103253]: unhealthy Feb 23 03:53:31 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:53:31 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:53:31 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:53:31 localhost podman[103279]: 2026-02-23 08:53:31.43453185 +0000 UTC m=+0.482049675 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, tcib_managed=true, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:53:31 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:53:31 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:53:31 localhost recover_tripleo_nova_virtqemud[103445]: 62457 Feb 23 03:53:31 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:53:31 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 23 03:53:31 localhost podman[103254]: 2026-02-23 08:53:31.730195365 +0000 UTC m=+0.790816593 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, batch=17.1_20260112.1, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.buildah.version=1.41.5) Feb 23 03:53:31 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:53:39 localhost sshd[103447]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:53:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:53:40 localhost podman[103449]: 2026-02-23 08:53:40.996233927 +0000 UTC m=+0.075879970 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.5, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, version=17.1.13, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510) Feb 23 03:53:41 localhost podman[103449]: 2026-02-23 08:53:41.20681589 +0000 UTC m=+0.286461903 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1) Feb 23 03:53:41 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:54:02 localhost systemd[1]: tmp-crun.GDHMUx.mount: Deactivated successfully. Feb 23 03:54:02 localhost podman[103493]: 2026-02-23 08:54:02.074339941 +0000 UTC m=+0.121603211 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 03:54:02 localhost podman[103493]: 2026-02-23 08:54:02.083780041 +0000 UTC m=+0.131043301 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, vcs-type=git, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, release=1766032510, url=https://www.redhat.com, config_id=tripleo_step3, managed_by=tripleo_ansible, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team) Feb 23 03:54:02 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:54:02 localhost podman[103499]: 2026-02-23 08:54:02.052853488 +0000 UTC m=+0.096066163 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, distribution-scope=public, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 
'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team) Feb 23 03:54:02 localhost podman[103499]: 2026-02-23 08:54:02.134638719 +0000 UTC m=+0.177851374 container exec_died 
87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, release=1766032510, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team) Feb 23 03:54:02 localhost podman[103505]: 2026-02-23 08:54:02.145167514 +0000 UTC m=+0.185360266 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, distribution-scope=public) Feb 23 03:54:02 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:54:02 localhost podman[103481]: 2026-02-23 08:54:02.153316446 +0000 UTC m=+0.207726936 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., io.buildah.version=1.41.5, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, version=17.1.13, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=) Feb 23 03:54:02 localhost podman[103481]: 2026-02-23 08:54:02.15898896 +0000 UTC m=+0.213399460 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1766032510, batch=17.1_20260112.1, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:54:02 localhost podman[103481]: unhealthy Feb 23 03:54:02 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:54:02 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:54:02 localhost podman[103478]: 2026-02-23 08:54:02.033282984 +0000 UTC m=+0.097503978 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, managed_by=tripleo_ansible, release=1766032510, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:54:02 localhost podman[103480]: 2026-02-23 08:54:02.087637861 +0000 UTC m=+0.144685842 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.buildah.version=1.41.5, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red 
Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:54:02 localhost podman[103479]: 2026-02-23 08:54:02.193758622 +0000 UTC m=+0.256103767 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, architecture=x86_64, build-date=2026-01-12T22:36:40Z, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20260112.1, io.openshift.expose-services=, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:54:02 localhost podman[103479]: 2026-02-23 08:54:02.208751515 +0000 UTC m=+0.271096640 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, container_name=ovn_controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 23 03:54:02 localhost podman[103479]: unhealthy Feb 23 03:54:02 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:54:02 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:54:02 localhost podman[103478]: 2026-02-23 08:54:02.217700811 +0000 UTC m=+0.281921804 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, version=17.1.13, 
container_name=collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, vcs-type=git, release=1766032510) Feb 23 03:54:02 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:54:02 localhost podman[103505]: 2026-02-23 08:54:02.227456342 +0000 UTC m=+0.267649114 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, batch=17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, 
vcs-type=git, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., io.buildah.version=1.41.5) Feb 23 03:54:02 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:54:02 localhost podman[103518]: 2026-02-23 08:54:02.295060816 +0000 UTC m=+0.326892180 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, batch=17.1_20260112.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team) Feb 23 03:54:02 localhost podman[103518]: 2026-02-23 08:54:02.316215138 +0000 UTC m=+0.348046512 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, 
url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, release=1766032510, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:54:02 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:54:02 localhost podman[103511]: 2026-02-23 08:54:02.353292431 +0000 UTC m=+0.387502318 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, io.buildah.version=1.41.5, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true) Feb 23 03:54:02 localhost podman[103511]: 2026-02-23 08:54:02.382841802 +0000 UTC m=+0.417051679 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red 
Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 23 03:54:02 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:54:02 localhost podman[103480]: 2026-02-23 08:54:02.452891012 +0000 UTC m=+0.509939033 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, batch=17.1_20260112.1, version=17.1.13, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:54:02 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:54:09 localhost kernel: DROPPING: IN=vlan20 OUT= MACSRC=1a:e1:bf:c8:40:06 MACDST=76:31:9b:68:66:c0 MACPROTO=0800 SRC=172.17.0.105 DST=172.17.0.107 LEN=40 TOS=0x00 PREC=0xC0 TTL=64 ID=0 DF PROTO=TCP SPT=6642 DPT=47484 SEQ=0 ACK=1659604407 WINDOW=0 RES=0x00 ACK RST URGP=0 Feb 23 03:54:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:54:11 localhost podman[103683]: 2026-02-23 08:54:11.346774272 +0000 UTC m=+0.045226845 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, config_id=tripleo_step1, release=1766032510, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 23 03:54:11 localhost podman[103683]: 2026-02-23 08:54:11.519727335 +0000 UTC m=+0.218179928 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, config_id=tripleo_step1, distribution-scope=public, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., release=1766032510, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, architecture=x86_64) Feb 23 03:54:11 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:54:26 localhost sshd[103774]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:54:33 localhost systemd[1]: tmp-crun.adsYzj.mount: Deactivated successfully. 
Feb 23 03:54:33 localhost podman[103777]: 2026-02-23 08:54:33.04408046 +0000 UTC m=+0.111876031 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, batch=17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64) Feb 23 03:54:33 localhost podman[103777]: 2026-02-23 08:54:33.053364356 +0000 UTC m=+0.121159937 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vendor=Red Hat, Inc., container_name=ovn_controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 23 03:54:33 localhost podman[103777]: unhealthy Feb 23 03:54:33 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:54:33 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:54:33 localhost podman[103778]: 2026-02-23 08:54:33.133560869 +0000 UTC m=+0.199072309 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1766032510, config_id=tripleo_step4, tcib_managed=true, version=17.1.13, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, container_name=nova_migration_target) Feb 23 03:54:33 localhost podman[103776]: 2026-02-23 08:54:33.1433402 +0000 UTC m=+0.214124033 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, io.buildah.version=1.41.5, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, vcs-type=git) Feb 23 03:54:33 localhost podman[103776]: 2026-02-23 08:54:33.149962044 +0000 UTC m=+0.220745867 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-type=git, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 23 03:54:33 localhost podman[103803]: 2026-02-23 08:54:33.105625407 +0000 UTC m=+0.152751151 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, version=17.1.13, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-type=git, container_name=ceilometer_agent_compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:54:33 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:54:33 localhost podman[103803]: 2026-02-23 08:54:33.185046316 +0000 UTC m=+0.232172170 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, vendor=Red Hat, Inc.) 
Feb 23 03:54:33 localhost podman[103802]: 2026-02-23 08:54:33.196688534 +0000 UTC m=+0.243917481 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, name=rhosp-rhel9/openstack-cron, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:54:33 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:54:33 localhost podman[103802]: 2026-02-23 08:54:33.202736892 +0000 UTC m=+0.249965819 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, io.buildah.version=1.41.5, container_name=logrotate_crond, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack 
TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2026-01-12T22:10:15Z) Feb 23 03:54:33 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:54:33 localhost podman[103780]: 2026-02-23 08:54:33.08561457 +0000 UTC m=+0.147586151 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, architecture=x86_64, tcib_managed=true, vcs-type=git, container_name=ovn_metadata_agent, batch=17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) 
Feb 23 03:54:33 localhost podman[103805]: 2026-02-23 08:54:33.250532055 +0000 UTC m=+0.294473230 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, release=1766032510, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:54:33 localhost podman[103805]: 2026-02-23 08:54:33.279928811 +0000 UTC m=+0.323869946 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, architecture=x86_64, tcib_managed=true, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:54:33 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:54:33 localhost podman[103796]: 2026-02-23 08:54:33.2941644 +0000 UTC m=+0.347235637 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, config_id=tripleo_step5, io.openshift.expose-services=, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, build-date=2026-01-12T23:32:04Z, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute) Feb 23 03:54:33 localhost podman[103780]: 2026-02-23 08:54:33.319046657 +0000 UTC m=+0.381018278 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, 
managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, version=17.1.13, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, 
distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:54:33 localhost podman[103780]: unhealthy Feb 23 03:54:33 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:54:33 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:54:33 localhost podman[103796]: 2026-02-23 08:54:33.374759395 +0000 UTC m=+0.427830662 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, release=1766032510, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-compute, tcib_managed=true, architecture=x86_64, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
batch=17.1_20260112.1, url=https://www.redhat.com, container_name=nova_compute) Feb 23 03:54:33 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:54:33 localhost podman[103787]: 2026-02-23 08:54:33.459333243 +0000 UTC m=+0.509602354 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, version=17.1.13, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com) Feb 23 03:54:33 localhost podman[103787]: 2026-02-23 08:54:33.468566937 +0000 UTC m=+0.518836078 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, release=1766032510, tcib_managed=true, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, vendor=Red Hat, Inc.) Feb 23 03:54:33 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:54:33 localhost podman[103778]: 2026-02-23 08:54:33.528943299 +0000 UTC m=+0.594454769 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.expose-services=, batch=17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:54:33 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:54:34 localhost systemd[1]: tmp-crun.Fnf4Yg.mount: Deactivated successfully. Feb 23 03:54:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:54:41 localhost podman[103976]: 2026-02-23 08:54:41.978037464 +0000 UTC m=+0.052980405 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:54:42 localhost podman[103976]: 2026-02-23 08:54:42.176886545 +0000 UTC m=+0.251829546 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, url=https://www.redhat.com, batch=17.1_20260112.1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, tcib_managed=true, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.buildah.version=1.41.5) Feb 23 03:54:42 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:55:04 localhost podman[104005]: 2026-02-23 08:55:04.03896304 +0000 UTC m=+0.112806282 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1766032510, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, config_id=tripleo_step3, version=17.1.13, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, vcs-type=git, architecture=x86_64, container_name=collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:55:04 localhost podman[104010]: 2026-02-23 08:55:04.018711529 +0000 UTC m=+0.084780872 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, 
container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com) Feb 23 03:55:04 localhost podman[104006]: 2026-02-23 08:55:04.071805308 +0000 UTC m=+0.139762169 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, tcib_managed=true, batch=17.1_20260112.1, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, architecture=x86_64, io.buildah.version=1.41.5, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git) Feb 23 03:55:04 localhost podman[104032]: 2026-02-23 08:55:04.050772682 +0000 UTC m=+0.098731040 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20260112.1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron) Feb 23 03:55:04 localhost podman[104033]: 2026-02-23 08:55:04.103287394 +0000 UTC m=+0.149440406 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, vcs-type=git, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, 
cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, architecture=x86_64, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com) Feb 23 03:55:04 localhost podman[104014]: 2026-02-23 08:55:04.133397528 +0000 UTC m=+0.167765139 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, batch=17.1_20260112.1, managed_by=tripleo_ansible, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, architecture=x86_64, version=17.1.13) Feb 23 03:55:04 localhost podman[104010]: 2026-02-23 08:55:04.152556616 +0000 UTC m=+0.218625949 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, vcs-type=git, config_id=tripleo_step4, 
konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, architecture=x86_64) Feb 23 03:55:04 localhost podman[104010]: unhealthy Feb 23 03:55:04 localhost podman[104006]: 2026-02-23 08:55:04.161979175 +0000 UTC m=+0.229936016 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, release=1766032510, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, batch=17.1_20260112.1, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.13, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:55:04 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:55:04 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:55:04 localhost podman[104006]: unhealthy Feb 23 03:55:04 localhost podman[104014]: 2026-02-23 08:55:04.167858025 +0000 UTC m=+0.202225656 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, container_name=iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, version=17.1.13, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) 
Feb 23 03:55:04 localhost podman[104033]: 2026-02-23 08:55:04.181115392 +0000 UTC m=+0.227268424 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible) Feb 23 03:55:04 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:55:04 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:55:04 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:55:04 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:55:04 localhost podman[104026]: 2026-02-23 08:55:04.154515315 +0000 UTC m=+0.198877833 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, build-date=2026-01-12T23:32:04Z, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, url=https://www.redhat.com, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13) Feb 23 03:55:04 localhost podman[104037]: 2026-02-23 08:55:04.164060229 +0000 UTC m=+0.204865357 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, batch=17.1_20260112.1, release=1766032510, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, 
io.openshift.expose-services=) Feb 23 03:55:04 localhost podman[104032]: 2026-02-23 08:55:04.232260331 +0000 UTC m=+0.280218739 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, 
tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.13, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git) Feb 23 03:55:04 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:55:04 localhost podman[104037]: 2026-02-23 08:55:04.250872362 +0000 UTC m=+0.291677540 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, batch=17.1_20260112.1, release=1766032510, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public) Feb 23 03:55:04 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:55:04 localhost podman[104005]: 2026-02-23 08:55:04.285828704 +0000 UTC m=+0.359671976 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, architecture=x86_64, batch=17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 23 03:55:04 localhost podman[104007]: 2026-02-23 08:55:04.320216809 +0000 UTC m=+0.388953604 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, release=1766032510, vcs-type=git, version=17.1.13, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack 
TripleO Team, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:55:04 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:55:04 localhost podman[104026]: 2026-02-23 08:55:04.388966169 +0000 UTC m=+0.433328677 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, config_id=tripleo_step5, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, version=17.1.13, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute) Feb 23 03:55:04 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:55:04 localhost podman[104007]: 2026-02-23 08:55:04.76606279 +0000 UTC m=+0.834799605 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1766032510, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z) Feb 23 03:55:04 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:55:05 localhost sshd[104202]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:55:12 localhost sshd[104204]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:55:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:55:12 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:55:12 localhost recover_tripleo_nova_virtqemud[104209]: 62457 Feb 23 03:55:12 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:55:12 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 23 03:55:12 localhost podman[104206]: 2026-02-23 08:55:12.478498405 +0000 UTC m=+0.059123254 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, batch=17.1_20260112.1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack TripleO Team) Feb 23 03:55:12 localhost podman[104206]: 2026-02-23 08:55:12.622419271 +0000 UTC m=+0.203044110 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, container_name=metrics_qdr, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, maintainer=OpenStack TripleO Team, architecture=x86_64) Feb 23 03:55:12 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:55:12 localhost sshd[104236]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 03:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:55:35 localhost podman[104331]: 2026-02-23 08:55:35.034490194 +0000 UTC m=+0.093178190 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_id=tripleo_step5, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1) Feb 23 03:55:35 localhost podman[104317]: 2026-02-23 08:55:35.046252156 +0000 UTC m=+0.117359232 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://www.redhat.com, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, release=1766032510, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:55:35 localhost podman[104317]: 2026-02-23 08:55:35.057689337 +0000 UTC m=+0.128796413 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack 
Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vendor=Red Hat, Inc.) 
Feb 23 03:55:35 localhost podman[104351]: 2026-02-23 08:55:35.132561753 +0000 UTC m=+0.178936231 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 
17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.13, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc.) Feb 23 03:55:35 localhost podman[104316]: 2026-02-23 08:55:35.147583665 +0000 UTC m=+0.223749437 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.5, version=17.1.13, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, url=https://www.redhat.com) Feb 23 03:55:35 localhost podman[104316]: 2026-02-23 08:55:35.155618041 +0000 UTC m=+0.231783793 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, architecture=x86_64, distribution-scope=public, config_id=tripleo_step3, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2026-01-12T22:10:15Z, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:55:35 localhost podman[104331]: 2026-02-23 08:55:35.161476531 +0000 UTC m=+0.220164497 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 
'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1) Feb 23 03:55:35 localhost podman[104342]: 2026-02-23 08:55:35.111514138 +0000 UTC m=+0.160670201 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, version=17.1.13, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, architecture=x86_64, config_id=tripleo_step4) Feb 23 03:55:35 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:55:35 localhost podman[104329]: 2026-02-23 08:55:35.085121427 +0000 UTC m=+0.144512214 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.13, config_id=tripleo_step3, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid) Feb 23 03:55:35 localhost podman[104342]: 2026-02-23 08:55:35.191667476 +0000 UTC m=+0.240823529 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, 
distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, 
com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible) Feb 23 03:55:35 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:55:35 localhost podman[104324]: 2026-02-23 08:55:35.200251361 +0000 UTC m=+0.265135097 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., release=1766032510, batch=17.1_20260112.1, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:55:35 localhost podman[104324]: 2026-02-23 08:55:35.210630569 +0000 UTC m=+0.275514325 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, 
release=1766032510, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com) Feb 23 03:55:35 localhost podman[104324]: unhealthy Feb 23 03:55:35 localhost podman[104329]: 2026-02-23 08:55:35.21587826 +0000 UTC m=+0.275269097 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, io.buildah.version=1.41.5, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, release=1766032510, vendor=Red Hat, Inc., config_id=tripleo_step3, container_name=iscsid) Feb 23 03:55:35 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:55:35 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:55:35 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:55:35 localhost podman[104351]: 2026-02-23 08:55:35.23217201 +0000 UTC m=+0.278546508 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, release=1766032510, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., version=17.1.13, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:55:35 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:55:35 localhost podman[104318]: 2026-02-23 08:55:35.259478858 +0000 UTC m=+0.307631560 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_id=tripleo_step4, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Feb 23 03:55:35 localhost podman[104317]: unhealthy Feb 23 03:55:35 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:55:35 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:55:35 localhost podman[104337]: 2026-02-23 08:55:35.083477978 +0000 UTC m=+0.138221272 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, vcs-type=git, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, io.buildah.version=1.41.5, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, version=17.1.13) Feb 23 03:55:35 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:55:35 localhost podman[104337]: 2026-02-23 08:55:35.364725266 +0000 UTC m=+0.419468570 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, distribution-scope=public, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, version=17.1.13) Feb 23 03:55:35 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:55:35 localhost podman[104318]: 2026-02-23 08:55:35.666221587 +0000 UTC m=+0.714374349 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 03:55:35 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:55:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:55:42 localhost podman[104513]: 2026-02-23 08:55:42.987606352 +0000 UTC m=+0.063995315 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:55:43 localhost podman[104513]: 2026-02-23 08:55:43.220774587 +0000 UTC m=+0.297163540 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, version=17.1.13, url=https://www.redhat.com, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible) Feb 23 03:55:43 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:55:55 localhost sshd[104542]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:56:06 localhost podman[104546]: 2026-02-23 08:56:06.040683891 +0000 UTC m=+0.109310455 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 
17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, release=1766032510, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 03:56:06 localhost podman[104547]: 2026-02-23 08:56:06.073296132 +0000 UTC m=+0.133657412 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, version=17.1.13, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, architecture=x86_64, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:56:06 localhost podman[104547]: 2026-02-23 08:56:06.083584058 +0000 UTC m=+0.143945338 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, container_name=ovn_metadata_agent, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13) Feb 23 03:56:06 localhost podman[104547]: unhealthy Feb 23 03:56:06 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:56:06 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:56:06 localhost systemd[1]: tmp-crun.kbdFNj.mount: Deactivated successfully. 
Feb 23 03:56:06 localhost podman[104549]: 2026-02-23 08:56:06.136247694 +0000 UTC m=+0.200476892 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step3, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, version=17.1.13, container_name=iscsid, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1) Feb 23 03:56:06 localhost podman[104549]: 2026-02-23 08:56:06.147305253 +0000 UTC m=+0.211534521 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., architecture=x86_64, build-date=2026-01-12T22:34:43Z) Feb 23 03:56:06 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. 
Feb 23 03:56:06 localhost podman[104578]: 2026-02-23 08:56:06.174709333 +0000 UTC m=+0.226961654 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=) Feb 23 03:56:06 localhost podman[104544]: 2026-02-23 08:56:06.239258654 +0000 UTC m=+0.311932572 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20260112.1, release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, container_name=collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:56:06 localhost podman[104545]: 2026-02-23 08:56:06.150321766 +0000 UTC m=+0.222120167 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, container_name=ovn_controller, batch=17.1_20260112.1, vcs-type=git, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, architecture=x86_64, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:56:06 localhost podman[104578]: 2026-02-23 08:56:06.254808291 +0000 UTC m=+0.307060612 container exec_died 
ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, 
com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi) Feb 23 03:56:06 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:56:06 localhost podman[104544]: 2026-02-23 08:56:06.27693339 +0000 UTC m=+0.349607288 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step3, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, vcs-type=git, build-date=2026-01-12T22:10:15Z, distribution-scope=public) Feb 23 03:56:06 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:56:06 localhost podman[104545]: 2026-02-23 08:56:06.331341579 +0000 UTC m=+0.403139920 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, release=1766032510, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 23 03:56:06 localhost podman[104545]: unhealthy Feb 23 03:56:06 localhost podman[104572]: 2026-02-23 08:56:06.33817837 +0000 UTC m=+0.394048152 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.5, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, architecture=x86_64, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true) Feb 23 03:56:06 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:56:06 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:56:06 localhost podman[104560]: 2026-02-23 08:56:06.381586581 +0000 UTC m=+0.434986527 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:56:06 localhost podman[104572]: 2026-02-23 08:56:06.392910598 +0000 UTC m=+0.448780430 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=) Feb 23 03:56:06 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:56:06 localhost podman[104560]: 2026-02-23 08:56:06.431196593 +0000 UTC m=+0.484596589 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64) Feb 23 03:56:06 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:56:06 localhost podman[104564]: 2026-02-23 08:56:06.447567296 +0000 UTC m=+0.505512162 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step4, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.5, managed_by=tripleo_ansible, version=17.1.13, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:56:06 localhost podman[104564]: 2026-02-23 08:56:06.460910315 +0000 UTC m=+0.518855181 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, container_name=logrotate_crond, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container) Feb 23 03:56:06 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:56:06 localhost podman[104546]: 2026-02-23 08:56:06.516687276 +0000 UTC m=+0.585313840 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, 
name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, version=17.1.13, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Feb 23 03:56:06 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:56:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:56:13 localhost podman[104738]: 2026-02-23 08:56:13.980937686 +0000 UTC m=+0.059787645 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, tcib_managed=true, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat 
OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 23 03:56:14 localhost podman[104738]: 2026-02-23 08:56:14.175801384 +0000 UTC m=+0.254651333 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, 
tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 23 03:56:14 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:56:37 localhost systemd[1]: tmp-crun.PIHRKO.mount: Deactivated successfully. 
Feb 23 03:56:37 localhost podman[104848]: 2026-02-23 08:56:37.056621479 +0000 UTC m=+0.115088882 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, distribution-scope=public) Feb 23 03:56:37 localhost podman[104877]: 2026-02-23 08:56:37.106399367 +0000 UTC m=+0.139547033 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:56:37 localhost podman[104877]: 2026-02-23 08:56:37.124674437 +0000 UTC m=+0.157822103 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, release=1766032510, container_name=ceilometer_agent_compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, maintainer=OpenStack 
TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64) Feb 23 03:56:37 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:56:37 localhost podman[104880]: 2026-02-23 08:56:37.172752573 +0000 UTC m=+0.205678603 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 23 03:56:37 localhost podman[104847]: 2026-02-23 08:56:37.212690588 +0000 UTC m=+0.275509645 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, version=17.1.13, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Feb 23 03:56:37 localhost podman[104860]: 2026-02-23 08:56:37.218793485 +0000 UTC m=+0.262853066 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, 
org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, version=17.1.13, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, architecture=x86_64, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 iscsid) Feb 23 03:56:37 localhost podman[104860]: 2026-02-23 08:56:37.253578852 +0000 UTC m=+0.297638433 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.13, container_name=iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, vcs-type=git, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 23 03:56:37 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:56:37 localhost podman[104846]: 2026-02-23 08:56:37.267586382 +0000 UTC m=+0.334975029 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, release=1766032510, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, version=17.1.13, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:56:37 localhost podman[104847]: 2026-02-23 08:56:37.271902464 +0000 UTC m=+0.334721551 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, container_name=ovn_controller, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, vcs-type=git) Feb 23 03:56:37 localhost podman[104880]: 2026-02-23 08:56:37.271689858 +0000 UTC m=+0.304615898 container 
exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, vcs-type=git) Feb 23 03:56:37 localhost podman[104847]: unhealthy Feb 23 03:56:37 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:56:37 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:56:37 localhost podman[104871]: 2026-02-23 08:56:37.315064048 +0000 UTC m=+0.353548088 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, 
version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, release=1766032510, vcs-type=git) Feb 23 03:56:37 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:56:37 localhost podman[104854]: 2026-02-23 08:56:37.358698508 +0000 UTC m=+0.414558441 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, release=1766032510, batch=17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:56:37 localhost podman[104846]: 2026-02-23 08:56:37.377929308 +0000 UTC m=+0.445317885 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 
17.1_20260112.1, version=17.1.13, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, tcib_managed=true, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 23 03:56:37 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:56:37 localhost podman[104871]: 2026-02-23 08:56:37.388609396 +0000 UTC m=+0.427093406 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.expose-services=, release=1766032510, url=https://www.redhat.com, distribution-scope=public, container_name=nova_compute, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 23 03:56:37 localhost podman[104848]: 2026-02-23 08:56:37.40376154 +0000 UTC m=+0.462228933 container exec_died 
4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, version=17.1.13, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, tcib_managed=true, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Feb 23 03:56:37 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:56:37 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:56:37 localhost podman[104854]: 2026-02-23 08:56:37.442512369 +0000 UTC m=+0.498372272 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:56:37 localhost podman[104854]: unhealthy Feb 23 03:56:37 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:56:37 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:56:37 localhost podman[104873]: 2026-02-23 08:56:37.408504476 +0000 UTC m=+0.447359277 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., container_name=logrotate_crond, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-cron, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, architecture=x86_64, distribution-scope=public, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible) Feb 23 03:56:37 localhost podman[104873]: 2026-02-23 08:56:37.487275133 +0000 UTC m=+0.526129974 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=logrotate_crond, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, version=17.1.13) Feb 23 03:56:37 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:56:38 localhost systemd[1]: tmp-crun.YHiaHx.mount: Deactivated successfully. 
Feb 23 03:56:39 localhost sshd[105045]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:56:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:56:45 localhost podman[105047]: 2026-02-23 08:56:45.003737775 +0000 UTC m=+0.080998736 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, tcib_managed=true, release=1766032510, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, managed_by=tripleo_ansible) Feb 23 03:56:45 localhost podman[105047]: 2026-02-23 08:56:45.199750749 +0000 UTC m=+0.277011690 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.5, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vendor=Red Hat, Inc., config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=) Feb 23 03:56:45 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:57:05 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Feb 23 03:57:05 localhost recover_tripleo_nova_virtqemud[105078]: 62457 Feb 23 03:57:05 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:57:05 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 03:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:57:08 localhost systemd[1]: tmp-crun.55ZSpf.mount: Deactivated successfully. 
Feb 23 03:57:08 localhost podman[105099]: 2026-02-23 08:57:08.030708354 +0000 UTC m=+0.083837474 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute) Feb 23 03:57:08 localhost podman[105080]: 2026-02-23 08:57:08.062562571 +0000 UTC m=+0.129261657 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, io.buildah.version=1.41.5, config_id=tripleo_step4, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 23 03:57:08 localhost podman[105081]: 2026-02-23 08:57:08.071470484 +0000 UTC m=+0.134514848 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, architecture=x86_64, version=17.1.13, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, container_name=nova_migration_target) Feb 23 03:57:08 localhost podman[105099]: 2026-02-23 08:57:08.079648685 +0000 UTC m=+0.132777805 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, version=17.1.13, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute) Feb 23 03:57:08 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. 
Feb 23 03:57:08 localhost podman[105093]: 2026-02-23 08:57:08.130893088 +0000 UTC m=+0.187731791 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13) Feb 23 03:57:08 localhost podman[105107]: 2026-02-23 08:57:08.133767136 +0000 UTC m=+0.171177994 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, version=17.1.13, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-cron-container, vcs-type=git, container_name=logrotate_crond, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z) Feb 23 03:57:08 localhost podman[105093]: 2026-02-23 08:57:08.139625646 +0000 UTC m=+0.196464379 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat 
OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, version=17.1.13, io.buildah.version=1.41.5, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, 
config_id=tripleo_step3) Feb 23 03:57:08 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:57:08 localhost podman[105085]: 2026-02-23 08:57:08.182975335 +0000 UTC m=+0.231222945 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1766032510, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 23 03:57:08 localhost podman[105115]: 2026-02-23 08:57:08.185371999 +0000 UTC m=+0.229997228 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, release=1766032510, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:57:08 localhost podman[105107]: 2026-02-23 08:57:08.190084864 +0000 UTC m=+0.227495722 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:57:08 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:57:08 localhost podman[105119]: 2026-02-23 08:57:08.051253924 +0000 UTC m=+0.089947021 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, config_id=tripleo_step4, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.13, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, 
konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 23 03:57:08 localhost podman[105079]: 2026-02-23 08:57:08.227610485 +0000 UTC m=+0.292021201 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, 
version=17.1.13, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, vcs-type=git, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, managed_by=tripleo_ansible, release=1766032510, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:57:08 localhost podman[105085]: 2026-02-23 08:57:08.229750731 +0000 UTC m=+0.277998351 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, io.openshift.expose-services=, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20260112.1, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 23 03:57:08 localhost podman[105085]: unhealthy Feb 23 03:57:08 localhost podman[105079]: 2026-02-23 08:57:08.236723954 +0000 UTC m=+0.301134720 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, container_name=collectd, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, release=1766032510, build-date=2026-01-12T22:10:15Z, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true) Feb 23 03:57:08 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:57:08 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:57:08 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:57:08 localhost podman[105115]: 2026-02-23 08:57:08.258181273 +0000 UTC m=+0.302806512 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, architecture=x86_64, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, tcib_managed=true, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:57:08 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:57:08 localhost podman[105119]: 2026-02-23 08:57:08.281379704 +0000 UTC m=+0.320072861 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.13, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team) Feb 23 03:57:08 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:57:08 localhost podman[105080]: 2026-02-23 08:57:08.30468239 +0000 UTC m=+0.371381546 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64) Feb 23 03:57:08 localhost podman[105080]: unhealthy Feb 23 03:57:08 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:57:08 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:57:08 localhost podman[105081]: 2026-02-23 08:57:08.416890253 +0000 UTC m=+0.479934667 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, 
container_name=nova_migration_target, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.5, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:57:08 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:57:09 localhost systemd[1]: tmp-crun.bHfbHa.mount: Deactivated successfully. Feb 23 03:57:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:57:16 localhost podman[105266]: 2026-02-23 08:57:16.006344813 +0000 UTC m=+0.084207945 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.5, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, version=17.1.13, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1766032510) Feb 23 03:57:16 localhost podman[105266]: 2026-02-23 08:57:16.204863734 +0000 UTC m=+0.282726786 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:57:16 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:57:26 localhost sshd[105370]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. 
Feb 23 03:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:57:39 localhost podman[105374]: 2026-02-23 08:57:39.032954185 +0000 UTC m=+0.090959411 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, architecture=x86_64, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com) Feb 23 03:57:39 localhost podman[105373]: 2026-02-23 08:57:39.005416191 +0000 UTC m=+0.080571613 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, build-date=2026-01-12T22:36:40Z, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 03:57:39 localhost podman[105375]: 2026-02-23 08:57:39.062201973 +0000 UTC m=+0.131730823 container health_status 
71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:57:39 localhost podman[105387]: 2026-02-23 08:57:39.016893203 +0000 UTC m=+0.077770487 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step5, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, 
architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, 
name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, tcib_managed=true, container_name=nova_compute) Feb 23 03:57:39 localhost podman[105372]: 2026-02-23 08:57:39.124108013 +0000 UTC m=+0.199797961 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, version=17.1.13, name=rhosp-rhel9/openstack-collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Feb 23 03:57:39 localhost podman[105373]: 2026-02-23 08:57:39.137026569 +0000 UTC m=+0.212182041 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 23 03:57:39 localhost podman[105373]: unhealthy Feb 23 03:57:39 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:57:39 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:57:39 localhost podman[105372]: 2026-02-23 08:57:39.156630911 +0000 UTC m=+0.232320859 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp-rhel9/openstack-collectd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:57:39 localhost podman[105404]: 2026-02-23 08:57:39.165262646 +0000 UTC m=+0.219483106 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, 
cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:57:39 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:57:39 localhost podman[105385]: 2026-02-23 08:57:39.107148282 +0000 UTC m=+0.174906747 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, 
managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, batch=17.1_20260112.1, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public) Feb 23 03:57:39 localhost podman[105404]: 2026-02-23 08:57:39.214826566 +0000 UTC m=+0.269047006 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, distribution-scope=public) Feb 23 03:57:39 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:57:39 localhost podman[105403]: 2026-02-23 08:57:39.232420666 +0000 UTC m=+0.290593837 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Feb 23 03:57:39 localhost podman[105385]: 2026-02-23 08:57:39.241849185 +0000 UTC m=+0.309607660 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1766032510, io.buildah.version=1.41.5, batch=17.1_20260112.1, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:57:39 localhost podman[105375]: 2026-02-23 08:57:39.251214132 +0000 UTC m=+0.320743052 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, 
version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, io.buildah.version=1.41.5, 
batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=) Feb 23 03:57:39 localhost podman[105387]: 2026-02-23 08:57:39.251641506 +0000 UTC m=+0.312518840 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, version=17.1.13, distribution-scope=public, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, tcib_managed=true, batch=17.1_20260112.1, container_name=nova_compute) Feb 23 03:57:39 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated 
successfully. Feb 23 03:57:39 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Deactivated successfully. Feb 23 03:57:39 localhost podman[105403]: 2026-02-23 08:57:39.262366344 +0000 UTC m=+0.320539515 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.13, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public) Feb 23 03:57:39 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:57:39 localhost podman[105375]: unhealthy Feb 23 03:57:39 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:57:39 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. 
Feb 23 03:57:39 localhost podman[105392]: 2026-02-23 08:57:39.048705429 +0000 UTC m=+0.095668086 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.13, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, release=1766032510, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 23 03:57:39 localhost podman[105374]: 2026-02-23 08:57:39.384914365 +0000 UTC m=+0.442919611 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, distribution-scope=public, url=https://www.redhat.com, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, container_name=nova_migration_target, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true) Feb 23 03:57:39 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. 
Feb 23 03:57:39 localhost podman[105392]: 2026-02-23 08:57:39.435306631 +0000 UTC m=+0.482269318 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.13, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git) Feb 23 03:57:39 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:57:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 03:57:46 localhost systemd[1]: tmp-crun.KbuVwN.mount: Deactivated successfully. Feb 23 03:57:46 localhost podman[105563]: 2026-02-23 08:57:46.986704375 +0000 UTC m=+0.061607731 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.5, batch=17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.openshift.expose-services=) Feb 23 03:57:47 localhost podman[105563]: 2026-02-23 08:57:47.186786684 +0000 UTC m=+0.261690070 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., container_name=metrics_qdr, architecture=x86_64, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, vcs-type=git, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true) Feb 23 03:57:47 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:57:53 localhost sshd[105591]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:57:53 localhost sshd[105593]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:57:58 localhost sshd[105594]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:58:08 localhost sshd[105596]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. 
Feb 23 03:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:58:10 localhost systemd[1]: tmp-crun.fSvFyZ.mount: Deactivated successfully. Feb 23 03:58:10 localhost podman[105600]: 2026-02-23 08:58:10.075331355 +0000 UTC m=+0.144886817 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute) Feb 23 03:58:10 localhost podman[105631]: 2026-02-23 08:58:10.034213313 +0000 UTC m=+0.081489911 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 23 03:58:10 localhost podman[105622]: 2026-02-23 08:58:10.016339554 +0000 UTC m=+0.071539856 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20260112.1, 
container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4) Feb 23 03:58:10 localhost podman[105598]: 2026-02-23 08:58:10.085985841 +0000 UTC m=+0.154897893 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.buildah.version=1.41.5, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd) Feb 23 03:58:10 localhost podman[105622]: 2026-02-23 08:58:10.151719098 +0000 UTC m=+0.206919410 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, 
com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, io.openshift.expose-services=, release=1766032510, managed_by=tripleo_ansible, 
config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team) Feb 23 03:58:10 localhost podman[105598]: 2026-02-23 08:58:10.171900597 +0000 UTC m=+0.240812709 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true) Feb 23 03:58:10 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:58:10 localhost podman[105620]: 2026-02-23 08:58:10.137965196 +0000 UTC m=+0.188817415 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
url=https://www.redhat.com, container_name=logrotate_crond, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 23 03:58:10 localhost podman[105631]: 2026-02-23 08:58:10.214650479 +0000 UTC m=+0.261927137 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, architecture=x86_64, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, url=https://www.redhat.com) Feb 23 03:58:10 localhost podman[105620]: 2026-02-23 08:58:10.221812819 +0000 UTC m=+0.272665028 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, vcs-type=git, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, architecture=x86_64, tcib_managed=true) Feb 23 03:58:10 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. 
Feb 23 03:58:10 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:58:10 localhost podman[105611]: 2026-02-23 08:58:10.122538252 +0000 UTC m=+0.180500158 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, version=17.1.13, distribution-scope=public, tcib_managed=true, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.5, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team) Feb 23 03:58:10 localhost podman[105599]: 2026-02-23 08:58:10.178050196 +0000 UTC m=+0.248684322 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1766032510, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 03:58:10 localhost podman[105611]: 2026-02-23 08:58:10.305906279 +0000 UTC m=+0.363868215 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20260112.1, container_name=nova_compute, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, version=17.1.13, distribution-scope=public, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5) Feb 23 03:58:10 localhost podman[105611]: unhealthy Feb 23 03:58:10 localhost podman[105599]: 2026-02-23 08:58:10.313948366 +0000 UTC m=+0.384582532 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.13, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:58:10 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:58:10 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed with result 'exit-code'. 
Feb 23 03:58:10 localhost podman[105599]: unhealthy Feb 23 03:58:10 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:58:10 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:58:10 localhost podman[105601]: 2026-02-23 08:58:10.228673369 +0000 UTC m=+0.286204092 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.expose-services=, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 23 03:58:10 localhost podman[105601]: 2026-02-23 08:58:10.363669581 +0000 UTC m=+0.421200284 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20260112.1, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com) Feb 23 03:58:10 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:58:10 localhost podman[105601]: unhealthy Feb 23 03:58:10 localhost podman[105602]: 2026-02-23 08:58:10.280968334 +0000 UTC m=+0.340872030 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., container_name=iscsid, release=1766032510, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container) Feb 23 03:58:10 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:58:10 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. 
Feb 23 03:58:10 localhost podman[105602]: 2026-02-23 08:58:10.413935823 +0000 UTC m=+0.473839549 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, 
url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, container_name=iscsid, release=1766032510, version=17.1.13) Feb 23 03:58:10 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:58:10 localhost podman[105600]: 2026-02-23 08:58:10.455858259 +0000 UTC m=+0.525413821 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5) Feb 23 03:58:10 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:58:13 localhost sshd[105787]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:58:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:58:18 localhost podman[105789]: 2026-02-23 08:58:18.003317252 +0000 UTC m=+0.080632535 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1766032510, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd) Feb 23 03:58:18 localhost podman[105789]: 2026-02-23 08:58:18.201610416 +0000 UTC m=+0.278925759 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.5, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, config_id=tripleo_step1, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:58:18 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 03:58:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22243 DF PROTO=TCP SPT=37794 DPT=9105 SEQ=225215890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0107E6230000000001030307) Feb 23 03:58:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22244 DF PROTO=TCP SPT=37794 DPT=9105 SEQ=225215890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0107EA430000000001030307) Feb 23 03:58:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22245 DF PROTO=TCP SPT=37794 DPT=9105 SEQ=225215890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0107F2440000000001030307) Feb 23 03:58:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22246 DF PROTO=TCP SPT=37794 DPT=9105 SEQ=225215890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010802040000000001030307) Feb 23 03:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 03:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:58:41 localhost systemd[1]: tmp-crun.MoUa6F.mount: Deactivated successfully. Feb 23 03:58:41 localhost podman[105896]: 2026-02-23 08:58:41.020228973 +0000 UTC m=+0.090678574 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, distribution-scope=public, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, url=https://www.redhat.com, config_id=tripleo_step3, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.buildah.version=1.41.5) Feb 23 03:58:41 localhost podman[105923]: 2026-02-23 08:58:41.078411708 +0000 UTC m=+0.121913632 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, 
name=ceilometer_agent_compute, health_status=healthy, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, 
com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 23 03:58:41 localhost podman[105935]: 2026-02-23 08:58:41.037331087 +0000 UTC m=+0.076727576 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:58:41 localhost podman[105923]: 2026-02-23 08:58:41.093995826 +0000 UTC m=+0.137497750 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 
03:58:41 localhost podman[105896]: 2026-02-23 08:58:41.099892776 +0000 UTC m=+0.170342357 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.component=openstack-collectd-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Feb 23 03:58:41 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. Feb 23 03:58:41 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:58:41 localhost podman[105897]: 2026-02-23 08:58:41.134562121 +0000 UTC m=+0.202658950 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, container_name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, batch=17.1_20260112.1, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red 
Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Feb 23 03:58:41 localhost podman[105914]: 2026-02-23 08:58:41.176341622 +0000 UTC m=+0.225516340 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2026-01-12T23:32:04Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vendor=Red Hat, Inc., io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1) Feb 23 03:58:41 localhost podman[105907]: 2026-02-23 08:58:41.188733203 +0000 UTC m=+0.250331432 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.5, distribution-scope=public, maintainer=OpenStack TripleO Team, 
architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, version=17.1.13, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20260112.1) Feb 23 03:58:41 localhost podman[105899]: 2026-02-23 08:58:41.064522202 +0000 UTC m=+0.116983391 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1766032510, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com) Feb 23 03:58:41 localhost podman[105935]: 2026-02-23 08:58:41.223087866 +0000 UTC m=+0.262484345 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, version=17.1.13, 
config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Feb 23 03:58:41 localhost podman[105897]: 2026-02-23 08:58:41.223356084 +0000 UTC m=+0.291452924 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., container_name=ovn_controller, io.openshift.expose-services=, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible) Feb 23 03:58:41 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:58:41 localhost podman[105898]: 2026-02-23 08:58:41.296715425 +0000 UTC m=+0.361809662 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, vcs-type=git) Feb 23 03:58:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4978 DF PROTO=TCP SPT=52006 DPT=9101 SEQ=48917992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01080A8F0000000001030307) Feb 23 03:58:41 localhost podman[105914]: 2026-02-23 08:58:41.318884325 +0000 UTC m=+0.368059043 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, 
distribution-scope=public, architecture=x86_64, config_id=tripleo_step5, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, container_name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 23 03:58:41 localhost podman[105914]: unhealthy Feb 23 03:58:41 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:58:41 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed with result 'exit-code'. 
Feb 23 03:58:41 localhost podman[105899]: 2026-02-23 08:58:41.348510795 +0000 UTC m=+0.400971954 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, 
batch=17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:58:41 localhost podman[105899]: unhealthy Feb 23 03:58:41 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:58:41 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. 
Feb 23 03:58:41 localhost podman[105918]: 2026-02-23 08:58:41.411490957 +0000 UTC m=+0.461786979 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, io.buildah.version=1.41.5, release=1766032510, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z) Feb 23 03:58:41 localhost podman[105897]: unhealthy Feb 23 03:58:41 localhost podman[105918]: 2026-02-23 08:58:41.442314242 +0000 UTC m=+0.492610264 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, release=1766032510, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, version=17.1.13, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, batch=17.1_20260112.1, url=https://www.redhat.com) Feb 23 03:58:41 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:58:41 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:58:41 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. 
Feb 23 03:58:41 localhost podman[105907]: 2026-02-23 08:58:41.48782895 +0000 UTC m=+0.549427249 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.) Feb 23 03:58:41 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:58:41 localhost podman[105898]: 2026-02-23 08:58:41.664788459 +0000 UTC m=+0.729882656 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack 
Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-01-12T23:32:04Z, architecture=x86_64, release=1766032510, managed_by=tripleo_ansible) Feb 23 03:58:41 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. 
Feb 23 03:58:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53180 DF PROTO=TCP SPT=32772 DPT=9102 SEQ=3625861167 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01080E450000000001030307) Feb 23 03:58:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4979 DF PROTO=TCP SPT=52006 DPT=9101 SEQ=48917992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01080E830000000001030307) Feb 23 03:58:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65396 DF PROTO=TCP SPT=57372 DPT=9882 SEQ=1338499586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0108100E0000000001030307) Feb 23 03:58:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53181 DF PROTO=TCP SPT=32772 DPT=9102 SEQ=3625861167 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010812430000000001030307) Feb 23 03:58:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65397 DF PROTO=TCP SPT=57372 DPT=9882 SEQ=1338499586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010814030000000001030307) Feb 23 03:58:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4980 DF PROTO=TCP SPT=52006 DPT=9101 SEQ=48917992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A010816840000000001030307) Feb 23 03:58:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53182 DF PROTO=TCP SPT=32772 DPT=9102 SEQ=3625861167 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01081A430000000001030307) Feb 23 03:58:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65398 DF PROTO=TCP SPT=57372 DPT=9882 SEQ=1338499586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01081C030000000001030307) Feb 23 03:58:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22247 DF PROTO=TCP SPT=37794 DPT=9105 SEQ=225215890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010821840000000001030307) Feb 23 03:58:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4981 DF PROTO=TCP SPT=52006 DPT=9101 SEQ=48917992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010826430000000001030307) Feb 23 03:58:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:58:49 localhost podman[106088]: 2026-02-23 08:58:48.999608727 +0000 UTC m=+0.072380332 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, container_name=metrics_qdr, release=1766032510, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:58:49 localhost podman[106088]: 2026-02-23 08:58:49.210807468 +0000 UTC m=+0.283579173 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 
'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, architecture=x86_64, version=17.1.13) Feb 23 03:58:49 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 03:58:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53183 DF PROTO=TCP SPT=32772 DPT=9102 SEQ=3625861167 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01082A030000000001030307) Feb 23 03:58:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65399 DF PROTO=TCP SPT=57372 DPT=9882 SEQ=1338499586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01082BC30000000001030307) Feb 23 03:58:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57051 DF PROTO=TCP SPT=37024 DPT=9100 SEQ=2070502525 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010830CE0000000001030307) Feb 23 03:58:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57052 DF PROTO=TCP SPT=37024 DPT=9100 SEQ=2070502525 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010834C40000000001030307) Feb 23 03:58:53 localhost sshd[106117]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:58:53 localhost systemd-logind[759]: New session 35 of user zuul. Feb 23 03:58:53 localhost systemd[1]: Started Session 35 of User zuul. 
Feb 23 03:58:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57053 DF PROTO=TCP SPT=37024 DPT=9100 SEQ=2070502525 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01083CC30000000001030307) Feb 23 03:58:54 localhost python3.9[106212]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 03:58:55 localhost python3.9[106306]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 03:58:56 localhost python3.9[106399]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 03:58:56 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 03:58:56 localhost recover_tripleo_nova_virtqemud[106403]: 62457 Feb 23 03:58:56 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 03:58:56 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 23 03:58:56 localhost python3.9[106495]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 03:58:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4982 DF PROTO=TCP SPT=52006 DPT=9101 SEQ=48917992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010847830000000001030307) Feb 23 03:58:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53184 DF PROTO=TCP SPT=32772 DPT=9102 SEQ=3625861167 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010849840000000001030307) Feb 23 03:58:57 localhost python3.9[106588]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 03:58:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65400 DF PROTO=TCP SPT=57372 DPT=9882 SEQ=1338499586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01084B830000000001030307) Feb 23 03:58:58 localhost python3.9[106679]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Feb 23 03:58:59 localhost python3.9[106769]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False 
get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 03:59:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 03:59:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 5036 writes, 22K keys, 5036 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5036 writes, 634 syncs, 7.94 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 03:59:00 localhost python3.9[106861]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Feb 23 03:59:00 localhost sshd[106876]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:59:01 localhost python3.9[106953]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 23 03:59:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35199 DF PROTO=TCP SPT=44342 DPT=9105 SEQ=1454159880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01085B540000000001030307) Feb 23 03:59:02 localhost python3.9[107001]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False 
update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 23 03:59:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35200 DF PROTO=TCP SPT=44342 DPT=9105 SEQ=1454159880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01085F430000000001030307) Feb 23 03:59:03 localhost systemd[1]: session-35.scope: Deactivated successfully. Feb 23 03:59:03 localhost systemd[1]: session-35.scope: Consumed 4.600s CPU time. Feb 23 03:59:03 localhost systemd-logind[759]: Session 35 logged out. Waiting for processes to exit. Feb 23 03:59:03 localhost systemd-logind[759]: Removed session 35. Feb 23 03:59:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 03:59:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 5650 writes, 24K keys, 5650 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5650 writes, 811 syncs, 6.97 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 03:59:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35201 DF PROTO=TCP SPT=44342 DPT=9105 SEQ=1454159880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010867430000000001030307) Feb 23 03:59:09 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35202 DF PROTO=TCP SPT=44342 DPT=9105 SEQ=1454159880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010877040000000001030307) Feb 23 03:59:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56213 DF PROTO=TCP SPT=53690 DPT=9101 SEQ=2520753209 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01087FC00000000001030307) Feb 23 03:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 03:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 03:59:12 localhost podman[107018]: 2026-02-23 08:59:12.014454794 +0000 UTC m=+0.083477612 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, distribution-scope=public, io.buildah.version=1.41.5, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 03:59:12 localhost podman[107018]: 2026-02-23 08:59:12.021116358 +0000 UTC m=+0.090139196 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 03:59:12 localhost podman[107018]: unhealthy Feb 23 03:59:12 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:59:12 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 03:59:12 localhost systemd[1]: tmp-crun.3jxTKb.mount: Deactivated successfully. Feb 23 03:59:12 localhost podman[107017]: 2026-02-23 08:59:12.0589782 +0000 UTC m=+0.132477236 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., release=1766032510, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, version=17.1.13, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 03:59:12 localhost podman[107021]: 2026-02-23 08:59:12.126190591 +0000 UTC m=+0.188484843 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, release=1766032510, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z) Feb 23 03:59:12 localhost podman[107042]: 2026-02-23 08:59:12.08245113 +0000 UTC m=+0.134906880 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, batch=17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, distribution-scope=public) Feb 23 03:59:12 localhost podman[107017]: 2026-02-23 08:59:12.137451748 +0000 UTC m=+0.210950824 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, 
name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public) Feb 23 03:59:12 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 03:59:12 localhost podman[107032]: 2026-02-23 08:59:12.102100033 +0000 UTC m=+0.163008352 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5) Feb 23 03:59:12 localhost podman[107021]: 2026-02-23 08:59:12.161778864 +0000 UTC m=+0.224073106 
container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1766032510, batch=17.1_20260112.1, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z) Feb 23 03:59:12 localhost podman[107021]: unhealthy Feb 23 03:59:12 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:59:12 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. 
Feb 23 03:59:12 localhost podman[107032]: 2026-02-23 08:59:12.181192699 +0000 UTC m=+0.242100988 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Feb 23 03:59:12 localhost podman[107032]: unhealthy Feb 23 03:59:12 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:59:12 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed with result 'exit-code'. 
Feb 23 03:59:12 localhost podman[107019]: 2026-02-23 08:59:12.141642586 +0000 UTC m=+0.207887569 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, 
com.redhat.component=openstack-nova-compute-container, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:59:12 localhost podman[107057]: 2026-02-23 08:59:12.101912277 +0000 UTC m=+0.145781894 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., release=1766032510, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, version=17.1.13, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public) Feb 23 03:59:12 localhost podman[107042]: 2026-02-23 08:59:12.212744797 +0000 UTC m=+0.265200547 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.k8s.description=Red Hat 
OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, version=17.1.13, release=1766032510, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, tcib_managed=true) Feb 23 03:59:12 localhost systemd[1]: 
8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:59:12 localhost podman[107026]: 2026-02-23 08:59:12.176532856 +0000 UTC m=+0.233590598 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step3, distribution-scope=public, container_name=iscsid, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2026-01-12T22:34:43Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1) Feb 23 03:59:12 localhost podman[107026]: 2026-02-23 08:59:12.255462919 +0000 UTC m=+0.312520701 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible) Feb 23 03:59:12 localhost podman[107051]: 2026-02-23 08:59:12.262200505 +0000 UTC m=+0.311436746 container health_status a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, io.buildah.version=1.41.5, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, tcib_managed=true, release=1766032510, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, architecture=x86_64) Feb 23 03:59:12 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:59:12 localhost podman[107051]: 2026-02-23 08:59:12.280124165 +0000 UTC m=+0.329360396 container exec_died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, build-date=2026-01-12T23:07:47Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.5, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 03:59:12 localhost podman[107057]: 2026-02-23 08:59:12.284885551 +0000 UTC m=+0.328755248 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.buildah.version=1.41.5, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public) Feb 23 03:59:12 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Deactivated successfully. 
Feb 23 03:59:12 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:59:12 localhost podman[107019]: 2026-02-23 08:59:12.537735839 +0000 UTC m=+0.603980872 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, vcs-type=git, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:59:12 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:59:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65401 DF PROTO=TCP SPT=57372 DPT=9882 SEQ=1338499586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01088B830000000001030307) Feb 23 03:59:16 localhost sshd[107207]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:59:16 localhost systemd-logind[759]: New session 36 of user zuul. Feb 23 03:59:16 localhost systemd[1]: Started Session 36 of User zuul. 
Feb 23 03:59:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35203 DF PROTO=TCP SPT=44342 DPT=9105 SEQ=1454159880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010897840000000001030307) Feb 23 03:59:17 localhost python3.9[107302]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 23 03:59:17 localhost systemd[1]: Reloading. Feb 23 03:59:18 localhost systemd-sysv-generator[107327]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:59:18 localhost systemd-rc-local-generator[107322]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:59:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:59:19 localhost python3.9[107428]: ansible-ansible.builtin.service_facts Invoked Feb 23 03:59:19 localhost network[107445]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 23 03:59:19 localhost network[107446]: 'network-scripts' will be removed from distribution in near future. Feb 23 03:59:19 localhost network[107447]: It is advised to switch to 'NetworkManager' instead for network management. Feb 23 03:59:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:59:19 localhost podman[107453]: 2026-02-23 08:59:19.450578601 +0000 UTC m=+0.078577582 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, release=1766032510, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, 
io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, tcib_managed=true, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team) Feb 23 03:59:19 localhost podman[107453]: 2026-02-23 08:59:19.680925179 +0000 UTC m=+0.308924150 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, release=1766032510, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, distribution-scope=public, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 23 03:59:19 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 03:59:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 03:59:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40936 DF PROTO=TCP SPT=40366 DPT=9100 SEQ=4198063825 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0108A5FF0000000001030307) Feb 23 03:59:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57056 DF PROTO=TCP SPT=37024 DPT=9100 SEQ=2070502525 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0108AD830000000001030307) Feb 23 03:59:23 localhost python3.9[107749]: ansible-ansible.builtin.service_facts Invoked Feb 23 03:59:23 localhost network[107766]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 23 03:59:23 localhost network[107767]: 'network-scripts' will be removed from distribution in near future. Feb 23 03:59:23 localhost network[107768]: It is advised to switch to 'NetworkManager' instead for network management. Feb 23 03:59:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:59:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56217 DF PROTO=TCP SPT=53690 DPT=9101 SEQ=2520753209 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0108BB830000000001030307) Feb 23 03:59:29 localhost python3.9[107968]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 03:59:29 localhost systemd[1]: Reloading. 
Feb 23 03:59:29 localhost systemd-sysv-generator[107995]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 03:59:29 localhost systemd-rc-local-generator[107991]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 03:59:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 03:59:29 localhost systemd[1]: Stopping ceilometer_agent_compute container... Feb 23 03:59:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44346 DF PROTO=TCP SPT=59030 DPT=9105 SEQ=2835184233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0108D0830000000001030307) Feb 23 03:59:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44347 DF PROTO=TCP SPT=59030 DPT=9105 SEQ=2835184233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0108D4840000000001030307) Feb 23 03:59:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44348 DF PROTO=TCP SPT=59030 DPT=9105 SEQ=2835184233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0108DC830000000001030307) Feb 23 03:59:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44349 DF PROTO=TCP SPT=59030 DPT=9105 SEQ=2835184233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A0108EC430000000001030307) Feb 23 03:59:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16506 DF PROTO=TCP SPT=60356 DPT=9101 SEQ=4293518515 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0108F4F00000000001030307) Feb 23 03:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 03:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 03:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 03:59:42 localhost podman[108023]: 2026-02-23 08:59:42.28264055 +0000 UTC m=+0.105330002 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.13, batch=17.1_20260112.1) Feb 23 03:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 03:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. 
Feb 23 03:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 03:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 03:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 03:59:42 localhost podman[108023]: 2026-02-23 08:59:42.322513014 +0000 UTC m=+0.145202426 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, container_name=collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, config_id=tripleo_step3, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510) Feb 23 03:59:42 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 03:59:42 localhost podman[108072]: Error: container a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 is not running Feb 23 03:59:42 localhost podman[108025]: 2026-02-23 08:59:42.335498832 +0000 UTC m=+0.148499847 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible) Feb 23 03:59:42 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Main process exited, code=exited, status=125/n/a Feb 23 03:59:42 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Failed with result 'exit-code'. 
Feb 23 03:59:42 localhost podman[108066]: 2026-02-23 08:59:42.464040586 +0000 UTC m=+0.159858756 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, version=17.1.13, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 23 03:59:42 localhost podman[108025]: 2026-02-23 08:59:42.470824384 +0000 UTC m=+0.283825369 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 03:59:42 localhost podman[108025]: unhealthy Feb 23 03:59:42 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:59:42 localhost systemd[1]: 
71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 03:59:42 localhost podman[108024]: 2026-02-23 08:59:42.38170563 +0000 UTC m=+0.200915355 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20260112.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z) Feb 23 03:59:42 localhost podman[108083]: 2026-02-23 08:59:42.413902488 +0000 UTC m=+0.094313005 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, io.buildah.version=1.41.5, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, version=17.1.13, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 03:59:42 localhost podman[108063]: 2026-02-23 08:59:42.52122171 +0000 UTC m=+0.222538568 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, release=1766032510, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, io.openshift.expose-services=, 
url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.13, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 23 03:59:42 localhost podman[108066]: 2026-02-23 08:59:42.526006897 +0000 UTC m=+0.221825077 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, 
build-date=2026-01-12T22:10:15Z, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, release=1766032510, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, 
tcib_managed=true, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, vcs-type=git) Feb 23 03:59:42 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 03:59:42 localhost podman[108083]: 2026-02-23 08:59:42.546808146 +0000 UTC m=+0.227218653 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z) Feb 23 03:59:42 localhost podman[108065]: 2026-02-23 08:59:42.444693752 +0000 UTC m=+0.143838453 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, batch=17.1_20260112.1, container_name=nova_compute, 
distribution-scope=public, io.buildah.version=1.41.5, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, managed_by=tripleo_ansible) Feb 23 03:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 03:59:42 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 03:59:42 localhost podman[108024]: 2026-02-23 08:59:42.565844859 +0000 UTC m=+0.385054584 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, container_name=ovn_controller, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13) Feb 23 03:59:42 localhost podman[108024]: unhealthy Feb 23 03:59:42 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:59:42 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 03:59:42 localhost podman[108063]: 2026-02-23 08:59:42.585909116 +0000 UTC m=+0.287225984 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, vcs-type=git, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.5, version=17.1.13, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Feb 23 03:59:42 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 03:59:42 localhost podman[108179]: 2026-02-23 08:59:42.625272194 +0000 UTC m=+0.058879518 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 03:59:42 localhost podman[108065]: 2026-02-23 08:59:42.681266902 +0000 UTC m=+0.380411583 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.buildah.version=1.41.5, vcs-type=git, build-date=2026-01-12T23:32:04Z, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step5) Feb 23 03:59:42 localhost podman[108065]: unhealthy Feb 23 03:59:42 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Main process exited, code=exited, status=1/FAILURE Feb 23 03:59:42 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed with result 'exit-code'. 
Feb 23 03:59:43 localhost podman[108179]: 2026-02-23 08:59:43.023066399 +0000 UTC m=+0.456673803 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1766032510, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.5, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1) Feb 23 03:59:43 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 03:59:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55118 DF PROTO=TCP SPT=33340 DPT=9102 SEQ=359030552 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0108FF830000000001030307) Feb 23 03:59:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44350 DF PROTO=TCP SPT=59030 DPT=9105 SEQ=2835184233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01090D830000000001030307) Feb 23 03:59:49 localhost sshd[108201]: main: sshd: ssh-rsa algorithm is disabled Feb 23 03:59:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 03:59:50 localhost podman[108203]: 2026-02-23 08:59:50.004789362 +0000 UTC m=+0.089883339 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, config_id=tripleo_step1, batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 23 03:59:50 localhost podman[108203]: 2026-02-23 08:59:50.223858644 +0000 UTC m=+0.308952631 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, version=17.1.13, architecture=x86_64) Feb 23 03:59:50 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 03:59:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47225 DF PROTO=TCP SPT=40202 DPT=9100 SEQ=4057628879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01091B2E0000000001030307) Feb 23 03:59:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47227 DF PROTO=TCP SPT=40202 DPT=9100 SEQ=4057628879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010927430000000001030307) Feb 23 03:59:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16510 DF PROTO=TCP SPT=60356 DPT=9101 SEQ=4293518515 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010931830000000001030307) Feb 23 04:00:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5827 DF PROTO=TCP SPT=47048 DPT=9105 SEQ=1498336121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010945B40000000001030307) Feb 23 04:00:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5828 DF PROTO=TCP SPT=47048 DPT=9105 SEQ=1498336121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010949C30000000001030307) Feb 23 04:00:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5829 DF PROTO=TCP SPT=47048 DPT=9105 SEQ=1498336121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A010951C30000000001030307) Feb 23 04:00:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5830 DF PROTO=TCP SPT=47048 DPT=9105 SEQ=1498336121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010961830000000001030307) Feb 23 04:00:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58807 DF PROTO=TCP SPT=53674 DPT=9101 SEQ=2678030097 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01096A200000000001030307) Feb 23 04:00:12 localhost podman[108009]: time="2026-02-23T09:00:12Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL" Feb 23 04:00:12 localhost systemd[1]: libpod-a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.scope: Deactivated successfully. Feb 23 04:00:12 localhost systemd[1]: libpod-a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.scope: Consumed 5.075s CPU time. 
Feb 23 04:00:12 localhost podman[108009]: 2026-02-23 09:00:12.039489383 +0000 UTC m=+42.108488369 container died a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, batch=17.1_20260112.1, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.buildah.version=1.41.5, version=17.1.13, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Feb 23 04:00:12 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.timer: Deactivated successfully. Feb 23 04:00:12 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17. Feb 23 04:00:12 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Failed to open /run/systemd/transient/a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: No such file or directory Feb 23 04:00:12 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17-userdata-shm.mount: Deactivated successfully. 
Feb 23 04:00:12 localhost podman[108009]: 2026-02-23 09:00:12.094847002 +0000 UTC m=+42.163845968 container cleanup a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, version=17.1.13, release=1766032510, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 23 04:00:12 localhost podman[108009]: ceilometer_agent_compute Feb 23 04:00:12 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.timer: Failed to open /run/systemd/transient/a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.timer: No such file or directory Feb 23 04:00:12 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Failed to open /run/systemd/transient/a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: No such file or directory Feb 23 04:00:12 localhost podman[108237]: 2026-02-23 09:00:12.130765134 +0000 UTC m=+0.079519400 container cleanup a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, 
release=1766032510, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 04:00:12 localhost systemd[1]: libpod-conmon-a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.scope: Deactivated successfully. Feb 23 04:00:12 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.timer: Failed to open /run/systemd/transient/a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.timer: No such file or directory Feb 23 04:00:12 localhost systemd[1]: a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: Failed to open /run/systemd/transient/a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17.service: No such file or directory Feb 23 04:00:12 localhost podman[108252]: 2026-02-23 09:00:12.212563823 +0000 UTC m=+0.051516451 container cleanup a5e0d9dd868c9b81ae66df9617eb3b68aabf72a94b0dbe58e42e14afa2a0ac17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 23 04:00:12 localhost podman[108252]: ceilometer_agent_compute Feb 23 04:00:12 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully. Feb 23 04:00:12 localhost systemd[1]: Stopped ceilometer_agent_compute container. Feb 23 04:00:12 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.058s CPU time, no IO. 
Feb 23 04:00:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 04:00:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 04:00:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 04:00:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 04:00:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 04:00:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 04:00:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 04:00:12 localhost podman[108358]: 2026-02-23 09:00:12.843552203 +0000 UTC m=+0.111932994 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, config_id=tripleo_step4, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc.) Feb 23 04:00:12 localhost podman[108369]: 2026-02-23 09:00:12.848913688 +0000 UTC m=+0.103423514 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, version=17.1.13, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.5, managed_by=tripleo_ansible, url=https://www.redhat.com) Feb 23 04:00:12 localhost podman[108358]: 2026-02-23 09:00:12.855907473 +0000 UTC m=+0.124288274 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, 
io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc.) 
Feb 23 04:00:12 localhost podman[108358]: unhealthy Feb 23 04:00:12 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:00:12 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 04:00:12 localhost podman[108369]: 2026-02-23 09:00:12.874947977 +0000 UTC m=+0.129457823 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, url=https://www.redhat.com, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team) Feb 23 04:00:12 localhost podman[108369]: unhealthy Feb 23 04:00:12 localhost podman[108382]: 2026-02-23 09:00:12.933107591 +0000 UTC m=+0.180952812 container health_status ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, 
name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, vcs-type=git, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 23 04:00:12 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:00:12 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed with result 'exit-code'. Feb 23 04:00:12 localhost podman[108357]: 2026-02-23 09:00:12.948234146 +0000 UTC m=+0.220527578 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3) Feb 23 04:00:12 localhost podman[108357]: 2026-02-23 09:00:12.986994325 +0000 UTC m=+0.259287727 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, version=17.1.13, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, vcs-type=git, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 
collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.component=openstack-collectd-container) Feb 23 04:00:13 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. Feb 23 04:00:13 localhost systemd[1]: var-lib-containers-storage-overlay-143338a49c9acf9635b1502d1345d3ee14aaf761c2ec280ecc67cd9c1fe8adb0-merged.mount: Deactivated successfully. Feb 23 04:00:13 localhost podman[108382]: 2026-02-23 09:00:13.040016882 +0000 UTC m=+0.287862073 container exec_died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 23 04:00:13 localhost python3.9[108356]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 04:00:13 localhost systemd[1]: tmp-crun.bBZUXN.mount: Deactivated successfully. 
Feb 23 04:00:13 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Deactivated successfully. Feb 23 04:00:13 localhost podman[108373]: 2026-02-23 09:00:13.074652415 +0000 UTC m=+0.322499306 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 23 04:00:13 localhost podman[108359]: 2026-02-23 09:00:12.991143102 +0000 UTC m=+0.248641530 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, distribution-scope=public, 
config_id=tripleo_step4, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=) Feb 23 04:00:13 localhost podman[108373]: 2026-02-23 09:00:13.1047975 +0000 UTC m=+0.352644401 container exec_died 
8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., release=1766032510, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, version=17.1.13, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, container_name=logrotate_crond, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red 
Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 23 04:00:13 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 04:00:13 localhost podman[108359]: 2026-02-23 09:00:13.124756342 +0000 UTC m=+0.382254720 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, version=17.1.13, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team) Feb 23 04:00:13 localhost systemd[1]: Reloading. 
Feb 23 04:00:13 localhost podman[108359]: unhealthy Feb 23 04:00:13 localhost podman[108492]: 2026-02-23 09:00:13.182518384 +0000 UTC m=+0.111198212 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container) Feb 23 04:00:13 localhost podman[108365]: 2026-02-23 09:00:13.199624599 +0000 UTC m=+0.462416338 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z) Feb 23 04:00:13 localhost systemd-rc-local-generator[108542]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:00:13 localhost systemd-sysv-generator[108545]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 04:00:13 localhost podman[108365]: 2026-02-23 09:00:13.236844441 +0000 UTC m=+0.499636170 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com, version=17.1.13, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, 
konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, distribution-scope=public, vendor=Red Hat, Inc.) Feb 23 04:00:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:00:13 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:00:13 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 04:00:13 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 04:00:13 localhost systemd[1]: Stopping ceilometer_agent_ipmi container... 
Feb 23 04:00:13 localhost podman[108492]: 2026-02-23 09:00:13.616757707 +0000 UTC m=+0.545437475 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 23 04:00:13 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 04:00:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18318 DF PROTO=TCP SPT=39262 DPT=9102 SEQ=2261085617 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010975830000000001030307) Feb 23 04:00:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5831 DF PROTO=TCP SPT=47048 DPT=9105 SEQ=1498336121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010981840000000001030307) Feb 23 04:00:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 04:00:20 localhost podman[108576]: 2026-02-23 09:00:20.756094528 +0000 UTC m=+0.086686811 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, batch=17.1_20260112.1, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, container_name=metrics_qdr, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 04:00:20 localhost podman[108576]: 2026-02-23 09:00:20.962611345 +0000 UTC m=+0.293203578 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1766032510, container_name=metrics_qdr, vcs-type=git, batch=17.1_20260112.1, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.13, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 04:00:20 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. 
Feb 23 04:00:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22987 DF PROTO=TCP SPT=49188 DPT=9100 SEQ=2777107145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0109905F0000000001030307) Feb 23 04:00:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22989 DF PROTO=TCP SPT=49188 DPT=9100 SEQ=2777107145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01099C830000000001030307) Feb 23 04:00:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58811 DF PROTO=TCP SPT=53674 DPT=9101 SEQ=2678030097 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0109A5830000000001030307) Feb 23 04:00:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39165 DF PROTO=TCP SPT=42330 DPT=9105 SEQ=795543958 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0109BAE40000000001030307) Feb 23 04:00:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39166 DF PROTO=TCP SPT=42330 DPT=9105 SEQ=795543958 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0109BF030000000001030307) Feb 23 04:00:33 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 04:00:33 localhost recover_tripleo_nova_virtqemud[108681]: 62457 Feb 23 04:00:33 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Feb 23 04:00:33 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 04:00:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39167 DF PROTO=TCP SPT=42330 DPT=9105 SEQ=795543958 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0109C7030000000001030307) Feb 23 04:00:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:f0:86:db MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.107 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=43006 SEQ=3397907381 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Feb 23 04:00:39 localhost sshd[108682]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:00:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46636 DF PROTO=TCP SPT=57502 DPT=9101 SEQ=277252045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0109DF520000000001030307) Feb 23 04:00:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 04:00:43 localhost systemd[1]: tmp-crun.7BUUF9.mount: Deactivated successfully. Feb 23 04:00:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 04:00:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. 
Feb 23 04:00:43 localhost podman[108684]: 2026-02-23 09:00:43.034893243 +0000 UTC m=+0.110628646 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, io.openshift.expose-services=, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true) Feb 23 04:00:43 localhost podman[108684]: 2026-02-23 09:00:43.055709321 +0000 UTC m=+0.131444654 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, batch=17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.13, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 04:00:43 localhost podman[108684]: unhealthy Feb 23 04:00:43 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:00:43 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 04:00:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. 
Feb 23 04:00:43 localhost podman[108701]: 2026-02-23 09:00:43.135641344 +0000 UTC m=+0.094181741 container health_status 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1766032510, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, version=17.1.13, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 23 04:00:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. Feb 23 04:00:43 localhost podman[108723]: Error: container ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 is not running Feb 23 04:00:43 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Main process exited, code=exited, status=125/n/a Feb 23 04:00:43 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Failed with result 'exit-code'. 
Feb 23 04:00:43 localhost podman[108701]: 2026-02-23 09:00:43.170243485 +0000 UTC m=+0.128783922 container exec_died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, version=17.1.13, maintainer=OpenStack TripleO Team, container_name=collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.buildah.version=1.41.5) Feb 23 04:00:43 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Deactivated successfully. 
Feb 23 04:00:43 localhost podman[108703]: 2026-02-23 09:00:43.231632879 +0000 UTC m=+0.185425290 container health_status 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, url=https://www.redhat.com, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 
'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1) Feb 23 04:00:43 localhost podman[108703]: 2026-02-23 09:00:43.24634547 +0000 UTC m=+0.200137901 container exec_died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, container_name=nova_compute) Feb 23 04:00:43 localhost podman[108703]: unhealthy Feb 23 04:00:43 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:00:43 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed with result 'exit-code'. 
Feb 23 04:00:43 localhost podman[108745]: 2026-02-23 09:00:43.206385085 +0000 UTC m=+0.055705391 container health_status 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, release=1766032510, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 23 04:00:43 localhost podman[108745]: 2026-02-23 09:00:43.285162092 +0000 UTC m=+0.134482378 container exec_died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, distribution-scope=public, version=17.1.13, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond) Feb 23 04:00:43 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Deactivated successfully. Feb 23 04:00:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 04:00:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 04:00:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. 
Feb 23 04:00:44 localhost podman[108776]: 2026-02-23 09:00:44.004449451 +0000 UTC m=+0.080505732 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Feb 23 04:00:44 localhost podman[108776]: 2026-02-23 09:00:44.040013972 +0000 UTC m=+0.116070203 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, distribution-scope=public) Feb 23 04:00:44 localhost podman[108776]: unhealthy Feb 23 04:00:44 localhost systemd[1]: tmp-crun.lXoURA.mount: Deactivated successfully. Feb 23 04:00:44 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:00:44 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 04:00:44 localhost podman[108775]: 2026-02-23 09:00:44.055436755 +0000 UTC m=+0.133883909 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, version=17.1.13, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, build-date=2026-01-12T23:32:04Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 
nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute) Feb 23 04:00:44 localhost podman[108777]: 2026-02-23 09:00:44.107909425 +0000 UTC m=+0.177794737 container health_status 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510) Feb 23 04:00:44 
localhost podman[108777]: 2026-02-23 09:00:44.141389602 +0000 UTC m=+0.211274844 container exec_died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, com.redhat.component=openstack-iscsid-container, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 04:00:44 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Deactivated successfully. Feb 23 04:00:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46638 DF PROTO=TCP SPT=57502 DPT=9101 SEQ=277252045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0109EB430000000001030307) Feb 23 04:00:44 localhost podman[108775]: 2026-02-23 09:00:44.41673428 +0000 UTC m=+0.495181494 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20260112.1) Feb 23 04:00:44 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. 
Feb 23 04:00:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47873 DF PROTO=TCP SPT=55980 DPT=9882 SEQ=1935494154 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0109F5830000000001030307) Feb 23 04:00:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15197 DF PROTO=TCP SPT=51790 DPT=9100 SEQ=3590629944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010A058F0000000001030307) Feb 23 04:00:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. Feb 23 04:00:51 localhost podman[108836]: 2026-02-23 09:00:51.247622185 +0000 UTC m=+0.073053282 container health_status 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_id=tripleo_step1, version=17.1.13, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, vcs-type=git, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd) Feb 23 04:00:51 localhost podman[108836]: 2026-02-23 09:00:51.495309675 +0000 UTC m=+0.320740772 container exec_died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, distribution-scope=public, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, tcib_managed=true, managed_by=tripleo_ansible) Feb 23 04:00:51 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Deactivated successfully. Feb 23 04:00:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22992 DF PROTO=TCP SPT=49188 DPT=9100 SEQ=2777107145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010A0D840000000001030307) Feb 23 04:00:55 localhost podman[108561]: time="2026-02-23T09:00:55Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL" Feb 23 04:00:55 localhost systemd[1]: libpod-ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.scope: Deactivated successfully. Feb 23 04:00:55 localhost systemd[1]: libpod-ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.scope: Consumed 5.886s CPU time. 
Feb 23 04:00:55 localhost podman[108561]: 2026-02-23 09:00:55.620768552 +0000 UTC m=+42.078977615 container died ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 23 04:00:55 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.timer: Deactivated successfully. Feb 23 04:00:55 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863. Feb 23 04:00:55 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Failed to open /run/systemd/transient/ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: No such file or directory Feb 23 04:00:55 localhost systemd[1]: tmp-crun.OgrHI0.mount: Deactivated successfully. Feb 23 04:00:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863-userdata-shm.mount: Deactivated successfully. Feb 23 04:00:55 localhost systemd[1]: var-lib-containers-storage-overlay-9d9e45d9514da84d6daa9fa4a7ce739f1d3aea192977320b7cd282ec5c552f31-merged.mount: Deactivated successfully. 
Feb 23 04:00:55 localhost podman[108561]: 2026-02-23 09:00:55.687546841 +0000 UTC m=+42.145755854 container cleanup ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, config_id=tripleo_step4, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, distribution-scope=public, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 04:00:55 localhost podman[108561]: ceilometer_agent_ipmi Feb 23 04:00:55 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.timer: Failed to open /run/systemd/transient/ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.timer: No such file or directory Feb 23 04:00:55 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Failed to open /run/systemd/transient/ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: No such file or directory Feb 23 04:00:55 localhost podman[108867]: 2026-02-23 09:00:55.755224978 +0000 UTC m=+0.123258753 container cleanup ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, version=17.1.13, build-date=2026-01-12T23:07:30Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, release=1766032510) Feb 23 04:00:55 localhost systemd[1]: libpod-conmon-ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.scope: Deactivated successfully. 
Feb 23 04:00:55 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.timer: Failed to open /run/systemd/transient/ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.timer: No such file or directory Feb 23 04:00:55 localhost systemd[1]: ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: Failed to open /run/systemd/transient/ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863.service: No such file or directory Feb 23 04:00:55 localhost podman[108881]: 2026-02-23 09:00:55.857685832 +0000 UTC m=+0.066702858 container cleanup ac9a1ee8e38cb0b8fdfe7feefb4038c880d86e1d60f7b9e51c88bc40c7e2b863 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '209b2ea170f45545f80720644a8137d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=) Feb 23 04:00:55 localhost podman[108881]: ceilometer_agent_ipmi Feb 23 04:00:55 localhost systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully. Feb 23 04:00:55 localhost systemd[1]: Stopped ceilometer_agent_ipmi container. Feb 23 04:00:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:f0:86:db MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.107 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=43016 SEQ=3340069442 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Feb 23 04:00:56 localhost python3.9[108985]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:00:56 localhost systemd[1]: Reloading. 
Feb 23 04:00:56 localhost systemd-sysv-generator[109019]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:00:56 localhost systemd-rc-local-generator[109012]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:00:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:00:57 localhost systemd[1]: Stopping collectd container... Feb 23 04:00:57 localhost systemd[1]: libpod-1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.scope: Deactivated successfully. Feb 23 04:00:57 localhost systemd[1]: libpod-1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.scope: Consumed 1.805s CPU time. Feb 23 04:00:57 localhost podman[109026]: 2026-02-23 09:00:57.750700844 +0000 UTC m=+0.715883776 container died 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 
'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, version=17.1.13, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, vcs-type=git) Feb 23 04:00:57 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.timer: Deactivated successfully. 
Feb 23 04:00:57 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b. Feb 23 04:00:57 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Failed to open /run/systemd/transient/1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: No such file or directory Feb 23 04:00:57 localhost systemd[1]: tmp-crun.pY88RC.mount: Deactivated successfully. Feb 23 04:00:57 localhost podman[109026]: 2026-02-23 09:00:57.809004863 +0000 UTC m=+0.774187805 container cleanup 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, version=17.1.13, release=1766032510, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 23 04:00:57 localhost podman[109026]: collectd Feb 23 04:00:57 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.timer: Failed to open /run/systemd/transient/1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.timer: No such file or directory Feb 23 04:00:57 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Failed to open /run/systemd/transient/1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: No such file or directory Feb 23 04:00:57 localhost podman[109041]: 2026-02-23 09:00:57.842025626 +0000 UTC m=+0.080209242 container 
cleanup 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, vcs-type=git, tcib_managed=true, 
io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, batch=17.1_20260112.1, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510) Feb 23 04:00:57 localhost systemd[1]: tripleo_collectd.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:00:57 localhost systemd[1]: libpod-conmon-1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.scope: Deactivated successfully. 
Feb 23 04:00:57 localhost podman[109071]: error opening file `/run/crun/1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b/status`: No such file or directory Feb 23 04:00:57 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.timer: Failed to open /run/systemd/transient/1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.timer: No such file or directory Feb 23 04:00:57 localhost systemd[1]: 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: Failed to open /run/systemd/transient/1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b.service: No such file or directory Feb 23 04:00:57 localhost podman[109058]: 2026-02-23 09:00:57.941792847 +0000 UTC m=+0.073699292 container cleanup 1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, container_name=collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible) Feb 23 04:00:57 localhost podman[109058]: collectd Feb 23 04:00:57 localhost systemd[1]: tripleo_collectd.service: Failed with result 'exit-code'. Feb 23 04:00:57 localhost systemd[1]: Stopped collectd container. Feb 23 04:00:58 localhost systemd[1]: var-lib-containers-storage-overlay-52c78c509d85cf7de25ab03c4f1696d08f61fbbe2c31ef788a4d3ccaab70010a-merged.mount: Deactivated successfully. 
Feb 23 04:00:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1464ab1a4dc6c2fa3a13a65489b50913ab04b7faa064960c7b5029499365f81b-userdata-shm.mount: Deactivated successfully. Feb 23 04:00:58 localhost python3.9[109164]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:00:58 localhost systemd[1]: Reloading. Feb 23 04:00:58 localhost systemd-rc-local-generator[109190]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:00:58 localhost systemd-sysv-generator[109193]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:00:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:00:59 localhost systemd[1]: Stopping iscsid container... Feb 23 04:00:59 localhost systemd[1]: libpod-828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.scope: Deactivated successfully. 
Feb 23 04:00:59 localhost podman[109205]: 2026-02-23 09:00:59.160707666 +0000 UTC m=+0.080100669 container died 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid) Feb 23 04:00:59 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.timer: Deactivated successfully. Feb 23 04:00:59 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc. Feb 23 04:00:59 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Failed to open /run/systemd/transient/828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: No such file or directory Feb 23 04:00:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc-userdata-shm.mount: Deactivated successfully. Feb 23 04:00:59 localhost systemd[1]: var-lib-containers-storage-overlay-cf7ff2e7cc809310771ac3fc05b2a8f4a3ab1e20a385f107124aff4afabfa04a-merged.mount: Deactivated successfully. 
Feb 23 04:00:59 localhost podman[109205]: 2026-02-23 09:00:59.210830334 +0000 UTC m=+0.130223307 container cleanup 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, version=17.1.13, maintainer=OpenStack TripleO Team, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1766032510, batch=17.1_20260112.1) Feb 23 04:00:59 localhost podman[109205]: iscsid Feb 23 04:00:59 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.timer: Failed to open /run/systemd/transient/828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.timer: No such file or directory Feb 23 04:00:59 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Failed to open /run/systemd/transient/828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: No such file or directory Feb 23 04:00:59 localhost podman[109217]: 2026-02-23 09:00:59.242747823 +0000 UTC m=+0.071555646 container cleanup 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, build-date=2026-01-12T22:34:43Z, 
konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, release=1766032510, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Feb 23 04:00:59 localhost systemd[1]: libpod-conmon-828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.scope: Deactivated successfully. 
Feb 23 04:00:59 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.timer: Failed to open /run/systemd/transient/828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.timer: No such file or directory Feb 23 04:00:59 localhost systemd[1]: 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: Failed to open /run/systemd/transient/828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc.service: No such file or directory Feb 23 04:00:59 localhost podman[109233]: 2026-02-23 09:00:59.35445809 +0000 UTC m=+0.072682871 container cleanup 828edaefcf3f079c41d04f2fcbd5333cc720ce688d4fac709fa8d49f676a6bdc (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 04:00:59 localhost podman[109233]: iscsid Feb 23 04:00:59 localhost systemd[1]: tripleo_iscsid.service: Deactivated successfully. Feb 23 04:00:59 localhost systemd[1]: Stopped iscsid container. Feb 23 04:01:00 localhost python3.9[109337]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:01:00 localhost systemd[1]: Reloading. Feb 23 04:01:00 localhost systemd-sysv-generator[109366]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 04:01:00 localhost systemd-rc-local-generator[109362]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:01:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:01:00 localhost systemd[1]: Stopping logrotate_crond container... Feb 23 04:01:00 localhost systemd[1]: libpod-8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.scope: Deactivated successfully. Feb 23 04:01:00 localhost podman[109378]: 2026-02-23 09:01:00.630183843 +0000 UTC m=+0.081700099 container died 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git) Feb 23 04:01:00 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.timer: Deactivated successfully. Feb 23 04:01:00 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276. 
Feb 23 04:01:00 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Failed to open /run/systemd/transient/8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: No such file or directory Feb 23 04:01:00 localhost podman[109378]: 2026-02-23 09:01:00.680419434 +0000 UTC m=+0.131935710 container cleanup 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, architecture=x86_64, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 04:01:00 localhost podman[109378]: logrotate_crond Feb 23 04:01:00 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.timer: Failed to open /run/systemd/transient/8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.timer: No such file or directory Feb 23 04:01:00 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Failed to open /run/systemd/transient/8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: No such file or directory Feb 23 04:01:00 localhost podman[109391]: 2026-02-23 09:01:00.72038141 +0000 UTC m=+0.080010046 container cleanup 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, architecture=x86_64, release=1766032510, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, container_name=logrotate_crond, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1) Feb 23 04:01:00 localhost systemd[1]: 
libpod-conmon-8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.scope: Deactivated successfully. Feb 23 04:01:00 localhost podman[109421]: error opening file `/run/crun/8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276/status`: No such file or directory Feb 23 04:01:00 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.timer: Failed to open /run/systemd/transient/8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.timer: No such file or directory Feb 23 04:01:00 localhost systemd[1]: 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: Failed to open /run/systemd/transient/8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276.service: No such file or directory Feb 23 04:01:00 localhost podman[109408]: 2026-02-23 09:01:00.837604127 +0000 UTC m=+0.080945155 container cleanup 8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, com.redhat.component=openstack-cron-container, vcs-type=git, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, architecture=x86_64, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=) Feb 23 04:01:00 localhost podman[109408]: logrotate_crond Feb 23 04:01:00 localhost systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully. Feb 23 04:01:00 localhost systemd[1]: Stopped logrotate_crond container. 
Feb 23 04:01:01 localhost python3.9[109514]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:01:01 localhost systemd[1]: var-lib-containers-storage-overlay-d1d734fa5d6fa4a105819fcbe3ae6278295f7115eb830775cb18f638504a55ec-merged.mount: Deactivated successfully. Feb 23 04:01:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a60de27706ec4b304fd21181c28347472c1a56ab4961fe9097f70c49ba9c276-userdata-shm.mount: Deactivated successfully. Feb 23 04:01:01 localhost systemd[1]: Reloading. Feb 23 04:01:01 localhost systemd-rc-local-generator[109537]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:01:01 localhost systemd-sysv-generator[109540]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:01:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:01:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=663 DF PROTO=TCP SPT=47236 DPT=9105 SEQ=3880608294 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010A30130000000001030307) Feb 23 04:01:02 localhost systemd[1]: Stopping metrics_qdr container... 
Feb 23 04:01:02 localhost kernel: qdrouterd[55140]: segfault at 0 ip 00007fefd930b7cb sp 00007ffd5f915af0 error 4 in libc.so.6[7fefd92a8000+175000] Feb 23 04:01:02 localhost kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9 Feb 23 04:01:02 localhost systemd[1]: Created slice Slice /system/systemd-coredump. Feb 23 04:01:02 localhost systemd[1]: Started Process Core Dump (PID 109578/UID 0). Feb 23 04:01:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:f0:86:db MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.107 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=43016 SEQ=3340069442 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Feb 23 04:01:02 localhost systemd-coredump[109579]: Resource limits disable core dumping for process 55140 (qdrouterd). Feb 23 04:01:02 localhost systemd-coredump[109579]: Process 55140 (qdrouterd) of user 42465 dumped core. Feb 23 04:01:02 localhost systemd[1]: systemd-coredump@0-109578-0.service: Deactivated successfully. 
Feb 23 04:01:02 localhost podman[109565]: 2026-02-23 09:01:02.264110775 +0000 UTC m=+0.206567849 container died 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, release=1766032510, architecture=x86_64, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr) Feb 23 04:01:02 localhost systemd[1]: libpod-779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.scope: Deactivated successfully. Feb 23 04:01:02 localhost systemd[1]: libpod-779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.scope: Consumed 28.714s CPU time. Feb 23 04:01:02 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.timer: Deactivated successfully. Feb 23 04:01:02 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5. 
Feb 23 04:01:02 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Failed to open /run/systemd/transient/779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: No such file or directory Feb 23 04:01:02 localhost podman[109565]: 2026-02-23 09:01:02.321788824 +0000 UTC m=+0.264245898 container cleanup 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, config_id=tripleo_step1, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd) Feb 23 04:01:02 localhost podman[109565]: metrics_qdr Feb 23 04:01:02 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.timer: Failed to open /run/systemd/transient/779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.timer: No such file or directory Feb 23 04:01:02 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Failed to open /run/systemd/transient/779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: No such file or directory Feb 23 04:01:02 localhost podman[109585]: 2026-02-23 09:01:02.382479537 +0000 UTC m=+0.106869530 container cleanup 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, release=1766032510, vcs-type=git, io.openshift.expose-services=, batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, url=https://www.redhat.com) Feb 23 04:01:02 localhost systemd[1]: tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a Feb 23 04:01:02 localhost systemd[1]: libpod-conmon-779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.scope: Deactivated successfully. Feb 23 04:01:02 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.timer: Failed to open /run/systemd/transient/779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.timer: No such file or directory Feb 23 04:01:02 localhost systemd[1]: 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: Failed to open /run/systemd/transient/779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5.service: No such file or directory Feb 23 04:01:02 localhost podman[109598]: 2026-02-23 09:01:02.47676524 +0000 UTC m=+0.062253341 container cleanup 779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b2fe9ad44af593cfea29d5504ea414bc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 04:01:02 localhost podman[109598]: metrics_qdr Feb 23 04:01:02 localhost systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'. Feb 23 04:01:02 localhost systemd[1]: Stopped metrics_qdr container. Feb 23 04:01:02 localhost systemd[1]: var-lib-containers-storage-overlay-3e3b6cf8a686f25c71602c058fe0b6ad924b3e6f22bfe2d699e90cd91e187aeb-merged.mount: Deactivated successfully. 
Feb 23 04:01:02 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-779288c5cfa77b3d91d7ceda381ec04092f7a01eccc747ba9e54196d498b57a5-userdata-shm.mount: Deactivated successfully. Feb 23 04:01:03 localhost python3.9[109704]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:01:03 localhost sshd[109706]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:01:03 localhost python3.9[109799]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:01:04 localhost python3.9[109892]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:01:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=665 DF PROTO=TCP SPT=47236 DPT=9105 SEQ=3880608294 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010A3C030000000001030307) Feb 23 04:01:05 localhost python3.9[109985]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:01:05 localhost systemd[1]: Reloading. Feb 23 04:01:05 localhost systemd-rc-local-generator[110012]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:01:05 localhost systemd-sysv-generator[110017]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:01:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:01:05 localhost systemd[1]: Stopping nova_compute container... Feb 23 04:01:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:f0:86:db MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.107 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=43016 SEQ=3340069442 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Feb 23 04:01:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26544 DF PROTO=TCP SPT=54756 DPT=9101 SEQ=3088868752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010A54800000000001030307) Feb 23 04:01:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 04:01:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 04:01:13 localhost podman[110038]: Error: container 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea is not running Feb 23 04:01:13 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Main process exited, code=exited, status=125/n/a Feb 23 04:01:13 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed with result 'exit-code'. Feb 23 04:01:13 localhost systemd[1]: tmp-crun.7iOj8O.mount: Deactivated successfully. 
Feb 23 04:01:13 localhost podman[110037]: 2026-02-23 09:01:13.57150551 +0000 UTC m=+0.148276551 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, version=17.1.13, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., 
io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, container_name=ovn_controller, distribution-scope=public, architecture=x86_64) Feb 23 04:01:13 localhost podman[110037]: 2026-02-23 09:01:13.587664266 +0000 UTC m=+0.164435317 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.13, architecture=x86_64, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1766032510, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 23 04:01:13 localhost podman[110037]: unhealthy Feb 23 04:01:13 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:01:13 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 04:01:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39663 DF PROTO=TCP SPT=47898 DPT=9102 SEQ=1466840839 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010A5F830000000001030307) Feb 23 04:01:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 04:01:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. 
Feb 23 04:01:14 localhost podman[110070]: 2026-02-23 09:01:14.992412696 +0000 UTC m=+0.068349408 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc.) Feb 23 04:01:15 localhost systemd[1]: tmp-crun.zdekmt.mount: Deactivated successfully. Feb 23 04:01:15 localhost podman[110071]: 2026-02-23 09:01:15.075187606 +0000 UTC m=+0.144407742 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, tcib_managed=true, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1766032510, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 23 04:01:15 localhost podman[110071]: 2026-02-23 09:01:15.114520543 +0000 UTC 
m=+0.183740639 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 23 04:01:15 localhost podman[110071]: unhealthy Feb 23 04:01:15 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:01:15 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. 
Feb 23 04:01:15 localhost podman[110070]: 2026-02-23 09:01:15.356235499 +0000 UTC m=+0.432172161 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, 
io.buildah.version=1.41.5, release=1766032510, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target) Feb 23 04:01:15 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 04:01:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=667 DF PROTO=TCP SPT=47236 DPT=9105 SEQ=3880608294 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010A6B830000000001030307) Feb 23 04:01:19 localhost sshd[110113]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:01:20 localhost sshd[110115]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:01:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52001 DF PROTO=TCP SPT=38092 DPT=9100 SEQ=12150323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010A7ABF0000000001030307) Feb 23 04:01:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52003 DF PROTO=TCP SPT=38092 DPT=9100 SEQ=12150323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010A86C40000000001030307) Feb 23 
04:01:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26548 DF PROTO=TCP SPT=54756 DPT=9101 SEQ=3088868752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010A91830000000001030307) Feb 23 04:01:27 localhost sshd[110227]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:01:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60768 DF PROTO=TCP SPT=54722 DPT=9105 SEQ=1984194147 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010AA5430000000001030307) Feb 23 04:01:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60769 DF PROTO=TCP SPT=54722 DPT=9105 SEQ=1984194147 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010AA9430000000001030307) Feb 23 04:01:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60770 DF PROTO=TCP SPT=54722 DPT=9105 SEQ=1984194147 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010AB1430000000001030307) Feb 23 04:01:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60771 DF PROTO=TCP SPT=54722 DPT=9105 SEQ=1984194147 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010AC1030000000001030307) Feb 23 04:01:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11507 DF PROTO=TCP SPT=34554 DPT=9101 
SEQ=4095805020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010AC9B00000000001030307) Feb 23 04:01:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 04:01:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 04:01:43 localhost podman[110245]: Error: container 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea is not running Feb 23 04:01:43 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Main process exited, code=exited, status=125/n/a Feb 23 04:01:43 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed with result 'exit-code'. Feb 23 04:01:44 localhost podman[110244]: 2026-02-23 09:01:44.051212526 +0000 UTC m=+0.127721379 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ovn_controller, io.buildah.version=1.41.5, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 23 04:01:44 localhost podman[110244]: 2026-02-23 09:01:44.095779533 +0000 UTC m=+0.172288306 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.openshift.expose-services=, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510) Feb 23 04:01:44 localhost podman[110244]: unhealthy Feb 23 04:01:44 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:01:44 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. 
Feb 23 04:01:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56003 DF PROTO=TCP SPT=33662 DPT=9882 SEQ=2106663543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010AD5840000000001030307) Feb 23 04:01:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 04:01:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 04:01:45 localhost podman[110275]: 2026-02-23 09:01:45.99707394 +0000 UTC m=+0.074357203 container health_status 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, tcib_managed=true, architecture=x86_64, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4) Feb 23 04:01:46 localhost podman[110276]: 2026-02-23 09:01:46.054144291 +0000 UTC m=+0.129500085 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, 
version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20260112.1, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true) Feb 23 04:01:46 localhost podman[110276]: 2026-02-23 09:01:46.067587693 +0000 UTC m=+0.142943467 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 23 04:01:46 localhost podman[110276]: unhealthy Feb 23 04:01:46 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:01:46 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. 
Feb 23 04:01:46 localhost podman[110275]: 2026-02-23 09:01:46.340667073 +0000 UTC m=+0.417950276 container exec_died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, batch=17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 23 04:01:46 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Deactivated successfully. Feb 23 04:01:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60772 DF PROTO=TCP SPT=54722 DPT=9105 SEQ=1984194147 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010AE1840000000001030307) Feb 23 04:01:47 localhost podman[110025]: time="2026-02-23T09:01:47Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL" Feb 23 04:01:47 localhost systemd[1]: libpod-87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.scope: Deactivated successfully. Feb 23 04:01:47 localhost systemd[1]: libpod-87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.scope: Consumed 27.864s CPU time. 
Feb 23 04:01:47 localhost podman[110025]: 2026-02-23 09:01:47.912152118 +0000 UTC m=+42.102034165 container died 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.openshift.expose-services=, version=17.1.13, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 23 04:01:47 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.timer: Deactivated successfully. Feb 23 04:01:47 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea. Feb 23 04:01:47 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed to open /run/systemd/transient/87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: No such file or directory Feb 23 04:01:47 localhost systemd[1]: tmp-crun.ZHPfy7.mount: Deactivated successfully. 
Feb 23 04:01:47 localhost systemd[1]: var-lib-containers-storage-overlay-34739f95a235983f1fb7239ff664b6ee8463858b2ebd021927c4c756cf80d140-merged.mount: Deactivated successfully. Feb 23 04:01:48 localhost podman[110025]: 2026-02-23 09:01:48.022808943 +0000 UTC m=+42.212690960 container cleanup 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, vcs-type=git, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible) Feb 23 04:01:48 localhost podman[110025]: nova_compute Feb 23 04:01:48 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.timer: Failed to open /run/systemd/transient/87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.timer: No such file or directory Feb 23 04:01:48 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed to open /run/systemd/transient/87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: No 
such file or directory Feb 23 04:01:48 localhost podman[110315]: 2026-02-23 09:01:48.038363811 +0000 UTC m=+0.118375454 container cleanup 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Feb 23 04:01:48 localhost systemd[1]: libpod-conmon-87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.scope: Deactivated successfully. 
Feb 23 04:01:48 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.timer: Failed to open /run/systemd/transient/87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.timer: No such file or directory Feb 23 04:01:48 localhost systemd[1]: 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: Failed to open /run/systemd/transient/87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea.service: No such file or directory Feb 23 04:01:48 localhost podman[110330]: 2026-02-23 09:01:48.131154138 +0000 UTC m=+0.061429246 container cleanup 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, version=17.1.13, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_id=tripleo_step5, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 04:01:48 localhost podman[110330]: nova_compute Feb 23 04:01:48 localhost systemd[1]: tripleo_nova_compute.service: Deactivated successfully. 
Feb 23 04:01:48 localhost systemd[1]: Stopped nova_compute container. Feb 23 04:01:48 localhost systemd[1]: tripleo_nova_compute.service: Consumed 1.110s CPU time, no IO. Feb 23 04:01:48 localhost python3.9[110434]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:01:48 localhost systemd[1]: Reloading. Feb 23 04:01:48 localhost systemd-sysv-generator[110461]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:01:48 localhost systemd-rc-local-generator[110457]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:01:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:01:49 localhost systemd[1]: Stopping nova_migration_target container... Feb 23 04:01:49 localhost systemd[1]: libpod-4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.scope: Deactivated successfully. Feb 23 04:01:49 localhost systemd[1]: libpod-4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.scope: Consumed 34.253s CPU time. 
Feb 23 04:01:49 localhost podman[110474]: 2026-02-23 09:01:49.28435745 +0000 UTC m=+0.074623090 container died 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, version=17.1.13, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 04:01:49 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.timer: Deactivated successfully. Feb 23 04:01:49 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06. Feb 23 04:01:49 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Failed to open /run/systemd/transient/4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: No such file or directory Feb 23 04:01:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06-userdata-shm.mount: Deactivated successfully. Feb 23 04:01:49 localhost systemd[1]: var-lib-containers-storage-overlay-0c8308c71cbe12c222d2d2f2aba4033b465662c05472c3bcfaab28cf167545b8-merged.mount: Deactivated successfully. 
Feb 23 04:01:49 localhost podman[110474]: 2026-02-23 09:01:49.3264234 +0000 UTC m=+0.116689000 container cleanup 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
com.redhat.component=openstack-nova-compute-container, version=17.1.13, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 23 04:01:49 localhost podman[110474]: nova_migration_target Feb 23 04:01:49 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.timer: Failed to open /run/systemd/transient/4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.timer: No such file or directory Feb 23 04:01:49 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Failed to open /run/systemd/transient/4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: No such file or directory Feb 23 04:01:49 localhost podman[110486]: 2026-02-23 09:01:49.368483591 +0000 UTC m=+0.073443174 container cleanup 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, vendor=Red Hat, Inc.) 
Feb 23 04:01:49 localhost systemd[1]: libpod-conmon-4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.scope: Deactivated successfully. Feb 23 04:01:49 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.timer: Failed to open /run/systemd/transient/4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.timer: No such file or directory Feb 23 04:01:49 localhost systemd[1]: 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: Failed to open /run/systemd/transient/4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06.service: No such file or directory Feb 23 04:01:49 localhost podman[110504]: 2026-02-23 09:01:49.463789306 +0000 UTC m=+0.067113870 container cleanup 4f811ff14ccd87c3367cd80440bab3b2d7c971c4c1df9b5f11df0e6275a61d06 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, container_name=nova_migration_target, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Feb 23 04:01:49 localhost podman[110504]: nova_migration_target Feb 23 04:01:49 localhost systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully. Feb 23 04:01:49 localhost systemd[1]: Stopped nova_migration_target container. Feb 23 04:01:51 localhost python3.9[110607]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:01:51 localhost systemd[1]: Reloading. 
Feb 23 04:01:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49242 DF PROTO=TCP SPT=48218 DPT=9100 SEQ=1470427337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010AEFEF0000000001030307) Feb 23 04:01:51 localhost systemd-rc-local-generator[110632]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:01:51 localhost systemd-sysv-generator[110637]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:01:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:01:51 localhost systemd[1]: Stopping nova_virtlogd_wrapper container... Feb 23 04:01:51 localhost systemd[1]: libpod-c2adc26eff5b9c8a26fccd39d4f074c9536ce1fe2a4c09d9fb2faacfe924a94e.scope: Deactivated successfully. 
Feb 23 04:01:51 localhost podman[110648]: 2026-02-23 09:01:51.446955163 +0000 UTC m=+0.070336859 container died c2adc26eff5b9c8a26fccd39d4f074c9536ce1fe2a4c09d9fb2faacfe924a94e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20260112.1, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:31:49Z, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_virtlogd_wrapper, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.buildah.version=1.41.5, distribution-scope=public, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-libvirt, vendor=Red Hat, Inc.) 
Feb 23 04:01:51 localhost podman[110648]: 2026-02-23 09:01:51.489179449 +0000 UTC m=+0.112561105 container cleanup c2adc26eff5b9c8a26fccd39d4f074c9536ce1fe2a4c09d9fb2faacfe924a94e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, config_id=tripleo_step3, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=nova_virtlogd_wrapper, release=1766032510, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.13, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 23 04:01:51 localhost podman[110648]: nova_virtlogd_wrapper Feb 23 04:01:51 localhost podman[110662]: 2026-02-23 09:01:51.557966209 +0000 UTC m=+0.100641008 container cleanup c2adc26eff5b9c8a26fccd39d4f074c9536ce1fe2a4c09d9fb2faacfe924a94e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, 
name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, config_id=tripleo_step3, container_name=nova_virtlogd_wrapper, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, build-date=2026-01-12T23:31:49Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:31:49Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container) Feb 23 04:01:52 localhost systemd[1]: var-lib-containers-storage-overlay-f517692be756bbbec8b52ba00fac8538d0b4cc258170a641ad09cd15a7f1f00b-merged.mount: Deactivated successfully. Feb 23 04:01:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c2adc26eff5b9c8a26fccd39d4f074c9536ce1fe2a4c09d9fb2faacfe924a94e-userdata-shm.mount: Deactivated successfully. 
Feb 23 04:01:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52006 DF PROTO=TCP SPT=38092 DPT=9100 SEQ=12150323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010AF7830000000001030307) Feb 23 04:01:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11511 DF PROTO=TCP SPT=34554 DPT=9101 SEQ=4095805020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010B05830000000001030307) Feb 23 04:02:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14062 DF PROTO=TCP SPT=47790 DPT=9105 SEQ=1988950126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010B1A730000000001030307) Feb 23 04:02:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14063 DF PROTO=TCP SPT=47790 DPT=9105 SEQ=1988950126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010B1E830000000001030307) Feb 23 04:02:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14064 DF PROTO=TCP SPT=47790 DPT=9105 SEQ=1988950126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010B26830000000001030307) Feb 23 04:02:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14065 DF PROTO=TCP SPT=47790 DPT=9105 SEQ=1988950126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A010B36430000000001030307) Feb 23 04:02:10 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 23 04:02:10 localhost recover_tripleo_nova_virtqemud[110679]: 62457 Feb 23 04:02:10 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 23 04:02:10 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 23 04:02:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2136 DF PROTO=TCP SPT=43070 DPT=9101 SEQ=1404170234 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010B3EDF0000000001030307) Feb 23 04:02:13 localhost sshd[110680]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:02:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48017 DF PROTO=TCP SPT=52464 DPT=9102 SEQ=2042379119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010B49830000000001030307) Feb 23 04:02:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. 
Feb 23 04:02:14 localhost podman[110682]: 2026-02-23 09:02:14.254960075 +0000 UTC m=+0.079467739 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, container_name=ovn_controller, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 04:02:14 localhost podman[110682]: 2026-02-23 09:02:14.278711575 +0000 UTC m=+0.103219169 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1) Feb 23 04:02:14 localhost podman[110682]: unhealthy Feb 23 04:02:14 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:02:14 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 04:02:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 04:02:16 localhost systemd[1]: tmp-crun.G6bCMp.mount: Deactivated successfully. 
Feb 23 04:02:16 localhost podman[110701]: 2026-02-23 09:02:16.502667629 +0000 UTC m=+0.081450180 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 23 04:02:16 localhost podman[110701]: 2026-02-23 09:02:16.516924188 +0000 UTC m=+0.095706769 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., io.buildah.version=1.41.5, tcib_managed=true, build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, release=1766032510, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Feb 23 04:02:16 localhost podman[110701]: unhealthy Feb 23 04:02:16 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:02:16 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 04:02:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14066 DF PROTO=TCP SPT=47790 DPT=9105 SEQ=1988950126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010B57840000000001030307) Feb 23 04:02:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48257 DF PROTO=TCP SPT=49994 DPT=9100 SEQ=487237506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010B651E0000000001030307) Feb 23 04:02:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48259 DF PROTO=TCP SPT=49994 DPT=9100 SEQ=487237506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010B71430000000001030307) Feb 23 04:02:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2140 DF PROTO=TCP SPT=43070 DPT=9101 SEQ=1404170234 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010B7B830000000001030307) Feb 23 04:02:31 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45180 DF PROTO=TCP SPT=46196 DPT=9105 SEQ=2995905106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010B8FA30000000001030307) Feb 23 04:02:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45181 DF PROTO=TCP SPT=46196 DPT=9105 SEQ=2995905106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010B93C30000000001030307) Feb 23 04:02:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45182 DF PROTO=TCP SPT=46196 DPT=9105 SEQ=2995905106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010B9BC30000000001030307) Feb 23 04:02:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45183 DF PROTO=TCP SPT=46196 DPT=9105 SEQ=2995905106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010BAB840000000001030307) Feb 23 04:02:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49812 DF PROTO=TCP SPT=57116 DPT=9101 SEQ=1615761852 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010BB4100000000001030307) Feb 23 04:02:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37383 DF PROTO=TCP SPT=34450 DPT=9102 SEQ=695959403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010BBF840000000001030307) Feb 23 04:02:44 localhost systemd[1]: 
Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 04:02:44 localhost podman[110799]: 2026-02-23 09:02:44.75487379 +0000 UTC m=+0.080104938 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z) Feb 23 04:02:44 localhost podman[110799]: 2026-02-23 09:02:44.767742325 +0000 UTC m=+0.092973483 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., batch=17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64) Feb 23 04:02:44 localhost podman[110799]: unhealthy Feb 23 04:02:44 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:02:44 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 04:02:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 04:02:46 localhost systemd[1]: tmp-crun.zaYb5E.mount: Deactivated successfully. 
Feb 23 04:02:47 localhost podman[110819]: 2026-02-23 09:02:46.999682626 +0000 UTC m=+0.080896573 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, release=1766032510, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, 
konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13) Feb 23 04:02:47 localhost podman[110819]: 2026-02-23 09:02:47.016295816 +0000 UTC m=+0.097509763 container exec_died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, tcib_managed=true, 
distribution-scope=public, vcs-type=git, architecture=x86_64, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 04:02:47 localhost podman[110819]: unhealthy Feb 23 04:02:47 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:02:47 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 04:02:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45184 DF PROTO=TCP SPT=46196 DPT=9105 SEQ=2995905106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010BCB830000000001030307) Feb 23 04:02:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20696 DF PROTO=TCP SPT=55680 DPT=9100 SEQ=465143757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010BDA4F0000000001030307) Feb 23 04:02:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20698 DF PROTO=TCP SPT=55680 DPT=9100 SEQ=465143757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010BE6430000000001030307) Feb 23 04:02:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49816 DF PROTO=TCP SPT=57116 DPT=9101 SEQ=1615761852 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010BEF830000000001030307) Feb 23 04:03:00 localhost sshd[110838]: main: sshd: ssh-rsa algorithm is disabled Feb 23 
04:03:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45098 DF PROTO=TCP SPT=60312 DPT=9105 SEQ=864877561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010C04D30000000001030307) Feb 23 04:03:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45099 DF PROTO=TCP SPT=60312 DPT=9105 SEQ=864877561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010C08C40000000001030307) Feb 23 04:03:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45100 DF PROTO=TCP SPT=60312 DPT=9105 SEQ=864877561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010C10C40000000001030307) Feb 23 04:03:06 localhost sshd[110840]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45101 DF PROTO=TCP SPT=60312 DPT=9105 SEQ=864877561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010C20840000000001030307) Feb 23 04:03:09 localhost sshd[110842]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45203 DF PROTO=TCP SPT=53812 DPT=9101 SEQ=552804930 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010C29400000000001030307) Feb 23 04:03:12 localhost sshd[110844]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c 
MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45205 DF PROTO=TCP SPT=53812 DPT=9101 SEQ=552804930 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010C35430000000001030307) Feb 23 04:03:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 04:03:15 localhost systemd[1]: tmp-crun.kJYBJD.mount: Deactivated successfully. Feb 23 04:03:15 localhost podman[110846]: 2026-02-23 09:03:15.015182152 +0000 UTC m=+0.091957142 container health_status 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.13, container_name=ovn_controller, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 04:03:15 localhost podman[110846]: 2026-02-23 09:03:15.059942505 +0000 UTC m=+0.136717495 container exec_died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-type=git, release=1766032510, container_name=ovn_controller) Feb 23 04:03:15 localhost podman[110846]: unhealthy Feb 23 04:03:15 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:03:15 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed with result 'exit-code'. Feb 23 04:03:15 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing. Feb 23 04:03:15 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 61686 (conmon) with signal SIGKILL. Feb 23 04:03:15 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL Feb 23 04:03:15 localhost systemd[1]: libpod-conmon-c2adc26eff5b9c8a26fccd39d4f074c9536ce1fe2a4c09d9fb2faacfe924a94e.scope: Deactivated successfully. 
Feb 23 04:03:15 localhost podman[110877]: error opening file `/run/crun/c2adc26eff5b9c8a26fccd39d4f074c9536ce1fe2a4c09d9fb2faacfe924a94e/status`: No such file or directory Feb 23 04:03:15 localhost sshd[110878]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:15 localhost podman[110866]: 2026-02-23 09:03:15.752193145 +0000 UTC m=+0.074295460 container cleanup c2adc26eff5b9c8a26fccd39d4f074c9536ce1fe2a4c09d9fb2faacfe924a94e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_virtlogd_wrapper, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2026-01-12T23:31:49Z, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 
['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc.) Feb 23 04:03:15 localhost podman[110866]: nova_virtlogd_wrapper Feb 23 04:03:15 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'. Feb 23 04:03:15 localhost systemd[1]: Stopped nova_virtlogd_wrapper container. 
Feb 23 04:03:16 localhost python3.9[110972]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:03:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40312 DF PROTO=TCP SPT=42692 DPT=9882 SEQ=3657875088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010C3F830000000001030307) Feb 23 04:03:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 04:03:17 localhost podman[110974]: 2026-02-23 09:03:17.280506458 +0000 UTC m=+0.088996252 container health_status 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.buildah.version=1.41.5, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 23 04:03:17 localhost podman[110974]: 2026-02-23 09:03:17.319866275 +0000 UTC m=+0.128356039 container exec_died 
71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Feb 23 04:03:17 localhost podman[110974]: unhealthy Feb 23 04:03:17 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:03:17 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed with result 'exit-code'. Feb 23 04:03:17 localhost sshd[110992]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:17 localhost systemd[1]: Reloading. Feb 23 04:03:17 localhost systemd-rc-local-generator[111022]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:03:17 localhost systemd-sysv-generator[111025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 04:03:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:03:18 localhost systemd[1]: Stopping nova_virtnodedevd container... Feb 23 04:03:18 localhost systemd[1]: libpod-50a7d2ed093fa42d73c6c19f205cf9a2203ca7a7045a875f2001640aa6517496.scope: Deactivated successfully. Feb 23 04:03:18 localhost systemd[1]: libpod-50a7d2ed093fa42d73c6c19f205cf9a2203ca7a7045a875f2001640aa6517496.scope: Consumed 1.335s CPU time. Feb 23 04:03:18 localhost podman[111034]: 2026-02-23 09:03:18.116101755 +0000 UTC m=+0.078401826 container died 50a7d2ed093fa42d73c6c19f205cf9a2203ca7a7045a875f2001640aa6517496 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_virtnodedevd, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, build-date=2026-01-12T23:31:49Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.5) Feb 23 04:03:18 localhost podman[111034]: 2026-02-23 09:03:18.156025201 
+0000 UTC m=+0.118325202 container cleanup 50a7d2ed093fa42d73c6c19f205cf9a2203ca7a7045a875f2001640aa6517496 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.5, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:31:49Z, version=17.1.13, io.openshift.expose-services=, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, container_name=nova_virtnodedevd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 23 04:03:18 localhost podman[111034]: nova_virtnodedevd Feb 23 04:03:18 localhost podman[111047]: 2026-02-23 09:03:18.204522289 +0000 UTC m=+0.072119654 container cleanup 50a7d2ed093fa42d73c6c19f205cf9a2203ca7a7045a875f2001640aa6517496 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, name=rhosp-rhel9/openstack-nova-libvirt, tcib_managed=true, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtnodedevd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:31:49Z, io.openshift.expose-services=, release=1766032510, url=https://www.redhat.com, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step3) Feb 23 04:03:18 localhost systemd[1]: libpod-conmon-50a7d2ed093fa42d73c6c19f205cf9a2203ca7a7045a875f2001640aa6517496.scope: Deactivated successfully. Feb 23 04:03:18 localhost systemd[1]: var-lib-containers-storage-overlay-fd2aa44f25d5aa8bbb54bb02b77fdbdfb05a4397b4efc62b47a02ba2f31ca967-merged.mount: Deactivated successfully. Feb 23 04:03:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-50a7d2ed093fa42d73c6c19f205cf9a2203ca7a7045a875f2001640aa6517496-userdata-shm.mount: Deactivated successfully. 
Feb 23 04:03:18 localhost podman[111074]: error opening file `/run/crun/50a7d2ed093fa42d73c6c19f205cf9a2203ca7a7045a875f2001640aa6517496/status`: No such file or directory Feb 23 04:03:18 localhost podman[111063]: 2026-02-23 09:03:18.304081123 +0000 UTC m=+0.066656865 container cleanup 50a7d2ed093fa42d73c6c19f205cf9a2203ca7a7045a875f2001640aa6517496 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:31:49Z, distribution-scope=public, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, build-date=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, batch=17.1_20260112.1, io.openshift.expose-services=, container_name=nova_virtnodedevd, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible) Feb 23 04:03:18 localhost podman[111063]: nova_virtnodedevd Feb 23 04:03:18 localhost systemd[1]: tmp-crun.RSpp6S.mount: Deactivated successfully. Feb 23 04:03:18 localhost systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully. Feb 23 04:03:18 localhost systemd[1]: Stopped nova_virtnodedevd container. 
Feb 23 04:03:19 localhost python3.9[111168]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:03:19 localhost systemd[1]: Reloading. Feb 23 04:03:19 localhost systemd-sysv-generator[111196]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:03:19 localhost systemd-rc-local-generator[111191]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:03:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:03:19 localhost systemd[1]: Stopping nova_virtproxyd container... Feb 23 04:03:19 localhost systemd[1]: libpod-a7b4b542225a3b83a234f06a3641c8943c916ec8b505305def9df0ea969fba92.scope: Deactivated successfully. 
Feb 23 04:03:19 localhost podman[111208]: 2026-02-23 09:03:19.404764325 +0000 UTC m=+0.053872334 container died a7b4b542225a3b83a234f06a3641c8943c916ec8b505305def9df0ea969fba92 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vcs-type=git, batch=17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtproxyd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc.) 
Feb 23 04:03:19 localhost podman[111208]: 2026-02-23 09:03:19.43492374 +0000 UTC m=+0.084031739 container cleanup a7b4b542225a3b83a234f06a3641c8943c916ec8b505305def9df0ea969fba92 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, release=1766032510, architecture=x86_64, build-date=2026-01-12T23:31:49Z, config_id=tripleo_step3, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, version=17.1.13, managed_by=tripleo_ansible, container_name=nova_virtproxyd, org.opencontainers.image.created=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 23 04:03:19 localhost podman[111208]: nova_virtproxyd Feb 23 04:03:19 localhost podman[111223]: 2026-02-23 09:03:19.503219615 +0000 UTC m=+0.093294883 container cleanup a7b4b542225a3b83a234f06a3641c8943c916ec8b505305def9df0ea969fba92 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, version=17.1.13, build-date=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, 
io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, config_id=tripleo_step3, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:31:49Z, url=https://www.redhat.com, container_name=nova_virtproxyd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20260112.1) Feb 23 04:03:19 localhost systemd[1]: libpod-conmon-a7b4b542225a3b83a234f06a3641c8943c916ec8b505305def9df0ea969fba92.scope: Deactivated successfully. 
Feb 23 04:03:19 localhost podman[111252]: error opening file `/run/crun/a7b4b542225a3b83a234f06a3641c8943c916ec8b505305def9df0ea969fba92/status`: No such file or directory Feb 23 04:03:19 localhost podman[111240]: 2026-02-23 09:03:19.579366942 +0000 UTC m=+0.046178648 container cleanup a7b4b542225a3b83a234f06a3641c8943c916ec8b505305def9df0ea969fba92 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', 
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.13, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vcs-type=git, distribution-scope=public, container_name=nova_virtproxyd, org.opencontainers.image.created=2026-01-12T23:31:49Z, url=https://www.redhat.com, architecture=x86_64, name=rhosp-rhel9/openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:31:49Z) Feb 23 04:03:19 localhost podman[111240]: nova_virtproxyd Feb 23 04:03:19 localhost systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully. Feb 23 04:03:19 localhost systemd[1]: Stopped nova_virtproxyd container. 
Feb 23 04:03:19 localhost sshd[111328]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:20 localhost python3.9[111348]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:03:20 localhost systemd[1]: var-lib-containers-storage-overlay-a6f754c62077bf37caa8ac647e3f2dd870b797112a74bfb7c91c34f0be7af204-merged.mount: Deactivated successfully. Feb 23 04:03:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7b4b542225a3b83a234f06a3641c8943c916ec8b505305def9df0ea969fba92-userdata-shm.mount: Deactivated successfully. Feb 23 04:03:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43995 DF PROTO=TCP SPT=56488 DPT=9100 SEQ=434696051 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010C4F7E0000000001030307) Feb 23 04:03:21 localhost systemd[1]: Reloading. Feb 23 04:03:21 localhost systemd-rc-local-generator[111374]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:03:21 localhost systemd-sysv-generator[111378]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:03:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:03:21 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Feb 23 04:03:21 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Main process exited, code=killed, status=15/TERM Feb 23 04:03:21 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Failed with result 'signal'. Feb 23 04:03:21 localhost systemd[1]: Stopped Check and recover tripleo_nova_virtqemud. Feb 23 04:03:21 localhost systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully. Feb 23 04:03:21 localhost systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m. Feb 23 04:03:21 localhost systemd[1]: Stopping nova_virtqemud container... Feb 23 04:03:21 localhost systemd[1]: libpod-d6722d98893ec366bf5c47c8cc52a74ae4caee33e3af6210be5df446077f6b04.scope: Deactivated successfully. Feb 23 04:03:21 localhost systemd[1]: libpod-d6722d98893ec366bf5c47c8cc52a74ae4caee33e3af6210be5df446077f6b04.scope: Consumed 1.955s CPU time. Feb 23 04:03:21 localhost podman[111390]: 2026-02-23 09:03:21.752641313 +0000 UTC m=+0.071658579 container died d6722d98893ec366bf5c47c8cc52a74ae4caee33e3af6210be5df446077f6b04 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, vcs-type=git, container_name=nova_virtqemud, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, architecture=x86_64, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.expose-services=, url=https://www.redhat.com, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'depends_on': 
['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public) Feb 23 04:03:21 localhost systemd[1]: tmp-crun.EjgEFT.mount: Deactivated successfully. Feb 23 04:03:21 localhost podman[111390]: 2026-02-23 09:03:21.792226188 +0000 UTC m=+0.111243444 container cleanup d6722d98893ec366bf5c47c8cc52a74ae4caee33e3af6210be5df446077f6b04 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_virtqemud, build-date=2026-01-12T23:31:49Z, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, version=17.1.13, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, name=rhosp-rhel9/openstack-nova-libvirt, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 04:03:21 localhost podman[111390]: nova_virtqemud Feb 23 04:03:21 localhost podman[111404]: 2026-02-23 09:03:21.82879623 +0000 UTC m=+0.062822669 container cleanup 
d6722d98893ec366bf5c47c8cc52a74ae4caee33e3af6210be5df446077f6b04 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, container_name=nova_virtqemud, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, build-date=2026-01-12T23:31:49Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, config_id=tripleo_step3, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=) Feb 23 04:03:21 localhost systemd[1]: libpod-conmon-d6722d98893ec366bf5c47c8cc52a74ae4caee33e3af6210be5df446077f6b04.scope: Deactivated successfully. 
Feb 23 04:03:21 localhost podman[111432]: error opening file `/run/crun/d6722d98893ec366bf5c47c8cc52a74ae4caee33e3af6210be5df446077f6b04/status`: No such file or directory Feb 23 04:03:21 localhost podman[111419]: 2026-02-23 09:03:21.923229077 +0000 UTC m=+0.060558489 container cleanup d6722d98893ec366bf5c47c8cc52a74ae4caee33e3af6210be5df446077f6b04 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, config_id=tripleo_step3, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_virtqemud, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, build-date=2026-01-12T23:31:49Z, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, org.opencontainers.image.created=2026-01-12T23:31:49Z, version=17.1.13, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt) Feb 23 04:03:21 localhost podman[111419]: nova_virtqemud Feb 23 04:03:21 localhost systemd[1]: tripleo_nova_virtqemud.service: Deactivated successfully. Feb 23 04:03:21 localhost systemd[1]: Stopped nova_virtqemud container. 
Feb 23 04:03:22 localhost sshd[111526]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:22 localhost python3.9[111525]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:03:22 localhost systemd[1]: var-lib-containers-storage-overlay-ad754dc0ccbca5f94297c9f7b89ba922c9a3f604bb7e57f1a8f4658470b198ae-merged.mount: Deactivated successfully. Feb 23 04:03:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d6722d98893ec366bf5c47c8cc52a74ae4caee33e3af6210be5df446077f6b04-userdata-shm.mount: Deactivated successfully. Feb 23 04:03:22 localhost systemd[1]: Reloading. Feb 23 04:03:22 localhost systemd-rc-local-generator[111556]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:03:22 localhost systemd-sysv-generator[111560]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:03:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:03:23 localhost python3.9[111657]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:03:23 localhost systemd[1]: Reloading. Feb 23 04:03:23 localhost systemd-sysv-generator[111684]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 04:03:23 localhost systemd-rc-local-generator[111681]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:03:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:03:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43997 DF PROTO=TCP SPT=56488 DPT=9100 SEQ=434696051 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010C5B830000000001030307) Feb 23 04:03:24 localhost systemd[1]: Stopping nova_virtsecretd container... Feb 23 04:03:24 localhost systemd[1]: libpod-324f8df060000db9c956f3e95ef75d74566bd4eb3c7012f06aca4d0fa9131e26.scope: Deactivated successfully. Feb 23 04:03:24 localhost podman[111697]: 2026-02-23 09:03:24.293824542 +0000 UTC m=+0.080159340 container died 324f8df060000db9c956f3e95ef75d74566bd4eb3c7012f06aca4d0fa9131e26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.buildah.version=1.41.5, batch=17.1_20260112.1, version=17.1.13, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-libvirt, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, container_name=nova_virtsecretd, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z) Feb 23 04:03:24 localhost podman[111697]: 2026-02-23 09:03:24.330202818 +0000 UTC m=+0.116537566 container cleanup 324f8df060000db9c956f3e95ef75d74566bd4eb3c7012f06aca4d0fa9131e26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_virtsecretd, org.opencontainers.image.created=2026-01-12T23:31:49Z, build-date=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 
'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1) Feb 23 04:03:24 localhost podman[111697]: nova_virtsecretd Feb 23 04:03:24 localhost podman[111711]: 2026-02-23 09:03:24.377350974 +0000 UTC m=+0.070475313 container cleanup 324f8df060000db9c956f3e95ef75d74566bd4eb3c7012f06aca4d0fa9131e26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, version=17.1.13, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, release=1766032510, vcs-type=git, architecture=x86_64, build-date=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtsecretd, url=https://www.redhat.com) Feb 23 04:03:24 localhost systemd[1]: libpod-conmon-324f8df060000db9c956f3e95ef75d74566bd4eb3c7012f06aca4d0fa9131e26.scope: Deactivated successfully. 
Feb 23 04:03:24 localhost podman[111739]: error opening file `/run/crun/324f8df060000db9c956f3e95ef75d74566bd4eb3c7012f06aca4d0fa9131e26/status`: No such file or directory Feb 23 04:03:24 localhost podman[111726]: 2026-02-23 09:03:24.45055695 +0000 UTC m=+0.050140348 container cleanup 324f8df060000db9c956f3e95ef75d74566bd4eb3c7012f06aca4d0fa9131e26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, vcs-type=git, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, container_name=nova_virtsecretd, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 23 04:03:24 localhost podman[111726]: nova_virtsecretd Feb 23 04:03:24 localhost systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully. Feb 23 04:03:24 localhost systemd[1]: Stopped nova_virtsecretd container. 
Feb 23 04:03:25 localhost python3.9[111832]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:03:25 localhost systemd[1]: Reloading. Feb 23 04:03:25 localhost systemd-rc-local-generator[111857]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:03:25 localhost systemd-sysv-generator[111864]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:03:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:03:25 localhost sshd[111870]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-324f8df060000db9c956f3e95ef75d74566bd4eb3c7012f06aca4d0fa9131e26-userdata-shm.mount: Deactivated successfully. Feb 23 04:03:25 localhost systemd[1]: var-lib-containers-storage-overlay-4fd7ed36ffe690aa6d37e7d89968f7ddc4fb4fc144d90fb2ee3732e0e8b1e009-merged.mount: Deactivated successfully. Feb 23 04:03:25 localhost systemd[1]: Stopping nova_virtstoraged container... Feb 23 04:03:25 localhost systemd[1]: libpod-2c504d86220256a9811f4c623f80bcbefa1d44d0424e1cd65152e2dcc898c4cc.scope: Deactivated successfully. 
Feb 23 04:03:25 localhost podman[111874]: 2026-02-23 09:03:25.606334312 +0000 UTC m=+0.059496646 container died 2c504d86220256a9811f4c623f80bcbefa1d44d0424e1cd65152e2dcc898c4cc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_virtstoraged, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, config_id=tripleo_step3, io.openshift.expose-services=, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-01-12T23:31:49Z) Feb 23 04:03:25 localhost podman[111874]: 2026-02-23 09:03:25.641302606 +0000 UTC m=+0.094464940 container cleanup 2c504d86220256a9811f4c623f80bcbefa1d44d0424e1cd65152e2dcc898c4cc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1766032510, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, 
cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:31:49Z, distribution-scope=public, name=rhosp-rhel9/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-libvirt-container, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 23 04:03:25 localhost podman[111874]: nova_virtstoraged Feb 23 04:03:25 localhost podman[111889]: 2026-02-23 09:03:25.6874222 +0000 UTC m=+0.064299134 container cleanup 2c504d86220256a9811f4c623f80bcbefa1d44d0424e1cd65152e2dcc898c4cc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, release=1766032510, url=https://www.redhat.com, vcs-type=git, build-date=2026-01-12T23:31:49Z, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, container_name=nova_virtstoraged, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step3, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, 
managed_by=tripleo_ansible, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, io.buildah.version=1.41.5) Feb 23 04:03:25 localhost systemd[1]: libpod-conmon-2c504d86220256a9811f4c623f80bcbefa1d44d0424e1cd65152e2dcc898c4cc.scope: Deactivated successfully. Feb 23 04:03:25 localhost podman[111919]: error opening file `/run/crun/2c504d86220256a9811f4c623f80bcbefa1d44d0424e1cd65152e2dcc898c4cc/status`: No such file or directory Feb 23 04:03:25 localhost podman[111907]: 2026-02-23 09:03:25.768681204 +0000 UTC m=+0.047101567 container cleanup 2c504d86220256a9811f4c623f80bcbefa1d44d0424e1cd65152e2dcc898c4cc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, build-date=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, container_name=nova_virtstoraged, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8e86b11aed37635c57249fefb951044'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, config_id=tripleo_step3, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.openshift.expose-services=, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 23 04:03:25 localhost podman[111907]: nova_virtstoraged Feb 23 04:03:25 localhost systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully. Feb 23 04:03:25 localhost systemd[1]: Stopped nova_virtstoraged container. Feb 23 04:03:26 localhost systemd[1]: var-lib-containers-storage-overlay-d3971ccf5a3d6277b5f480b29d3cf6f87cd8292b24c0c7a3df165e34155d06d3-merged.mount: Deactivated successfully. Feb 23 04:03:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c504d86220256a9811f4c623f80bcbefa1d44d0424e1cd65152e2dcc898c4cc-userdata-shm.mount: Deactivated successfully. Feb 23 04:03:26 localhost python3.9[112012]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:03:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45207 DF PROTO=TCP SPT=53812 DPT=9101 SEQ=552804930 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010C65830000000001030307) Feb 23 04:03:28 localhost sshd[112015]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:28 localhost systemd[1]: Reloading. Feb 23 04:03:28 localhost systemd-rc-local-generator[112039]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:03:28 localhost systemd-sysv-generator[112044]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:03:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:03:28 localhost systemd[1]: Stopping ovn_controller container... Feb 23 04:03:29 localhost systemd[1]: libpod-393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.scope: Deactivated successfully. Feb 23 04:03:29 localhost systemd[1]: libpod-393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.scope: Consumed 2.410s CPU time. Feb 23 04:03:29 localhost podman[112055]: 2026-02-23 09:03:29.042775822 +0000 UTC m=+0.072633100 container died 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, build-date=2026-01-12T22:36:40Z) Feb 23 04:03:29 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.timer: Deactivated successfully. Feb 23 04:03:29 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2. Feb 23 04:03:29 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed to open /run/systemd/transient/393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: No such file or directory Feb 23 04:03:29 localhost systemd[1]: tmp-crun.c1NOcz.mount: Deactivated successfully. Feb 23 04:03:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2-userdata-shm.mount: Deactivated successfully. 
Feb 23 04:03:29 localhost podman[112055]: 2026-02-23 09:03:29.153139088 +0000 UTC m=+0.182996336 container cleanup 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 04:03:29 localhost podman[112055]: ovn_controller Feb 23 04:03:29 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.timer: Failed to open /run/systemd/transient/393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.timer: No such file or directory Feb 23 04:03:29 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed to open /run/systemd/transient/393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: No such file or directory Feb 23 04:03:29 localhost podman[112069]: 2026-02-23 09:03:29.165983452 +0000 UTC m=+0.116165695 container cleanup 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2026-01-12T22:36:40Z) Feb 23 04:03:29 localhost systemd[1]: libpod-conmon-393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.scope: Deactivated successfully. 
Feb 23 04:03:29 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.timer: Failed to open /run/systemd/transient/393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.timer: No such file or directory Feb 23 04:03:29 localhost systemd[1]: 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: Failed to open /run/systemd/transient/393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2.service: No such file or directory Feb 23 04:03:29 localhost podman[112083]: 2026-02-23 09:03:29.262982288 +0000 UTC m=+0.069126861 container cleanup 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, container_name=ovn_controller, release=1766032510, version=17.1.13, vcs-type=git, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 23 04:03:29 localhost podman[112083]: ovn_controller Feb 23 04:03:29 localhost systemd[1]: tripleo_ovn_controller.service: Deactivated successfully. Feb 23 04:03:29 localhost systemd[1]: Stopped ovn_controller container. Feb 23 04:03:29 localhost python3.9[112217]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:03:30 localhost systemd[1]: var-lib-containers-storage-overlay-f13f697d185a99a8a222763d80bff47873cb03d19648b0bcc96a8114f853f9bd-merged.mount: Deactivated successfully. Feb 23 04:03:30 localhost systemd[1]: Reloading. Feb 23 04:03:30 localhost systemd-sysv-generator[112320]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:03:30 localhost systemd-rc-local-generator[112316]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:03:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 23 04:03:30 localhost podman[112326]: 2026-02-23 09:03:30.235563151 +0000 UTC m=+0.068847204 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, io.openshift.expose-services=, distribution-scope=public, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, ceph=True, name=rhceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 23 04:03:30 localhost podman[112326]: 2026-02-23 09:03:30.312499881 +0000 UTC m=+0.145783944 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
release=1770267347, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 23 04:03:30 localhost systemd[1]: Stopping ovn_metadata_agent container... Feb 23 04:03:30 localhost systemd[1]: libpod-71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.scope: Deactivated successfully. Feb 23 04:03:30 localhost systemd[1]: libpod-71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.scope: Consumed 8.937s CPU time. 
Feb 23 04:03:30 localhost podman[112347]: 2026-02-23 09:03:30.979033532 +0000 UTC m=+0.648104457 container died 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, version=17.1.13, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 23 04:03:30 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.timer: Deactivated successfully. Feb 23 04:03:30 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879. Feb 23 04:03:31 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed to open /run/systemd/transient/71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: No such file or directory Feb 23 04:03:31 localhost systemd[1]: var-lib-containers-storage-overlay-add23e82cdf75351add1828e63e379f4d01e1b7178f4868cd8802b773b6e9f43-merged.mount: Deactivated successfully. Feb 23 04:03:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879-userdata-shm.mount: Deactivated successfully. 
Feb 23 04:03:31 localhost podman[112347]: 2026-02-23 09:03:31.038812446 +0000 UTC m=+0.707883371 container cleanup 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13) Feb 23 04:03:31 localhost podman[112347]: ovn_metadata_agent Feb 23 04:03:31 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.timer: Failed to open /run/systemd/transient/71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.timer: No such file or directory Feb 23 04:03:31 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed to open /run/systemd/transient/71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: No such file or directory Feb 23 04:03:31 localhost podman[112427]: 2026-02-23 09:03:31.071238251 +0000 UTC m=+0.081734058 container cleanup 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, release=1766032510, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Feb 23 04:03:31 localhost sshd[112454]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:31 localhost systemd[1]: libpod-conmon-71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.scope: Deactivated successfully. Feb 23 04:03:31 localhost podman[112468]: error opening file `/run/crun/71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879/status`: No such file or directory Feb 23 04:03:31 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.timer: Failed to open /run/systemd/transient/71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.timer: No such file or directory Feb 23 04:03:31 localhost systemd[1]: 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: Failed to open /run/systemd/transient/71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879.service: No such file or directory Feb 23 04:03:31 localhost podman[112455]: 2026-02-23 09:03:31.179013807 +0000 UTC m=+0.076955431 container cleanup 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1766032510, vendor=Red Hat, Inc., batch=17.1_20260112.1, managed_by=tripleo_ansible, 
architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public) Feb 23 04:03:31 localhost podman[112455]: ovn_metadata_agent Feb 23 04:03:31 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Deactivated successfully. Feb 23 04:03:31 localhost systemd[1]: Stopped ovn_metadata_agent container. Feb 23 04:03:31 localhost python3.9[112581]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:03:31 localhost systemd[1]: Reloading. Feb 23 04:03:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33407 DF PROTO=TCP SPT=60144 DPT=9105 SEQ=3427276615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010C7A040000000001030307) Feb 23 04:03:32 localhost systemd-rc-local-generator[112622]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:03:32 localhost systemd-sysv-generator[112626]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 04:03:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:03:32 localhost systemd[1]: tmp-crun.yxy8wx.mount: Deactivated successfully. Feb 23 04:03:32 localhost sshd[112646]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33408 DF PROTO=TCP SPT=60144 DPT=9105 SEQ=3427276615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010C7E030000000001030307) Feb 23 04:03:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33409 DF PROTO=TCP SPT=60144 DPT=9105 SEQ=3427276615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010C86030000000001030307) Feb 23 04:03:35 localhost sshd[112663]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:38 localhost sshd[112665]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33410 DF PROTO=TCP SPT=60144 DPT=9105 SEQ=3427276615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010C95C30000000001030307) Feb 23 04:03:40 localhost sshd[112667]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39555 DF PROTO=TCP SPT=52070 DPT=9101 SEQ=174507533 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010C9E700000000001030307) Feb 23 04:03:43 localhost sshd[112669]: 
main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15896 DF PROTO=TCP SPT=54890 DPT=9102 SEQ=4198464806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010CA9830000000001030307) Feb 23 04:03:46 localhost sshd[112671]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33411 DF PROTO=TCP SPT=60144 DPT=9105 SEQ=3427276615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010CB5830000000001030307) Feb 23 04:03:47 localhost sshd[112673]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:48 localhost sshd[112675]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27974 DF PROTO=TCP SPT=56432 DPT=9100 SEQ=862516652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010CC4AE0000000001030307) Feb 23 04:03:51 localhost sshd[112677]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27976 DF PROTO=TCP SPT=56432 DPT=9100 SEQ=862516652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010CD0C30000000001030307) Feb 23 04:03:54 localhost sshd[112679]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:56 localhost sshd[112681]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:03:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39559 DF PROTO=TCP SPT=52070 DPT=9101 SEQ=174507533 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010CDB830000000001030307) Feb 23 04:03:59 localhost sshd[112683]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:04:00 localhost sshd[112685]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:04:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33619 DF PROTO=TCP SPT=46170 DPT=9105 SEQ=484311425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010CEF340000000001030307) Feb 23 04:04:02 localhost sshd[112687]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:04:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33620 DF PROTO=TCP SPT=46170 DPT=9105 SEQ=484311425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010CF3440000000001030307) Feb 23 04:04:04 localhost sshd[112689]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:04:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33621 DF PROTO=TCP SPT=46170 DPT=9105 SEQ=484311425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010CFB440000000001030307) Feb 23 04:04:07 localhost sshd[112691]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:04:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33622 DF PROTO=TCP SPT=46170 DPT=9105 SEQ=484311425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010D0B030000000001030307) Feb 23 04:04:09 localhost 
sshd[112693]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:04:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56338 DF PROTO=TCP SPT=36968 DPT=9101 SEQ=570798279 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010D13A00000000001030307) Feb 23 04:04:11 localhost sshd[112695]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:04:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39495 DF PROTO=TCP SPT=44758 DPT=9882 SEQ=1944816451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010D1F830000000001030307) Feb 23 04:04:14 localhost sshd[112697]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:04:17 localhost sshd[112699]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:04:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33623 DF PROTO=TCP SPT=46170 DPT=9105 SEQ=484311425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010D2B830000000001030307) Feb 23 04:04:19 localhost sshd[112701]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:04:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53103 DF PROTO=TCP SPT=56112 DPT=9100 SEQ=4168170180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010D39DF0000000001030307) Feb 23 04:04:22 localhost sshd[112703]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:04:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 
TTL=62 ID=27979 DF PROTO=TCP SPT=56432 DPT=9100 SEQ=862516652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010D41830000000001030307) Feb 23 04:04:24 localhost sshd[112705]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:04:26 localhost sshd[112707]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:04:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56342 DF PROTO=TCP SPT=36968 DPT=9101 SEQ=570798279 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010D4F830000000001030307) Feb 23 04:04:27 localhost sshd[112709]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:04:29 localhost sshd[112711]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:04:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17925 DF PROTO=TCP SPT=55270 DPT=9105 SEQ=1000052688 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010D64640000000001030307) Feb 23 04:04:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17926 DF PROTO=TCP SPT=55270 DPT=9105 SEQ=1000052688 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010D68840000000001030307) Feb 23 04:04:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17927 DF PROTO=TCP SPT=55270 DPT=9105 SEQ=1000052688 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010D70830000000001030307) Feb 23 04:04:35 localhost sshd[112774]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:04:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c 
MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17928 DF PROTO=TCP SPT=55270 DPT=9105 SEQ=1000052688 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010D80430000000001030307) Feb 23 04:04:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45960 DF PROTO=TCP SPT=43526 DPT=9101 SEQ=3794359530 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010D88D00000000001030307) Feb 23 04:04:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2216 DF PROTO=TCP SPT=38956 DPT=9102 SEQ=139175737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010D93830000000001030307) Feb 23 04:04:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17929 DF PROTO=TCP SPT=55270 DPT=9105 SEQ=1000052688 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010DA1830000000001030307) Feb 23 04:04:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54414 DF PROTO=TCP SPT=50090 DPT=9100 SEQ=1032606909 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010DAF0E0000000001030307) Feb 23 04:04:52 localhost systemd[1]: session-36.scope: Deactivated successfully. Feb 23 04:04:52 localhost systemd[1]: session-36.scope: Consumed 18.134s CPU time. Feb 23 04:04:52 localhost systemd-logind[759]: Session 36 logged out. Waiting for processes to exit. Feb 23 04:04:52 localhost systemd-logind[759]: Removed session 36. 
Feb 23 04:04:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54416 DF PROTO=TCP SPT=50090 DPT=9100 SEQ=1032606909 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010DBB030000000001030307)
Feb 23 04:04:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45964 DF PROTO=TCP SPT=43526 DPT=9101 SEQ=3794359530 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010DC5830000000001030307)
Feb 23 04:05:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64008 DF PROTO=TCP SPT=54868 DPT=9105 SEQ=717041702 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010DD9930000000001030307)
Feb 23 04:05:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64009 DF PROTO=TCP SPT=54868 DPT=9105 SEQ=717041702 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010DDD840000000001030307)
Feb 23 04:05:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64010 DF PROTO=TCP SPT=54868 DPT=9105 SEQ=717041702 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010DE5830000000001030307)
Feb 23 04:05:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64011 DF PROTO=TCP SPT=54868 DPT=9105 SEQ=717041702 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010DF5430000000001030307)
Feb 23 04:05:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24380 DF PROTO=TCP SPT=51970 DPT=9101 SEQ=90091714 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010DFDFF0000000001030307)
Feb 23 04:05:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20781 DF PROTO=TCP SPT=60140 DPT=9102 SEQ=3881316699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010E09840000000001030307)
Feb 23 04:05:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64012 DF PROTO=TCP SPT=54868 DPT=9105 SEQ=717041702 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010E15830000000001030307)
Feb 23 04:05:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52370 DF PROTO=TCP SPT=58808 DPT=9100 SEQ=1892083000 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010E243F0000000001030307)
Feb 23 04:05:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52372 DF PROTO=TCP SPT=58808 DPT=9100 SEQ=1892083000 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010E30440000000001030307)
Feb 23 04:05:25 localhost sshd[112792]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:05:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24384 DF PROTO=TCP SPT=51970 DPT=9101 SEQ=90091714 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010E39830000000001030307)
Feb 23 04:05:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4671 DF PROTO=TCP SPT=41934 DPT=9105 SEQ=2105395543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010E4EC40000000001030307)
Feb 23 04:05:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4672 DF PROTO=TCP SPT=41934 DPT=9105 SEQ=2105395543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010E52C40000000001030307)
Feb 23 04:05:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4673 DF PROTO=TCP SPT=41934 DPT=9105 SEQ=2105395543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010E5AC40000000001030307)
Feb 23 04:05:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4674 DF PROTO=TCP SPT=41934 DPT=9105 SEQ=2105395543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010E6A830000000001030307)
Feb 23 04:05:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54276 DF PROTO=TCP SPT=60648 DPT=9101 SEQ=474499783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010E73300000000001030307)
Feb 23 04:05:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54278 DF PROTO=TCP SPT=60648 DPT=9101 SEQ=474499783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010E7F430000000001030307)
Feb 23 04:05:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4675 DF PROTO=TCP SPT=41934 DPT=9105 SEQ=2105395543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010E8B840000000001030307)
Feb 23 04:05:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12870 DF PROTO=TCP SPT=34966 DPT=9100 SEQ=3996633044 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010E996F0000000001030307)
Feb 23 04:05:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12872 DF PROTO=TCP SPT=34966 DPT=9100 SEQ=3996633044 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010EA5830000000001030307)
Feb 23 04:05:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54280 DF PROTO=TCP SPT=60648 DPT=9101 SEQ=474499783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010EAF830000000001030307)
Feb 23 04:06:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57796 DF PROTO=TCP SPT=60114 DPT=9105 SEQ=1531911925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010EC3F30000000001030307)
Feb 23 04:06:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57797 DF PROTO=TCP SPT=60114 DPT=9105 SEQ=1531911925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010EC8030000000001030307)
Feb 23 04:06:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57798 DF PROTO=TCP SPT=60114 DPT=9105 SEQ=1531911925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010ED0030000000001030307)
Feb 23 04:06:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57799 DF PROTO=TCP SPT=60114 DPT=9105 SEQ=1531911925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010EDFC30000000001030307)
Feb 23 04:06:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38581 DF PROTO=TCP SPT=46844 DPT=9101 SEQ=2129745096 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010EE8600000000001030307)
Feb 23 04:06:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46208 DF PROTO=TCP SPT=39742 DPT=9102 SEQ=227115602 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010EF3830000000001030307)
Feb 23 04:06:14 localhost sshd[112871]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:06:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57800 DF PROTO=TCP SPT=60114 DPT=9105 SEQ=1531911925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010EFF830000000001030307)
Feb 23 04:06:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10150 DF PROTO=TCP SPT=49678 DPT=9100 SEQ=3757016576 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010F0E9F0000000001030307)
Feb 23 04:06:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10152 DF PROTO=TCP SPT=49678 DPT=9100 SEQ=3757016576 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010F1AC30000000001030307)
Feb 23 04:06:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38585 DF PROTO=TCP SPT=46844 DPT=9101 SEQ=2129745096 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010F25830000000001030307)
Feb 23 04:06:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12891 DF PROTO=TCP SPT=33342 DPT=9105 SEQ=2824952223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010F39230000000001030307)
Feb 23 04:06:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12892 DF PROTO=TCP SPT=33342 DPT=9105 SEQ=2824952223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010F3D440000000001030307)
Feb 23 04:06:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12893 DF PROTO=TCP SPT=33342 DPT=9105 SEQ=2824952223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010F45430000000001030307)
Feb 23 04:06:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12894 DF PROTO=TCP SPT=33342 DPT=9105 SEQ=2824952223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010F55040000000001030307)
Feb 23 04:06:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53990 DF PROTO=TCP SPT=55610 DPT=9101 SEQ=2172724650 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010F5D900000000001030307)
Feb 23 04:06:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58892 DF PROTO=TCP SPT=60684 DPT=9882 SEQ=2904429411 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010F69830000000001030307)
Feb 23 04:06:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12895 DF PROTO=TCP SPT=33342 DPT=9105 SEQ=2824952223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010F75830000000001030307)
Feb 23 04:06:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55399 DF PROTO=TCP SPT=38874 DPT=9100 SEQ=3451038720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010F83CE0000000001030307)
Feb 23 04:06:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10155 DF PROTO=TCP SPT=49678 DPT=9100 SEQ=3757016576 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010F8B830000000001030307)
Feb 23 04:06:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53994 DF PROTO=TCP SPT=55610 DPT=9101 SEQ=2172724650 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010F99830000000001030307)
Feb 23 04:07:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31964 DF PROTO=TCP SPT=36610 DPT=9105 SEQ=1202179587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010FAE530000000001030307)
Feb 23 04:07:02 localhost sshd[112950]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:07:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31965 DF PROTO=TCP SPT=36610 DPT=9105 SEQ=1202179587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010FB2430000000001030307)
Feb 23 04:07:03 localhost sshd[112952]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:07:03 localhost systemd-logind[759]: New session 37 of user zuul.
Feb 23 04:07:03 localhost systemd[1]: Started Session 37 of User zuul.
Feb 23 04:07:03 localhost python3.9[113033]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:04 localhost python3.9[113125]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:04 localhost python3.9[113217]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31966 DF PROTO=TCP SPT=36610 DPT=9105 SEQ=1202179587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010FBA430000000001030307)
Feb 23 04:07:05 localhost python3.9[113309]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:05 localhost python3.9[113401]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:06 localhost python3.9[113493]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:07 localhost python3.9[113585]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:07 localhost python3.9[113677]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:08 localhost python3.9[113769]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:08 localhost python3.9[113861]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31967 DF PROTO=TCP SPT=36610 DPT=9105 SEQ=1202179587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010FCA030000000001030307)
Feb 23 04:07:09 localhost python3.9[113953]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:09 localhost python3.9[114045]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:10 localhost python3.9[114137]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:11 localhost python3.9[114229]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58478 DF PROTO=TCP SPT=55036 DPT=9101 SEQ=470604721 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010FD2C00000000001030307)
Feb 23 04:07:11 localhost python3.9[114321]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:12 localhost python3.9[114413]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:12 localhost python3.9[114505]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:13 localhost python3.9[114597]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52319 DF PROTO=TCP SPT=37846 DPT=9102 SEQ=2230983620 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010FDD830000000001030307)
Feb 23 04:07:14 localhost sshd[114690]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:07:14 localhost python3.9[114689]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:14 localhost python3.9[114783]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:15 localhost python3.9[114875]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:16 localhost python3.9[114967]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:16 localhost python3.9[115059]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31968 DF PROTO=TCP SPT=36610 DPT=9105 SEQ=1202179587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010FE9830000000001030307)
Feb 23 04:07:17 localhost python3.9[115151]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:18 localhost python3.9[115243]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:18 localhost python3.9[115335]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:19 localhost python3.9[115427]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:20 localhost python3.9[115519]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:20 localhost python3.9[115611]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21410 DF PROTO=TCP SPT=49338 DPT=9100 SEQ=3830866818 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A010FF8FF0000000001030307)
Feb 23 04:07:21 localhost python3.9[115703]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:21 localhost python3.9[115795]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:22 localhost python3.9[115887]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:22 localhost python3.9[115979]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:23 localhost python3.9[116071]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21412 DF PROTO=TCP SPT=49338 DPT=9100 SEQ=3830866818 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011005030000000001030307)
Feb 23 04:07:24 localhost python3.9[116163]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:25 localhost python3.9[116255]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:25 localhost python3.9[116347]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:26 localhost python3.9[116439]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:26 localhost python3.9[116531]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58482 DF PROTO=TCP SPT=55036 DPT=9101 SEQ=470604721 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01100F830000000001030307)
Feb 23 04:07:27 localhost python3.9[116623]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:07:28 localhost
python3.9[116715]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:07:28 localhost python3.9[116807]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:07:30 localhost python3.9[116899]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:07:31 localhost python3.9[116991]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Feb 23 04:07:31 localhost python3.9[117083]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None 
Feb 23 04:07:31 localhost systemd[1]: Reloading.
Feb 23 04:07:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28902 DF PROTO=TCP SPT=56570 DPT=9105 SEQ=63153847 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011023830000000001030307)
Feb 23 04:07:32 localhost systemd-sysv-generator[117115]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 04:07:32 localhost systemd-rc-local-generator[117111]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 04:07:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 04:07:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28903 DF PROTO=TCP SPT=56570 DPT=9105 SEQ=63153847 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011027840000000001030307)
Feb 23 04:07:33 localhost python3.9[117211]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:33 localhost python3.9[117304]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:34 localhost python3.9[117397]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:34 localhost python3.9[117490]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28904 DF PROTO=TCP SPT=56570 DPT=9105 SEQ=63153847 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01102F830000000001030307)
Feb 23 04:07:36 localhost python3.9[117583]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:36 localhost python3.9[117676]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:37 localhost python3.9[117769]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:37 localhost python3.9[117862]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:38 localhost python3.9[117955]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:38 localhost sshd[117971]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:07:38 localhost python3.9[118050]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28905 DF PROTO=TCP SPT=56570 DPT=9105 SEQ=63153847 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01103F430000000001030307)
Feb 23 04:07:39 localhost python3.9[118143]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:40 localhost python3.9[118281]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:40 localhost python3.9[118391]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:41 localhost python3.9[118484]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4963 DF PROTO=TCP SPT=40852 DPT=9101 SEQ=2704949024 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011047F00000000001030307)
Feb 23 04:07:41 localhost python3.9[118577]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:42 localhost python3.9[118670]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:42 localhost python3.9[118763]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:43 localhost python3.9[118856]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:43 localhost python3.9[118949]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38619 DF PROTO=TCP SPT=41084 DPT=9102 SEQ=351067401 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011053830000000001030307)
Feb 23 04:07:44 localhost python3.9[119057]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:45 localhost python3.9[119150]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:07:46 localhost systemd[1]: session-37.scope: Deactivated successfully.
Feb 23 04:07:46 localhost systemd[1]: session-37.scope: Consumed 29.592s CPU time.
Feb 23 04:07:46 localhost systemd-logind[759]: Session 37 logged out. Waiting for processes to exit.
Feb 23 04:07:46 localhost systemd-logind[759]: Removed session 37.
Feb 23 04:07:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28906 DF PROTO=TCP SPT=56570 DPT=9105 SEQ=63153847 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01105F830000000001030307)
Feb 23 04:07:49 localhost sshd[119166]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:07:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19808 DF PROTO=TCP SPT=44396 DPT=9100 SEQ=1796257586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01106E2F0000000001030307)
Feb 23 04:07:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19810 DF PROTO=TCP SPT=44396 DPT=9100 SEQ=1796257586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01107A430000000001030307)
Feb 23 04:07:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4967 DF PROTO=TCP SPT=40852 DPT=9101 SEQ=2704949024 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011083840000000001030307)
Feb 23 04:08:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64094 DF PROTO=TCP SPT=59142 DPT=9105 SEQ=1514251509 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011098B40000000001030307)
Feb 23 04:08:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64095 DF PROTO=TCP SPT=59142 DPT=9105 SEQ=1514251509 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01109CC30000000001030307)
Feb 23 04:08:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64096 DF PROTO=TCP SPT=59142 DPT=9105 SEQ=1514251509 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0110A4C30000000001030307)
Feb 23 04:08:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64097 DF PROTO=TCP SPT=59142 DPT=9105 SEQ=1514251509 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0110B4830000000001030307)
Feb 23 04:08:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14317 DF PROTO=TCP SPT=33966 DPT=9101 SEQ=313913227 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0110BD200000000001030307)
Feb 23 04:08:12 localhost sshd[119168]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:08:12 localhost systemd-logind[759]: New session 38 of user zuul.
Feb 23 04:08:12 localhost systemd[1]: Started Session 38 of User zuul.
Feb 23 04:08:12 localhost python3.9[119261]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 23 04:08:13 localhost python3.9[119365]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 04:08:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14319 DF PROTO=TCP SPT=33966 DPT=9101 SEQ=313913227 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0110C9430000000001030307)
Feb 23 04:08:14 localhost python3.9[119457]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:08:16 localhost python3.9[119550]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 23 04:08:16 localhost python3.9[119642]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:08:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64098 DF PROTO=TCP SPT=59142 DPT=9105 SEQ=1514251509 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0110D5840000000001030307)
Feb 23 04:08:17 localhost python3.9[119734]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:08:18 localhost python3.9[119807]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1771837697.116773-173-36611719020049/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:08:19 localhost python3.9[119899]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 04:08:20 localhost python3.9[119995]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:08:20 localhost python3.9[120087]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:08:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2833 DF PROTO=TCP SPT=33226 DPT=9100 SEQ=1092118018 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0110E35E0000000001030307)
Feb 23 04:08:21 localhost python3.9[120177]: ansible-ansible.builtin.service_facts Invoked
Feb 23 04:08:21 localhost network[120194]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 23 04:08:21 localhost network[120195]: 'network-scripts' will be removed from distribution in near future.
Feb 23 04:08:21 localhost network[120196]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 23 04:08:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 04:08:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2835 DF PROTO=TCP SPT=33226 DPT=9100 SEQ=1092118018 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0110EF840000000001030307)
Feb 23 04:08:25 localhost python3.9[120394]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:08:26 localhost python3.9[120484]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 04:08:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14321 DF PROTO=TCP SPT=33966 DPT=9101 SEQ=313913227 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0110F9830000000001030307)
Feb 23 04:08:27 localhost python3.9[120580]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream#012set -euxo pipefail#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main#012# This is required for FIPS enabled until trunk.rdoproject.org#012# is not being served from a centos7 host, tracked by#012# https://issues.redhat.com/browse/RHOSZUUL-1517#012dnf -y install crypto-policies#012update-crypto-policies --set FIPS:NO-ENFORCE-EMS#012./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream#012#012# Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible#012# with rhel 9.2 openssh#012dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save#012dnf -y upgrade openstack-selinux#012rm -f /run/virtlogd.pid#012#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:08:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64305 DF PROTO=TCP SPT=38668 DPT=9105 SEQ=821730353 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01110DE30000000001030307)
Feb 23 04:08:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64306 DF PROTO=TCP SPT=38668 DPT=9105 SEQ=821730353 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011112040000000001030307)
Feb 23 04:08:34 localhost sshd[120612]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:08:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64307 DF PROTO=TCP SPT=38668 DPT=9105 SEQ=821730353 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01111A040000000001030307)
Feb 23 04:08:36 localhost systemd[1]: Stopping OpenSSH server daemon...
Feb 23 04:08:36 localhost systemd[1]: sshd.service: Deactivated successfully.
Feb 23 04:08:36 localhost systemd[1]: Stopped OpenSSH server daemon.
Feb 23 04:08:36 localhost systemd[1]: sshd.service: Consumed 5.973s CPU time.
Feb 23 04:08:36 localhost systemd[1]: Stopped target sshd-keygen.target.
Feb 23 04:08:36 localhost systemd[1]: Stopping sshd-keygen.target...
Feb 23 04:08:36 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 23 04:08:36 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 23 04:08:36 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 23 04:08:36 localhost systemd[1]: Reached target sshd-keygen.target.
Feb 23 04:08:36 localhost systemd[1]: Starting OpenSSH server daemon...
Feb 23 04:08:36 localhost sshd[120626]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:08:36 localhost systemd[1]: Started OpenSSH server daemon.
Feb 23 04:08:37 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 23 04:08:37 localhost systemd[1]: Starting man-db-cache-update.service...
Feb 23 04:08:37 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 23 04:08:37 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 23 04:08:37 localhost systemd[1]: Finished man-db-cache-update.service.
Feb 23 04:08:37 localhost systemd[1]: run-r2de9b45f0763413b8b2931f9c7828546.service: Deactivated successfully.
Feb 23 04:08:37 localhost systemd[1]: run-ra9447f4264a5426b9a643c2855973106.service: Deactivated successfully.
Feb 23 04:08:38 localhost systemd[1]: Stopping OpenSSH server daemon...
Feb 23 04:08:38 localhost systemd[1]: sshd.service: Deactivated successfully.
Feb 23 04:08:38 localhost systemd[1]: Stopped OpenSSH server daemon.
Feb 23 04:08:38 localhost systemd[1]: Stopped target sshd-keygen.target.
Feb 23 04:08:38 localhost systemd[1]: Stopping sshd-keygen.target...
Feb 23 04:08:38 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 23 04:08:38 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 23 04:08:38 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 23 04:08:38 localhost systemd[1]: Reached target sshd-keygen.target.
Feb 23 04:08:38 localhost systemd[1]: Starting OpenSSH server daemon...
Feb 23 04:08:38 localhost sshd[121016]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:08:38 localhost systemd[1]: Started OpenSSH server daemon.
Feb 23 04:08:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64308 DF PROTO=TCP SPT=38668 DPT=9105 SEQ=821730353 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011129C40000000001030307)
Feb 23 04:08:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9545 DF PROTO=TCP SPT=49048 DPT=9101 SEQ=3557276017 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011132510000000001030307)
Feb 23 04:08:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13538 DF PROTO=TCP SPT=35376 DPT=9102 SEQ=589577964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01113D830000000001030307)
Feb 23 04:08:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64309 DF PROTO=TCP SPT=38668 DPT=9105 SEQ=821730353 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011149830000000001030307)
Feb 23 04:08:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47719 DF PROTO=TCP SPT=49638 DPT=9100 SEQ=3744243151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0111588F0000000001030307)
Feb 23 04:08:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47721 DF PROTO=TCP SPT=49638 DPT=9100 SEQ=3744243151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011164830000000001030307)
Feb 23 04:08:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9549 DF PROTO=TCP SPT=49048 DPT=9101 SEQ=3557276017 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01116D840000000001030307)
Feb 23 04:09:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 23 04:09:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 5036 writes, 22K keys, 5036 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5036 writes, 634 syncs, 7.94 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 23 04:09:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11227 DF PROTO=TCP SPT=33724 DPT=9105 SEQ=1726832263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011183130000000001030307)
Feb 23 04:09:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11228 DF PROTO=TCP SPT=33724 DPT=9105 SEQ=1726832263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011187030000000001030307)
Feb 23 04:09:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 23 04:09:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 5650 writes, 24K keys, 5650 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5650 writes, 811 syncs, 6.97 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 23 04:09:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11229 DF PROTO=TCP SPT=33724 DPT=9105 SEQ=1726832263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01118F030000000001030307)
Feb 23 04:09:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11230 DF PROTO=TCP SPT=33724 DPT=9105 SEQ=1726832263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01119EC30000000001030307)
Feb 23 04:09:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63210 DF PROTO=TCP SPT=53100 DPT=9101 SEQ=3403476543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0111A7800000000001030307)
Feb 23 04:09:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19182 DF PROTO=TCP SPT=39492 DPT=9882 SEQ=3720131981 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0111B3830000000001030307)
Feb 23 04:09:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11231 DF PROTO=TCP SPT=33724 DPT=9105 SEQ=1726832263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0111BF830000000001030307)
Feb 23 04:09:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15737 DF PROTO=TCP SPT=39618 DPT=9100 SEQ=915702889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0111CDBF0000000001030307)
Feb 23 04:09:21 localhost sshd[121277]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:09:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47724 DF PROTO=TCP SPT=49638 DPT=9100 SEQ=3744243151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0111D5830000000001030307)
Feb 23 04:09:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63214 DF PROTO=TCP SPT=53100 DPT=9101 SEQ=3403476543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0111E3830000000001030307)
Feb 23 04:09:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37048 DF PROTO=TCP SPT=59964 DPT=9105 SEQ=2882412324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0111F8440000000001030307)
Feb 23 04:09:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37049 DF PROTO=TCP SPT=59964 DPT=9105 SEQ=2882412324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0111FC430000000001030307)
Feb 23 04:09:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37050 DF PROTO=TCP SPT=59964 DPT=9105 SEQ=2882412324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011204430000000001030307)
Feb 23 04:09:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37051 DF PROTO=TCP SPT=59964 DPT=9105 SEQ=2882412324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011214030000000001030307)
Feb 23 04:09:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62069 DF PROTO=TCP SPT=56512 DPT=9101 SEQ=848452390 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01121CB00000000001030307)
Feb 23 04:09:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8217 DF PROTO=TCP SPT=51692 DPT=9102 SEQ=2115389531 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011227830000000001030307)
Feb 23 04:09:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37052 DF PROTO=TCP SPT=59964 DPT=9105 SEQ=2882412324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011233830000000001030307)
Feb 23 04:09:50 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=15 res=1
Feb 23 04:09:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1652 DF PROTO=TCP SPT=56166 DPT=9100 SEQ=2901093709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011242EF0000000001030307)
Feb 23 04:09:51 localhost podman[121648]:
Feb 23 04:09:51 localhost podman[121648]: 2026-02-23 09:09:51.361819905 +0000 UTC m=+0.073999141 container create d99eeafdacad0efd9650c7d1c58d4c4f34c53ed13483b4ae7e8bbac192ae270d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_volhard, architecture=x86_64, ceph=True, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container)
Feb 23 04:09:51 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=16 res=1
Feb 23 04:09:51 localhost systemd[1]: Started libpod-conmon-d99eeafdacad0efd9650c7d1c58d4c4f34c53ed13483b4ae7e8bbac192ae270d.scope.
Feb 23 04:09:51 localhost podman[121648]: 2026-02-23 09:09:51.332561818 +0000 UTC m=+0.044741054 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:09:51 localhost systemd[1]: Started libcrun container.
Feb 23 04:09:51 localhost podman[121648]: 2026-02-23 09:09:51.462169134 +0000 UTC m=+0.174348380 container init d99eeafdacad0efd9650c7d1c58d4c4f34c53ed13483b4ae7e8bbac192ae270d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_volhard, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.42.2, release=1770267347, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 23 04:09:51 localhost podman[121648]: 2026-02-23 09:09:51.472001226 +0000 UTC m=+0.184180462 container start d99eeafdacad0efd9650c7d1c58d4c4f34c53ed13483b4ae7e8bbac192ae270d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_volhard, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 23 04:09:51 localhost podman[121648]: 2026-02-23 09:09:51.472271094 +0000 UTC m=+0.184450330 container attach d99eeafdacad0efd9650c7d1c58d4c4f34c53ed13483b4ae7e8bbac192ae270d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_volhard, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.buildah.version=1.42.2, vendor=Red Hat, Inc., version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 23 04:09:51 localhost thirsty_volhard[121706]: 167 167
Feb 23 04:09:51 localhost systemd[1]: libpod-d99eeafdacad0efd9650c7d1c58d4c4f34c53ed13483b4ae7e8bbac192ae270d.scope: Deactivated successfully.
Feb 23 04:09:51 localhost podman[121648]: 2026-02-23 09:09:51.476260606 +0000 UTC m=+0.188439882 container died d99eeafdacad0efd9650c7d1c58d4c4f34c53ed13483b4ae7e8bbac192ae270d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_volhard, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_BRANCH=main, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, com.redhat.component=rhceph-container, architecture=x86_64, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 23 04:09:51 localhost podman[121712]: 2026-02-23 09:09:51.571400296 +0000 UTC m=+0.085899317 container remove d99eeafdacad0efd9650c7d1c58d4c4f34c53ed13483b4ae7e8bbac192ae270d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_volhard, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, ceph=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, RELEASE=main, distribution-scope=public, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph)
Feb 23 04:09:51 localhost systemd[1]: libpod-conmon-d99eeafdacad0efd9650c7d1c58d4c4f34c53ed13483b4ae7e8bbac192ae270d.scope: Deactivated successfully.
Feb 23 04:09:51 localhost kernel: SELinux: Converting 2741 SID table entries...
Feb 23 04:09:51 localhost podman[121780]:
Feb 23 04:09:51 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 23 04:09:51 localhost kernel: SELinux: policy capability open_perms=1
Feb 23 04:09:51 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 23 04:09:51 localhost kernel: SELinux: policy capability always_check_network=0
Feb 23 04:09:51 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 23 04:09:51 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 23 04:09:51 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Feb 23 04:09:51 localhost podman[121780]: 2026-02-23 09:09:51.768451471 +0000 UTC m=+0.059973091 container create bea7fdf75d5f8da2c6f2bdcc1398dae7998bf917ddd2877b711a30b4115f18b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_blackwell, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main)
Feb 23 04:09:51 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=17 res=1
Feb 23 04:09:51 localhost systemd[1]: Started libpod-conmon-bea7fdf75d5f8da2c6f2bdcc1398dae7998bf917ddd2877b711a30b4115f18b9.scope.
Feb 23 04:09:51 localhost systemd[1]: Started libcrun container.
Feb 23 04:09:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df9181b955ef76b798eac4d2e536adbd57dd029ead7fadab5be0de3e8ddb4125/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 23 04:09:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df9181b955ef76b798eac4d2e536adbd57dd029ead7fadab5be0de3e8ddb4125/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 23 04:09:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df9181b955ef76b798eac4d2e536adbd57dd029ead7fadab5be0de3e8ddb4125/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 23 04:09:51 localhost podman[121780]: 2026-02-23 09:09:51.828095492 +0000 UTC m=+0.119617112 container init bea7fdf75d5f8da2c6f2bdcc1398dae7998bf917ddd2877b711a30b4115f18b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_blackwell, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, name=rhceph, vcs-type=git, version=7, description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 23 04:09:51 localhost podman[121780]: 2026-02-23 09:09:51.836998735 +0000 UTC m=+0.128520345 container start bea7fdf75d5f8da2c6f2bdcc1398dae7998bf917ddd2877b711a30b4115f18b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_blackwell, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, ceph=True, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, build-date=2026-02-09T10:25:24Z, vcs-type=git, version=7, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, RELEASE=main)
Feb 23 04:09:51 localhost podman[121780]: 2026-02-23 09:09:51.837260153 +0000 UTC m=+0.128781813 container attach bea7fdf75d5f8da2c6f2bdcc1398dae7998bf917ddd2877b711a30b4115f18b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_blackwell, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, distribution-scope=public, maintainer=Guillaume Abrioux , version=7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7)
Feb 23 04:09:51 localhost podman[121780]: 2026-02-23 09:09:51.752443811 +0000 UTC m=+0.043965421 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:09:52 localhost systemd[1]: tmp-crun.U4hdGi.mount: Deactivated successfully.
Feb 23 04:09:52 localhost systemd[1]: var-lib-containers-storage-overlay-5e6920265b4fdbd612d02572f38ea36fa448ca7a1864cdd315fbb9316fffa138-merged.mount: Deactivated successfully.
Feb 23 04:09:52 localhost funny_blackwell[121797]: [
Feb 23 04:09:52 localhost funny_blackwell[121797]: {
Feb 23 04:09:52 localhost funny_blackwell[121797]: "available": false,
Feb 23 04:09:52 localhost funny_blackwell[121797]: "ceph_device": false,
Feb 23 04:09:52 localhost funny_blackwell[121797]: "device_id": "QEMU_DVD-ROM_QM00001",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "lsm_data": {},
Feb 23 04:09:52 localhost funny_blackwell[121797]: "lvs": [],
Feb 23 04:09:52 localhost funny_blackwell[121797]: "path": "/dev/sr0",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "rejected_reasons": [
Feb 23 04:09:52 localhost funny_blackwell[121797]: "Insufficient space (<5GB)",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "Has a FileSystem"
Feb 23 04:09:52 localhost funny_blackwell[121797]: ],
Feb 23 04:09:52 localhost funny_blackwell[121797]: "sys_api": {
Feb 23 04:09:52 localhost funny_blackwell[121797]: "actuators": null,
Feb 23 04:09:52 localhost funny_blackwell[121797]: "device_nodes": "sr0",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "human_readable_size": "482.00 KB",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "id_bus": "ata",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "model": "QEMU DVD-ROM",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "nr_requests": "2",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "partitions": {},
Feb 23 04:09:52 localhost funny_blackwell[121797]: "path": "/dev/sr0",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "removable": "1",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "rev": "2.5+",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "ro": "0",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "rotational": "1",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "sas_address": "",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "sas_device_handle": "",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "scheduler_mode": "mq-deadline",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "sectors": 0,
Feb 23 04:09:52 localhost funny_blackwell[121797]: "sectorsize": "2048",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "size": 493568.0,
Feb 23 04:09:52 localhost funny_blackwell[121797]: "support_discard": "0",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "type": "disk",
Feb 23 04:09:52 localhost funny_blackwell[121797]: "vendor": "QEMU"
Feb 23 04:09:52 localhost funny_blackwell[121797]: }
Feb 23 04:09:52 localhost funny_blackwell[121797]: }
Feb 23 04:09:52 localhost funny_blackwell[121797]: ]
Feb 23 04:09:52 localhost systemd[1]: libpod-bea7fdf75d5f8da2c6f2bdcc1398dae7998bf917ddd2877b711a30b4115f18b9.scope: Deactivated successfully.
Feb 23 04:09:52 localhost podman[121780]: 2026-02-23 09:09:52.698030953 +0000 UTC m=+0.989552573 container died bea7fdf75d5f8da2c6f2bdcc1398dae7998bf917ddd2877b711a30b4115f18b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_blackwell, version=7, GIT_BRANCH=main, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public)
Feb 23 04:09:52 localhost systemd[1]: tmp-crun.nYnX5d.mount: Deactivated successfully.
Feb 23 04:09:52 localhost systemd[1]: var-lib-containers-storage-overlay-df9181b955ef76b798eac4d2e536adbd57dd029ead7fadab5be0de3e8ddb4125-merged.mount: Deactivated successfully.
Feb 23 04:09:52 localhost podman[123064]: 2026-02-23 09:09:52.78591236 +0000 UTC m=+0.080064348 container remove bea7fdf75d5f8da2c6f2bdcc1398dae7998bf917ddd2877b711a30b4115f18b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_blackwell, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, name=rhceph, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z)
Feb 23 04:09:52 localhost systemd[1]: libpod-conmon-bea7fdf75d5f8da2c6f2bdcc1398dae7998bf917ddd2877b711a30b4115f18b9.scope: Deactivated successfully.
Feb 23 04:09:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1654 DF PROTO=TCP SPT=56166 DPT=9100 SEQ=2901093709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01124F030000000001030307)
Feb 23 04:09:54 localhost python3.9[123186]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:09:55 localhost python3.9[123278]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:09:56 localhost python3.9[123351]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771837795.0134132-422-37112133169597/.source.fact _original_basename=.5bsub3_a follow=False checksum=d686dccd4d8cd0883f3e3bc0a6f664c73290ba68 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:09:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62073 DF PROTO=TCP SPT=56512 DPT=9101 SEQ=848452390 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011259840000000001030307)
Feb 23 04:09:57 localhost python3.9[123441]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 04:09:58 localhost python3.9[123539]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 23 04:09:59 localhost python3.9[123593]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 23 04:10:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29595 DF PROTO=TCP SPT=51736 DPT=9105 SEQ=366270889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01126D730000000001030307)
Feb 23 04:10:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29596 DF PROTO=TCP SPT=51736 DPT=9105 SEQ=366270889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011271830000000001030307)
Feb 23 04:10:03 localhost systemd[1]: Reloading.
Feb 23 04:10:03 localhost systemd-rc-local-generator[123625]: /etc/rc.d/rc.local is not marked executable, skipping.
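The recurring kernel `DROPPING:` entries are netfilter (iptables/nftables) log-prefix output: a fixed sequence of `KEY=value` fields plus bare flag tokens such as `DF` and `SYN`. A minimal parser sketch for pulling fields out of these entries (the sample line is abbreviated; the field names are taken from the log above):

```python
# Parse a netfilter "DROPPING:" kernel log entry into KEY=value fields;
# bare tokens (DF, SYN, ...) are collected separately as flags.
def parse_drop(entry: str) -> dict:
    payload = entry.split("DROPPING:", 1)[1]
    fields = {"FLAGS": []}
    for token in payload.split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value          # empty values (e.g. OUT=) stay ""
        else:
            fields["FLAGS"].append(token)
    return fields

sample = ("Feb 23 04:09:54 localhost kernel: DROPPING: IN=br-ex OUT= "
          "SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TTL=62 ID=1654 DF "
          "PROTO=TCP SPT=56166 DPT=9100 SYN URGP=0")
parsed = parse_drop(sample)
print(parsed["DST"], parsed["DPT"], parsed["FLAGS"])  # 192.168.122.107 9100 ['DF', 'SYN']
```

The destination ports that keep appearing (9100, 9101, 9102, 9105, 9882) fall in the range commonly used by Prometheus exporters, which suggests scrape traffic being rejected by the host firewall; the log alone does not confirm what is listening.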
Feb 23 04:10:03 localhost systemd-sysv-generator[123632]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 04:10:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 04:10:03 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Feb 23 04:10:04 localhost python3.9[123733]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:10:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29597 DF PROTO=TCP SPT=51736 DPT=9105 SEQ=366270889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011279830000000001030307)
Feb 23 04:10:06 localhost python3.9[123972]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb 23 04:10:07 localhost python3.9[124064]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb 23 04:10:08 localhost sshd[124080]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:10:08 localhost python3.9[124159]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:10:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29598 DF PROTO=TCP SPT=51736 DPT=9105 SEQ=366270889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011289440000000001030307)
Feb 23 04:10:09 localhost python3.9[124251]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb 23 04:10:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59285 DF PROTO=TCP SPT=51026 DPT=9101 SEQ=2848518580 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011291E00000000001030307)
Feb 23 04:10:11 localhost python3.9[124343]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:10:12 localhost python3.9[124435]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:10:12 localhost python3.9[124508]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771837811.5645108-746-266763324582282/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=704aa8bdbea515a6da96c2b63bce412faf6bceda backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:10:13 localhost python3.9[124600]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 23 04:10:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45183 DF PROTO=TCP SPT=56286 DPT=9882 SEQ=4103446064 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01129D830000000001030307)
Feb 23 04:10:15 localhost python3.9[124694]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb 23 04:10:15 localhost python3.9[124787]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb 23 04:10:16 localhost python3.9[124880]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 23 04:10:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29599 DF PROTO=TCP SPT=51736 DPT=9105 SEQ=366270889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0112A9840000000001030307)
Feb 23 04:10:17 localhost python3.9[124978]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb 23 04:10:18 localhost sshd[125072]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:10:18 localhost python3.9[125071]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 23 04:10:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61855 DF PROTO=TCP SPT=36618 DPT=9100 SEQ=3203406745 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0112B81F0000000001030307)
Feb 23 04:10:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1657 DF PROTO=TCP SPT=56166 DPT=9100 SEQ=2901093709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0112BF840000000001030307)
Feb 23 04:10:24 localhost python3.9[125167]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:10:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59289 DF PROTO=TCP SPT=51026 DPT=9101 SEQ=2848518580 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0112CD830000000001030307)
Feb 23 04:10:28 localhost python3.9[125260]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:10:28 localhost python3.9[125333]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771837827.3998797-1019-61563612437642/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:10:29 localhost python3.9[125425]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 23 04:10:29 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 23 04:10:29 localhost systemd[1]: Stopped Load Kernel Modules.
Feb 23 04:10:29 localhost systemd[1]: Stopping Load Kernel Modules...
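The `ansible.posix.mount` task logged earlier in this run (`src=/swap`, `fstype=swap`, `name=none`, `opts=sw`, `dump=0`, `passno=0`, `state=present`) only persists the mount in the filesystem table; assuming module defaults, the resulting /etc/fstab entry would look like this (a sketch, not captured from the host):

```
/swap none swap sw 0 0
```

With `state=present` the module writes the entry without activating it, which is why explicit `mkswap`/`swapon` commands appear later in the log.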
Feb 23 04:10:29 localhost systemd[1]: Starting Load Kernel Modules...
Feb 23 04:10:29 localhost systemd-modules-load[125429]: Module 'msr' is built in
Feb 23 04:10:29 localhost systemd[1]: Finished Load Kernel Modules.
Feb 23 04:10:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45890 DF PROTO=TCP SPT=59420 DPT=9105 SEQ=1356269083 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0112E2A40000000001030307)
Feb 23 04:10:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45891 DF PROTO=TCP SPT=59420 DPT=9105 SEQ=1356269083 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0112E6C30000000001030307)
Feb 23 04:10:33 localhost python3.9[125522]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:10:34 localhost python3.9[125595]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771837833.350209-1088-150899028670173/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:10:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45892 DF PROTO=TCP SPT=59420 DPT=9105 SEQ=1356269083 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0112EEC30000000001030307)
Feb 23 04:10:35 localhost python3.9[125687]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 23 04:10:38 localhost sshd[125704]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:10:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45893 DF PROTO=TCP SPT=59420 DPT=9105 SEQ=1356269083 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0112FE840000000001030307)
Feb 23 04:10:39 localhost python3.9[125781]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 23 04:10:40 localhost python3.9[125873]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb 23 04:10:40 localhost python3.9[125963]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 23 04:10:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28894 DF PROTO=TCP SPT=60464 DPT=9101 SEQ=3313298060 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011307100000000001030307)
Feb 23 04:10:41 localhost python3.9[126055]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 04:10:42 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 23 04:10:42 localhost systemd[1]: tuned.service: Deactivated successfully.
Feb 23 04:10:42 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 23 04:10:42 localhost systemd[1]: tuned.service: Consumed 2.297s CPU time, no IO.
Feb 23 04:10:42 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 23 04:10:43 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Feb 23 04:10:44 localhost sshd[126158]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:10:44 localhost python3.9[126157]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 23 04:10:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28896 DF PROTO=TCP SPT=60464 DPT=9101 SEQ=3313298060 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011313040000000001030307)
Feb 23 04:10:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45894 DF PROTO=TCP SPT=59420 DPT=9105 SEQ=1356269083 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01131F830000000001030307)
Feb 23 04:10:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10100 DF PROTO=TCP SPT=60464 DPT=9100 SEQ=3998446541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01132D4F0000000001030307)
Feb 23 04:10:51 localhost python3.9[126251]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 04:10:51 localhost systemd[1]: Reloading.
Feb 23 04:10:51 localhost systemd-sysv-generator[126278]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 04:10:51 localhost systemd-rc-local-generator[126274]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 04:10:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 04:10:52 localhost python3.9[126381]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 23 04:10:52 localhost systemd[1]: Reloading.
Feb 23 04:10:52 localhost systemd-rc-local-generator[126410]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 04:10:52 localhost systemd-sysv-generator[126414]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 04:10:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 04:10:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10102 DF PROTO=TCP SPT=60464 DPT=9100 SEQ=3998446541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011339430000000001030307)
Feb 23 04:10:55 localhost python3.9[126572]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:10:56 localhost sshd[126666]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:10:56 localhost python3.9[126665]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:10:56 localhost kernel: Adding 1048572k swap on /swap. Priority:-2 extents:1 across:1048572k FS
Feb 23 04:10:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28898 DF PROTO=TCP SPT=60464 DPT=9101 SEQ=3313298060 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011343830000000001030307)
Feb 23 04:10:56 localhost python3.9[126760]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:10:58 localhost python3.9[126874]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:10:59 localhost python3.9[126967]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 23 04:10:59 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 23 04:10:59 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 23 04:10:59 localhost systemd[1]: Stopping Apply Kernel Variables...
Feb 23 04:10:59 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 23 04:10:59 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 23 04:10:59 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 23 04:11:00 localhost systemd[1]: session-38.scope: Deactivated successfully.
Feb 23 04:11:00 localhost systemd[1]: session-38.scope: Consumed 1min 59.030s CPU time.
Feb 23 04:11:00 localhost systemd-logind[759]: Session 38 logged out. Waiting for processes to exit.
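The swap numbers in this stretch of the log are internally consistent: `dd` wrote 1024 blocks of 1 MiB (1048576 KiB) to /swap, while the kernel registers 1048572 KiB, i.e. the file size minus one page that `mkswap` reserves for the swap signature header. A quick check of that arithmetic (the 4 KiB page size is an assumption based on x86-64 defaults, not stated in the log):

```python
# Swap file provisioned earlier by: dd if=/dev/zero of=/swap count=1024 bs=1M
file_kib = 1024 * 1024        # 1048576 KiB on disk
header_kib = 4                # first page reserved by mkswap for its header
usable_kib = file_kib - header_kib
print(usable_kib)             # 1048572, matching "Adding 1048572k swap on /swap"
```

Priority -2 simply means no explicit priority was requested on `swapon`, so the kernel assigned the next available negative value.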
Feb 23 04:11:00 localhost systemd-logind[759]: Removed session 38.
Feb 23 04:11:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48186 DF PROTO=TCP SPT=53738 DPT=9105 SEQ=2407725235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011357D30000000001030307)
Feb 23 04:11:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48187 DF PROTO=TCP SPT=53738 DPT=9105 SEQ=2407725235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01135BC30000000001030307)
Feb 23 04:11:04 localhost sshd[126987]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:11:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48188 DF PROTO=TCP SPT=53738 DPT=9105 SEQ=2407725235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011363C30000000001030307)
Feb 23 04:11:05 localhost systemd-logind[759]: New session 39 of user zuul.
Feb 23 04:11:05 localhost systemd[1]: Started Session 39 of User zuul.
Feb 23 04:11:06 localhost python3.9[127080]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 04:11:07 localhost python3.9[127174]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 04:11:08 localhost python3.9[127270]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:11:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48189 DF PROTO=TCP SPT=53738 DPT=9105 SEQ=2407725235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011373830000000001030307)
Feb 23 04:11:09 localhost python3.9[127361]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 04:11:10 localhost python3.9[127457]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 23 04:11:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15042 DF PROTO=TCP SPT=43466 DPT=9101 SEQ=4280001095 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01137C3F0000000001030307)
Feb 23 04:11:11 localhost python3.9[127511]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 23 04:11:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48921 DF PROTO=TCP SPT=48094 DPT=9102 SEQ=507412914 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011387830000000001030307)
Feb 23 04:11:15 localhost python3.9[127605]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 23 04:11:16 localhost python3.9[127752]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:11:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48190 DF PROTO=TCP SPT=53738 DPT=9105 SEQ=2407725235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011393830000000001030307)
Feb 23 04:11:17 localhost python3.9[127844]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:11:18 localhost python3.9[127949]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:11:18 localhost python3.9[127997]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:11:19 localhost python3.9[128089]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:11:20 localhost python3.9[128162]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771837879.1270628-319-43914338046515/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:11:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9571 DF PROTO=TCP SPT=36802 DPT=9100 SEQ=1197793143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0113A27E0000000001030307)
Feb 23 04:11:21 localhost python3.9[128254]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:11:21 localhost python3.9[128346]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:11:22 localhost python3.9[128438]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:11:22 localhost python3.9[128530]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:11:23 localhost python3.9[128620]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 04:11:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9573 DF PROTO=TCP SPT=36802 DPT=9100 SEQ=1197793143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0113AE840000000001030307)
Feb 23 04:11:24 localhost python3.9[128714]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 23 04:11:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15046 DF PROTO=TCP SPT=43466 DPT=9101 SEQ=4280001095 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0113B7830000000001030307)
Feb 23 04:11:28 localhost python3.9[128808]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 23 04:11:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12776 DF PROTO=TCP SPT=49518 DPT=9105 SEQ=2553912059 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0113CD030000000001030307)
Feb 23 04:11:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12777 DF PROTO=TCP SPT=49518 DPT=9105 SEQ=2553912059 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0113D1030000000001030307)
Feb 23 04:11:33 localhost python3.9[128902]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 23 04:11:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12778 DF PROTO=TCP SPT=49518 DPT=9105 SEQ=2553912059 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0113D9030000000001030307)
Feb 23 04:11:37 localhost python3.9[129002]: ansible-ansible.legacy.dnf Invoked with
download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 23 04:11:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12779 DF PROTO=TCP SPT=49518 DPT=9105 SEQ=2553912059 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0113E8C30000000001030307) Feb 23 04:11:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3275 DF PROTO=TCP SPT=34140 DPT=9101 SEQ=1732874894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0113F1700000000001030307) Feb 23 04:11:41 localhost python3.9[129096]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 23 04:11:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 
PREC=0x00 TTL=62 ID=32065 DF PROTO=TCP SPT=41012 DPT=9882 SEQ=127702731 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0113FD830000000001030307) Feb 23 04:11:45 localhost sshd[129113]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:11:46 localhost python3.9[129192]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 23 04:11:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12780 DF PROTO=TCP SPT=49518 DPT=9105 SEQ=2553912059 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011409830000000001030307) Feb 23 04:11:50 localhost python3.9[129286]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 23 04:11:51 localhost kernel: DROPPING: 
IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64227 DF PROTO=TCP SPT=33218 DPT=9100 SEQ=1199371789 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011417AF0000000001030307) Feb 23 04:11:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9576 DF PROTO=TCP SPT=36802 DPT=9100 SEQ=1197793143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01141F830000000001030307) Feb 23 04:11:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3279 DF PROTO=TCP SPT=34140 DPT=9101 SEQ=1732874894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01142D830000000001030307) Feb 23 04:12:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16297 DF PROTO=TCP SPT=57346 DPT=9105 SEQ=239231415 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011442330000000001030307) Feb 23 04:12:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16298 DF PROTO=TCP SPT=57346 DPT=9105 SEQ=239231415 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011446430000000001030307) Feb 23 04:12:03 localhost python3.9[129580]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True 
security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 23 04:12:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16299 DF PROTO=TCP SPT=57346 DPT=9105 SEQ=239231415 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01144E440000000001030307) Feb 23 04:12:07 localhost python3.9[129675]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['device-mapper-multipath'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 23 04:12:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16300 DF PROTO=TCP SPT=57346 DPT=9105 SEQ=239231415 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01145E030000000001030307) Feb 23 04:12:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62320 DF PROTO=TCP SPT=60252 DPT=9101 SEQ=347759735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0114669F0000000001030307) Feb 23 04:12:11 localhost python3.9[129773]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul 
path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:12:12 localhost python3.9[129878]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:12:12 localhost python3.9[129951]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1771837931.948644-772-112350581373859/.source.json _original_basename=.4y5mjdn3 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:12:13 localhost python3.9[130043]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 23 04:12:14 localhost 
kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57519 DF PROTO=TCP SPT=44624 DPT=9102 SEQ=2834028843 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011471830000000001030307) Feb 23 04:12:14 localhost systemd-journald[48305]: Field hash table of /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal has a fill level at 77.5 (258 of 333 items), suggesting rotation. Feb 23 04:12:14 localhost systemd-journald[48305]: /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 23 04:12:14 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 23 04:12:14 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 23 04:12:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16301 DF PROTO=TCP SPT=57346 DPT=9105 SEQ=239231415 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01147D830000000001030307) Feb 23 04:12:19 localhost podman[130055]: 2026-02-23 09:12:14.096556176 +0000 UTC m=+0.042924629 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Feb 23 04:12:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42137 DF PROTO=TCP SPT=51730 DPT=9100 SEQ=3238146586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01148CDE0000000001030307) Feb 23 04:12:21 localhost python3.9[130254]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json 
name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 23 04:12:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42139 DF PROTO=TCP SPT=51730 DPT=9100 SEQ=3238146586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011499040000000001030307) Feb 23 04:12:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62324 DF PROTO=TCP SPT=60252 DPT=9101 SEQ=347759735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0114A3830000000001030307) Feb 23 04:12:28 localhost podman[130268]: 2026-02-23 09:12:21.519561612 +0000 UTC m=+0.031091618 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 23 04:12:30 localhost python3.9[130469]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': 
None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 23 04:12:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10785 DF PROTO=TCP SPT=33166 DPT=9105 SEQ=284719428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0114B7630000000001030307) Feb 23 04:12:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10786 DF PROTO=TCP SPT=33166 DPT=9105 SEQ=284719428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0114BB840000000001030307) Feb 23 04:12:35 localhost sshd[130508]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:12:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10787 DF PROTO=TCP SPT=33166 DPT=9105 SEQ=284719428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0114C3830000000001030307) Feb 23 04:12:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10788 DF PROTO=TCP SPT=33166 DPT=9105 SEQ=284719428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0114D3430000000001030307) Feb 23 04:12:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43247 DF PROTO=TCP SPT=56308 DPT=9101 SEQ=2852867487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0114DBCF0000000001030307) Feb 23 04:12:43 localhost podman[130482]: 2026-02-23 09:12:30.771212885 +0000 UTC m=+0.049013160 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Feb 23 04:12:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31597 DF PROTO=TCP SPT=42508 DPT=9882 SEQ=2860396265 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0114E7840000000001030307) Feb 23 04:12:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10789 DF PROTO=TCP SPT=33166 DPT=9105 SEQ=284719428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0114F3830000000001030307) Feb 23 04:12:48 localhost python3.9[131214]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 23 04:12:49 localhost podman[131226]: 2026-02-23 09:12:48.35454558 +0000 UTC m=+0.032703557 image pull 
quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Feb 23 04:12:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32223 DF PROTO=TCP SPT=59940 DPT=9100 SEQ=1066849232 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0115020F0000000001030307) Feb 23 04:12:51 localhost python3.9[131389]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 23 04:12:52 localhost podman[131403]: 2026-02-23 09:12:51.249760489 +0000 UTC m=+0.030469369 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:12:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42142 DF PROTO=TCP SPT=51730 DPT=9100 SEQ=3238146586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011509830000000001030307) Feb 23 04:12:53 localhost python3.9[131568]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json 
name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 23 04:12:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43251 DF PROTO=TCP SPT=56308 DPT=9101 SEQ=2852867487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011517840000000001030307) Feb 23 04:12:57 localhost podman[131582]: 2026-02-23 09:12:53.564915993 +0000 UTC m=+0.047114490 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Feb 23 04:12:58 localhost python3.9[131759]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER 
ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 23 04:12:59 localhost podman[131772]: 2026-02-23 09:12:58.306310572 +0000 UTC m=+0.044584063 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Feb 23 04:13:00 localhost systemd[1]: session-39.scope: Deactivated successfully. Feb 23 04:13:00 localhost systemd[1]: session-39.scope: Consumed 2min 803ms CPU time. Feb 23 04:13:00 localhost systemd-logind[759]: Session 39 logged out. Waiting for processes to exit. Feb 23 04:13:00 localhost systemd-logind[759]: Removed session 39. Feb 23 04:13:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5085 DF PROTO=TCP SPT=52976 DPT=9105 SEQ=1036209406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01152C930000000001030307) Feb 23 04:13:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5086 DF PROTO=TCP SPT=52976 DPT=9105 SEQ=1036209406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011530830000000001030307) Feb 23 04:13:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5087 DF PROTO=TCP SPT=52976 DPT=9105 SEQ=1036209406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011538830000000001030307) Feb 23 04:13:06 localhost sshd[131958]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:13:06 localhost systemd-logind[759]: New session 40 of user zuul. Feb 23 04:13:06 localhost systemd[1]: Started Session 40 of User zuul. 
Feb 23 04:13:07 localhost python3.9[132051]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 04:13:08 localhost python3.9[132147]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb 23 04:13:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5088 DF PROTO=TCP SPT=52976 DPT=9105 SEQ=1036209406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011548440000000001030307)
Feb 23 04:13:10 localhost python3.9[132241]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 23 04:13:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24231 DF PROTO=TCP SPT=36196 DPT=9101 SEQ=3806862455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011551000000000001030307)
Feb 23 04:13:11 localhost python3.9[132295]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 23 04:13:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12632 DF PROTO=TCP SPT=46726 DPT=9102 SEQ=3674554648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01155B830000000001030307)
Feb 23 04:13:17 localhost python3.9[132389]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 23 04:13:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5089 DF PROTO=TCP SPT=52976 DPT=9105 SEQ=1036209406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011569840000000001030307)
Feb 23 04:13:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28085 DF PROTO=TCP SPT=58344 DPT=9100 SEQ=453822368 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0115773E0000000001030307)
Feb 23 04:13:22 localhost python3.9[132483]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 23 04:13:22 localhost sshd[132501]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:13:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28087 DF PROTO=TCP SPT=58344 DPT=9100 SEQ=453822368 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011583440000000001030307)
Feb 23 04:13:24 localhost python3.9[132578]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 04:13:25 localhost python3.9[132670]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb 23 04:13:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24235 DF PROTO=TCP SPT=36196 DPT=9101 SEQ=3806862455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01158D830000000001030307)
Feb 23 04:13:27 localhost kernel: SELinux: Converting 2743 SID table entries...
Feb 23 04:13:27 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 23 04:13:27 localhost kernel: SELinux: policy capability open_perms=1
Feb 23 04:13:27 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 23 04:13:27 localhost kernel: SELinux: policy capability always_check_network=0
Feb 23 04:13:27 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 23 04:13:27 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 23 04:13:27 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Feb 23 04:13:28 localhost python3.9[132767]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 04:13:29 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=18 res=1
Feb 23 04:13:29 localhost python3.9[132865]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 23 04:13:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40953 DF PROTO=TCP SPT=51076 DPT=9105 SEQ=3398510107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0115A1C30000000001030307)
Feb 23 04:13:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40954 DF PROTO=TCP SPT=51076 DPT=9105 SEQ=3398510107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0115A5C30000000001030307)
Feb 23 04:13:33 localhost python3.9[132959]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:13:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40955 DF PROTO=TCP SPT=51076 DPT=9105 SEQ=3398510107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0115ADC40000000001030307)
Feb 23 04:13:35 localhost python3.9[133204]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb 23 04:13:36 localhost python3.9[133294]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 23 04:13:36 localhost python3.9[133388]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 23 04:13:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40956 DF PROTO=TCP SPT=51076 DPT=9105 SEQ=3398510107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0115BD830000000001030307)
Feb 23 04:13:41 localhost python3.9[133482]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 23 04:13:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59155 DF PROTO=TCP SPT=49726 DPT=9101 SEQ=3712594871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0115C62F0000000001030307)
Feb 23 04:13:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55504 DF PROTO=TCP SPT=60076 DPT=9102 SEQ=411398656 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0115D1830000000001030307)
Feb 23 04:13:45 localhost python3.9[133576]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 23 04:13:45 localhost systemd[1]: Reloading.
Feb 23 04:13:45 localhost systemd-sysv-generator[133608]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 04:13:45 localhost systemd-rc-local-generator[133604]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 04:13:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 04:13:45 localhost sshd[133631]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:13:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40957 DF PROTO=TCP SPT=51076 DPT=9105 SEQ=3398510107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0115DD840000000001030307) Feb 23 04:13:47 localhost python3.9[133710]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:13:48 localhost python3.9[133802]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:13:49 localhost python3.9[133896]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:13:49 localhost python3.9[133988]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False 
unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:13:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28721 DF PROTO=TCP SPT=45208 DPT=9100 SEQ=1652285120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0115EC6F0000000001030307) Feb 23 04:13:51 localhost python3.9[134080]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:13:51 localhost python3.9[134153]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838030.067657-565-25346733360665/.source _original_basename=.hvsdqof9 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:13:52 localhost python3.9[134245]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:13:53 localhost python3.9[134337]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={} Feb 23 04:13:53 localhost auditd[725]: Audit daemon rotating log files Feb 23 04:13:53 localhost python3.9[134429]: ansible-ansible.builtin.file Invoked with 
path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:13:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28723 DF PROTO=TCP SPT=45208 DPT=9100 SEQ=1652285120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0115F8830000000001030307) Feb 23 04:13:54 localhost python3.9[134521]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:13:55 localhost python3.9[134594]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838034.1098151-691-97671867804294/.source.yaml _original_basename=.bybfbh03 follow=False checksum=06d744ebe702728c19f6d1a8f97158d086012058 force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:13:55 localhost python3.9[134686]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml Feb 23 04:13:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59159 DF PROTO=TCP SPT=49726 DPT=9101 SEQ=3712594871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011601840000000001030307) Feb 23 
04:13:57 localhost ansible-async_wrapper.py[134791]: Invoked with j650797580268 300 /home/zuul/.ansible/tmp/ansible-tmp-1771838036.3538342-763-53428839714968/AnsiballZ_edpm_os_net_config.py _ Feb 23 04:13:57 localhost ansible-async_wrapper.py[134794]: Starting module and watcher Feb 23 04:13:57 localhost ansible-async_wrapper.py[134794]: Start watching 134795 (300) Feb 23 04:13:57 localhost ansible-async_wrapper.py[134795]: Start module (134795) Feb 23 04:13:57 localhost ansible-async_wrapper.py[134791]: Return async_wrapper task started. Feb 23 04:13:57 localhost python3.9[134796]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True remove_config=False safe_defaults=False use_nmstate=False purge_provider= Feb 23 04:13:58 localhost ansible-async_wrapper.py[134795]: Module complete (134795) Feb 23 04:14:00 localhost python3.9[134929]: ansible-ansible.legacy.async_status Invoked with jid=j650797580268.134791 mode=status _async_dir=/root/.ansible_async Feb 23 04:14:01 localhost python3.9[135023]: ansible-ansible.legacy.async_status Invoked with jid=j650797580268.134791 mode=cleanup _async_dir=/root/.ansible_async Feb 23 04:14:01 localhost systemd[1]: tmp-crun.WeLHFh.mount: Deactivated successfully. 
Feb 23 04:14:01 localhost podman[135073]: 2026-02-23 09:14:01.469020149 +0000 UTC m=+0.090680943 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, RELEASE=main, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=) Feb 23 04:14:01 localhost podman[135073]: 2026-02-23 09:14:01.540122687 +0000 UTC m=+0.161783491 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , RELEASE=main, release=1770267347, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, ceph=True, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, architecture=x86_64, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:14:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13481 DF PROTO=TCP SPT=50076 DPT=9105 SEQ=363976217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011616F30000000001030307) Feb 23 04:14:02 localhost ansible-async_wrapper.py[134794]: Done in kid B. 
Feb 23 04:14:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13482 DF PROTO=TCP SPT=50076 DPT=9105 SEQ=363976217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01161B030000000001030307) Feb 23 04:14:03 localhost python3.9[135280]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:14:04 localhost python3.9[135368]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838043.4042337-829-26964267627609/.source.returncode _original_basename=.s7jlo30k follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:14:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13483 DF PROTO=TCP SPT=50076 DPT=9105 SEQ=363976217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011623030000000001030307) Feb 23 04:14:05 localhost python3.9[135460]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:14:05 localhost python3.9[135533]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 
src=/home/zuul/.ansible/tmp/ansible-tmp-1771838044.6504567-877-232056147199090/.source.cfg _original_basename=.qpswhm71 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:14:06 localhost python3.9[135625]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 04:14:06 localhost systemd[1]: Reloading Network Manager... Feb 23 04:14:06 localhost NetworkManager[5987]: [1771838046.4493] audit: op="reload" arg="0" pid=135629 uid=0 result="success" Feb 23 04:14:06 localhost NetworkManager[5987]: [1771838046.4504] config: signal: SIGHUP (no changes from disk) Feb 23 04:14:06 localhost systemd[1]: Reloaded Network Manager. Feb 23 04:14:07 localhost systemd[1]: session-40.scope: Deactivated successfully. Feb 23 04:14:07 localhost systemd[1]: session-40.scope: Consumed 35.907s CPU time. Feb 23 04:14:07 localhost systemd-logind[759]: Session 40 logged out. Waiting for processes to exit. Feb 23 04:14:07 localhost systemd-logind[759]: Removed session 40. 
Feb 23 04:14:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13484 DF PROTO=TCP SPT=50076 DPT=9105 SEQ=363976217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011632C30000000001030307) Feb 23 04:14:10 localhost sshd[135644]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:14:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32355 DF PROTO=TCP SPT=50960 DPT=9101 SEQ=3776133252 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01163B600000000001030307) Feb 23 04:14:11 localhost sshd[135646]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:14:12 localhost systemd-logind[759]: New session 41 of user zuul. Feb 23 04:14:12 localhost systemd[1]: Started Session 41 of User zuul. Feb 23 04:14:13 localhost python3.9[135739]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:14:14 localhost python3.9[135833]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 23 04:14:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32357 DF PROTO=TCP SPT=50960 DPT=9101 SEQ=3776133252 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011647830000000001030307) Feb 23 04:14:15 localhost python3.9[135978]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None 
stdin=None Feb 23 04:14:16 localhost systemd[1]: session-41.scope: Deactivated successfully. Feb 23 04:14:16 localhost systemd[1]: session-41.scope: Consumed 2.156s CPU time. Feb 23 04:14:16 localhost systemd-logind[759]: Session 41 logged out. Waiting for processes to exit. Feb 23 04:14:16 localhost systemd-logind[759]: Removed session 41. Feb 23 04:14:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13485 DF PROTO=TCP SPT=50076 DPT=9105 SEQ=363976217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011653830000000001030307) Feb 23 04:14:21 localhost sshd[135994]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:14:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31590 DF PROTO=TCP SPT=49564 DPT=9100 SEQ=2201631956 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0116619F0000000001030307) Feb 23 04:14:21 localhost systemd-logind[759]: New session 42 of user zuul. Feb 23 04:14:21 localhost systemd[1]: Started Session 42 of User zuul. 
Feb 23 04:14:22 localhost python3.9[136087]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:14:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28726 DF PROTO=TCP SPT=45208 DPT=9100 SEQ=1652285120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011669830000000001030307) Feb 23 04:14:23 localhost python3.9[136181]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:14:24 localhost python3.9[136277]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 23 04:14:25 localhost python3.9[136331]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 23 04:14:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32359 DF PROTO=TCP SPT=50960 DPT=9101 SEQ=3776133252 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011677830000000001030307) Feb 23 04:14:29 localhost python3.9[136425]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] 
gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 23 04:14:30 localhost python3.9[136572]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:14:31 localhost python3.9[136664]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:14:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12068 DF PROTO=TCP SPT=59526 DPT=9105 SEQ=2447157187 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01168C240000000001030307) Feb 23 04:14:32 localhost python3.9[136767]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:14:32 localhost python3.9[136815]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:14:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c 
MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12069 DF PROTO=TCP SPT=59526 DPT=9105 SEQ=2447157187 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011690430000000001030307) Feb 23 04:14:33 localhost python3.9[136907]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:14:33 localhost python3.9[136955]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:14:34 localhost python3.9[137047]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Feb 23 04:14:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12070 DF PROTO=TCP SPT=59526 DPT=9105 SEQ=2447157187 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011698430000000001030307) Feb 23 04:14:35 localhost python3.9[137139]: 
ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Feb 23 04:14:35 localhost python3.9[137231]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Feb 23 04:14:36 localhost python3.9[137323]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Feb 23 04:14:37 localhost python3.9[137415]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None 
download_dir=None list=None nobest=None releasever=None Feb 23 04:14:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12071 DF PROTO=TCP SPT=59526 DPT=9105 SEQ=2447157187 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0116A8040000000001030307) Feb 23 04:14:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8912 DF PROTO=TCP SPT=42016 DPT=9101 SEQ=2071679521 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0116B0900000000001030307) Feb 23 04:14:41 localhost python3.9[137509]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:14:42 localhost python3.9[137603]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:14:42 localhost python3.9[137695]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:14:43 localhost python3.9[137787]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:14:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4036 DF PROTO=TCP SPT=46784 DPT=9102 SEQ=1044259311 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A0116BB830000000001030307) Feb 23 04:14:44 localhost python3.9[137880]: ansible-service_facts Invoked Feb 23 04:14:44 localhost network[137897]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 23 04:14:44 localhost network[137898]: 'network-scripts' will be removed from distribution in near future. Feb 23 04:14:44 localhost network[137899]: It is advised to switch to 'NetworkManager' instead for network management. Feb 23 04:14:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:14:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12072 DF PROTO=TCP SPT=59526 DPT=9105 SEQ=2447157187 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0116C7830000000001030307) Feb 23 04:14:50 localhost python3.9[138221]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 23 04:14:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23946 DF PROTO=TCP SPT=37240 DPT=9100 SEQ=772573709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0116D6CF0000000001030307) Feb 23 04:14:54 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23948 DF PROTO=TCP SPT=37240 DPT=9100 SEQ=772573709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0116E2C30000000001030307) Feb 23 04:14:55 localhost python3.9[138315]: ansible-package_facts Invoked with manager=['auto'] strategy=first Feb 23 04:14:55 localhost sshd[138330]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:14:56 localhost python3.9[138409]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:14:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8916 DF PROTO=TCP SPT=42016 DPT=9101 SEQ=2071679521 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0116ED840000000001030307) Feb 23 04:14:57 localhost python3.9[138484]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838096.0506048-653-183970142215839/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:14:58 localhost python3.9[138578]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:14:58 localhost python3.9[138653]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 
src=/home/zuul/.ansible/tmp/ansible-tmp-1771838097.5608695-697-98359331269716/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:00 localhost python3.9[138747]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:01 localhost python3.9[138841]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 23 04:15:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13354 DF PROTO=TCP SPT=41216 DPT=9105 SEQ=1969402000 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011701540000000001030307) Feb 23 04:15:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13355 DF PROTO=TCP SPT=41216 DPT=9105 SEQ=1969402000 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011705440000000001030307) Feb 23 04:15:03 localhost python3.9[138895]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:15:05 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13356 DF PROTO=TCP SPT=41216 DPT=9105 SEQ=1969402000 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01170D430000000001030307) Feb 23 04:15:05 localhost python3.9[139051]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 23 04:15:06 localhost python3.9[139120]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 04:15:06 localhost chronyd[26162]: chronyd exiting Feb 23 04:15:06 localhost systemd[1]: Stopping NTP client/server... Feb 23 04:15:06 localhost systemd[1]: chronyd.service: Deactivated successfully. Feb 23 04:15:06 localhost systemd[1]: Stopped NTP client/server. Feb 23 04:15:06 localhost systemd[1]: Starting NTP client/server... Feb 23 04:15:06 localhost chronyd[139128]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Feb 23 04:15:06 localhost chronyd[139128]: Frequency -30.398 +/- 0.213 ppm read from /var/lib/chrony/drift Feb 23 04:15:06 localhost chronyd[139128]: Loaded seccomp filter (level 2) Feb 23 04:15:06 localhost systemd[1]: Started NTP client/server. Feb 23 04:15:07 localhost systemd-logind[759]: Session 42 logged out. Waiting for processes to exit. Feb 23 04:15:07 localhost systemd[1]: session-42.scope: Deactivated successfully. Feb 23 04:15:07 localhost systemd[1]: session-42.scope: Consumed 28.437s CPU time. Feb 23 04:15:07 localhost systemd-logind[759]: Removed session 42. 
Feb 23 04:15:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13357 DF PROTO=TCP SPT=41216 DPT=9105 SEQ=1969402000 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01171D040000000001030307) Feb 23 04:15:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34623 DF PROTO=TCP SPT=57946 DPT=9101 SEQ=674789549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011725C00000000001030307) Feb 23 04:15:12 localhost sshd[139144]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:15:12 localhost systemd-logind[759]: New session 43 of user zuul. Feb 23 04:15:12 localhost systemd[1]: Started Session 43 of User zuul. Feb 23 04:15:13 localhost python3.9[139237]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:15:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30459 DF PROTO=TCP SPT=45704 DPT=9882 SEQ=3227279181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011731840000000001030307) Feb 23 04:15:15 localhost python3.9[139333]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:15 localhost python3.9[139438]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json 
follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:16 localhost python3.9[139486]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.jg2j9pln recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:17 localhost python3.9[139578]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13358 DF PROTO=TCP SPT=41216 DPT=9105 SEQ=1969402000 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01173D830000000001030307) Feb 23 04:15:18 localhost python3.9[139653]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838116.8470833-139-132693216617779/.source _original_basename=.vjs_431r follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:18 localhost python3.9[139745]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:15:19 localhost python3.9[139837]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:19 localhost python3.9[139910]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771838118.9139593-211-148067405424140/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 23 04:15:20 localhost python3.9[140002]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:21 localhost python3.9[140075]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771838120.04155-211-215050490154875/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 23 04:15:21 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12573 DF PROTO=TCP SPT=50262 DPT=9100 SEQ=3907143992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01174BFF0000000001030307) Feb 23 04:15:21 localhost python3.9[140167]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:22 localhost python3.9[140259]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:22 localhost python3.9[140332]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838121.9524236-322-163600608203754/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23951 DF PROTO=TCP SPT=37240 DPT=9100 SEQ=772573709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011753830000000001030307) Feb 23 04:15:23 localhost python3.9[140424]: 
ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:24 localhost python3.9[140497]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838123.1867478-367-110132103406549/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:25 localhost python3.9[140589]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:15:25 localhost systemd[1]: Reloading. Feb 23 04:15:25 localhost systemd-rc-local-generator[140610]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:15:25 localhost systemd-sysv-generator[140615]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:15:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:15:25 localhost systemd[1]: Reloading. Feb 23 04:15:25 localhost systemd-sysv-generator[140653]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:15:25 localhost systemd-rc-local-generator[140649]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:15:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:15:25 localhost systemd[1]: Starting EDPM Container Shutdown... Feb 23 04:15:25 localhost systemd[1]: Finished EDPM Container Shutdown. Feb 23 04:15:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34627 DF PROTO=TCP SPT=57946 DPT=9101 SEQ=674789549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011761830000000001030307) Feb 23 04:15:26 localhost python3.9[140757]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:27 localhost python3.9[140830]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838126.2459643-436-107087288015721/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:27 localhost python3.9[140922]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
get_selinux_context=False Feb 23 04:15:28 localhost python3.9[140995]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838127.4121358-481-23236287571000/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:29 localhost python3.9[141087]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:15:29 localhost systemd[1]: Reloading. Feb 23 04:15:29 localhost systemd-rc-local-generator[141114]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:15:29 localhost systemd-sysv-generator[141118]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:15:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:15:29 localhost systemd[1]: Starting Create netns directory... Feb 23 04:15:29 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 23 04:15:29 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 23 04:15:29 localhost systemd[1]: Finished Create netns directory. 
Feb 23 04:15:30 localhost python3.9[141219]: ansible-ansible.builtin.service_facts Invoked Feb 23 04:15:30 localhost network[141236]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 23 04:15:30 localhost network[141237]: 'network-scripts' will be removed from distribution in near future. Feb 23 04:15:30 localhost network[141238]: It is advised to switch to 'NetworkManager' instead for network management. Feb 23 04:15:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42483 DF PROTO=TCP SPT=46006 DPT=9105 SEQ=267221027 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011776830000000001030307) Feb 23 04:15:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42484 DF PROTO=TCP SPT=46006 DPT=9105 SEQ=267221027 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01177A830000000001030307) Feb 23 04:15:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:15:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42485 DF PROTO=TCP SPT=46006 DPT=9105 SEQ=267221027 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011782840000000001030307) Feb 23 04:15:36 localhost python3.9[141440]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:37 localhost python3.9[141515]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838136.1960876-604-191961810872440/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:38 localhost python3.9[141608]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 04:15:38 localhost systemd[1]: Reloading OpenSSH server daemon... Feb 23 04:15:38 localhost systemd[1]: Reloaded OpenSSH server daemon. 
Feb 23 04:15:38 localhost sshd[121016]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:15:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42486 DF PROTO=TCP SPT=46006 DPT=9105 SEQ=267221027 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011792440000000001030307) Feb 23 04:15:39 localhost python3.9[141704]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:39 localhost python3.9[141796]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:40 localhost python3.9[141869]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838139.3463624-697-25268513244037/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40992 DF PROTO=TCP SPT=39334 DPT=9101 SEQ=926765396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 
OPT (020405500402080A01179AF00000000001030307) Feb 23 04:15:41 localhost python3.9[141961]: ansible-community.general.timezone Invoked with name=UTC hwclock=None Feb 23 04:15:41 localhost systemd[1]: Starting Time & Date Service... Feb 23 04:15:41 localhost systemd[1]: Started Time & Date Service. Feb 23 04:15:42 localhost python3.9[142057]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:43 localhost sshd[142150]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:15:43 localhost python3.9[142149]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:43 localhost python3.9[142224]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838142.828714-802-84072461595575/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10368 DF PROTO=TCP SPT=55544 DPT=9102 SEQ=2224464395 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A0117A5830000000001030307) Feb 23 04:15:44 localhost python3.9[142316]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:45 localhost python3.9[142389]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838144.0117207-847-202999972940136/.source.yaml _original_basename=.fyhqwlkg follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:45 localhost python3.9[142481]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:46 localhost python3.9[142556]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838145.2479289-893-86703664730463/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:47 localhost python3.9[142648]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None 
Feb 23 04:15:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42487 DF PROTO=TCP SPT=46006 DPT=9105 SEQ=267221027 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0117B3840000000001030307) Feb 23 04:15:47 localhost python3.9[142741]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:15:48 localhost python3[142834]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Feb 23 04:15:49 localhost python3.9[142926]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:49 localhost python3.9[142999]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838148.9730213-1009-101674612522770/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:50 localhost python3.9[143091]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 
ID=33903 DF PROTO=TCP SPT=46816 DPT=9100 SEQ=2300685785 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0117C12F0000000001030307) Feb 23 04:15:51 localhost python3.9[143164]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838150.1835139-1054-33567097275501/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:51 localhost python3.9[143256]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:52 localhost python3.9[143329]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838151.4576783-1099-215056736055508/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:53 localhost python3.9[143421]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:53 localhost python3.9[143494]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838152.6922228-1144-218308719649928/.source.nft 
follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:54 localhost python3.9[143586]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:15:54 localhost python3.9[143659]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838153.8982127-1189-74954837061234/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23952 DF PROTO=TCP SPT=37240 DPT=9100 SEQ=772573709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0117D1830000000001030307) Feb 23 04:15:55 localhost python3.9[143751]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:56 localhost python3.9[143843]: ansible-ansible.legacy.command Invoked with _raw_params=set -o 
pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:15:57 localhost python3.9[143938]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:58 localhost python3.9[144031]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:58 localhost python3.9[144123]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:15:59 localhost python3.9[144215]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs 
opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Feb 23 04:16:00 localhost python3.9[144308]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Feb 23 04:16:00 localhost systemd-logind[759]: Session 43 logged out. Waiting for processes to exit. Feb 23 04:16:00 localhost systemd[1]: session-43.scope: Deactivated successfully. Feb 23 04:16:00 localhost systemd[1]: session-43.scope: Consumed 28.121s CPU time. Feb 23 04:16:00 localhost systemd-logind[759]: Removed session 43. Feb 23 04:16:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7458 DF PROTO=TCP SPT=33866 DPT=9105 SEQ=174450617 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0117EBB40000000001030307) Feb 23 04:16:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13360 DF PROTO=TCP SPT=41216 DPT=9105 SEQ=1969402000 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0117FB830000000001030307) Feb 23 04:16:06 localhost sshd[144388]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:16:06 localhost systemd-logind[759]: New session 44 of user zuul. Feb 23 04:16:06 localhost systemd[1]: Started Session 44 of User zuul. Feb 23 04:16:07 localhost python3.9[144498]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. 
suffix= path=None Feb 23 04:16:09 localhost python3.9[144590]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:16:10 localhost python3.9[144684]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts Feb 23 04:16:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26572 DF PROTO=TCP SPT=56072 DPT=9101 SEQ=823247467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0118101F0000000001030307) Feb 23 04:16:11 localhost python3.9[144776]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.9miy6flx follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:16:11 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. 
Feb 23 04:16:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55127 DF PROTO=TCP SPT=58666 DPT=9102 SEQ=2188363118 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011813D60000000001030307) Feb 23 04:16:12 localhost python3.9[144855]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.9miy6flx mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838171.1445546-190-79139599965511/.source.9miy6flx _original_basename=.o96p3h4z follow=False checksum=d1d6d40786432d7ee1aec581e269930dfc2795e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:16:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18169 DF PROTO=TCP SPT=60640 DPT=9882 SEQ=4096300429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0118159F0000000001030307) Feb 23 04:16:14 localhost python3.9[144947]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:16:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34629 DF PROTO=TCP SPT=57946 DPT=9101 SEQ=674789549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01181F840000000001030307) Feb 23 04:16:15 localhost python3.9[145039]: ansible-ansible.builtin.blockinfile Invoked with block=np0005626466.localdomain,192.168.122.108,np0005626466* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQD4dg5LfbOyIHJudQjfDyIcqYXRqMUeYQIpjQPmNS0Tl7/EpBaYixjqlNovKIWOwkS4E2n4hwPLSTGSihYb5BeUDw32T80RumycS2tjBCSLiuq93xpTOaL2X+7wykkOSfY5xya13qrTg0ROJip0B6PSSF+Rn28SAKLh91euCdRaxWTAMeOSTP9WeCA3d0gsgb4xSMMWZxR4o1BU2bixjAcJHAlKYDc1OGpKkirRoziu9Y4nq2lmbwTg5HiS8STVkqyGHba9k6IC0eF2ZmT6M2thoHatYVtjuUeEE9bSvaAFB8oSI9Np6+OaluvuoKJYjRA3dzEQOi4ft/wwUrJfvyypDAxKBkxo7lCWIDEBK5Zb9BVoo68psz2IVPNGNZJtKXiq58CAqZTR02l/wEq4wB1/hp7ZW+ZMnHQUq1FpGITIA89KZeL9xNlnHqYak58B2GCYgK6OdvWktr4WHN8nbEmwZvaTrijZvnww7h2FQG4BMcSlO6AWKAdjksJZlVDYLJs=#012np0005626466.localdomain,192.168.122.108,np0005626466* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIiaRdmYDJrMg8atO+fnuqzJdDL1JaVGt341/g0QTv04#012np0005626466.localdomain,192.168.122.108,np0005626466* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGMyxprJk2KMNU4/eWUo8EdX2W79HO4pGHl3Ze8LEhDdSbCzY8uy6KD6met+RL0bD767zsXbqEV/9peHg1x5qjM=#012np0005626459.localdomain,192.168.122.103,np0005626459* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9VsrIfV6Z4AiMtHfmjOpcBCt5sMsGmP0fOSak1UBP4r9lW4eYyoJY7Rtt1LDAcbGqdL3Nh3yc8ub0ekpXF6MA0vKucLb+jtjexv6t21W2grJ+ucwsvDhTDhDXmOUwD5G7A9Zj2WDqt/DN4DxeEqvQ6v1dSQaG+17BVPvM7mhgd5CSYOdUphCC81TPZgj3xyK31Q89biIS6pCBSKnsyN7qcU38bFGvRN0sTFaFt9KrIUfJJdcAZudw5Q/R775pmaaeHTSVPL05gE7dyz8RicEpenh6X0aZCOVt0+4VBnfXXSIL9QIwjrarPPKRdtmQY7dZ3dVNI1ZWA5YOl0y6R3fmxaRV5y1ZkDW6vG0463hYjKaAVqILAAPZGzhuzL7/1zxIv0guUB58tOUrCkkPIRzd6NQLL2j8L7RLIj3bZjG2xf0WiierxPsCEhl3wmdIVRUReE6jYalNGlscGUr1JWproKoaQqfck0OWhGy7jCCe8Gd8a/pr7jtg+X3bEMQ3HAc=#012np0005626459.localdomain,192.168.122.103,np0005626459* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB1G62+/VP1cWp/d17CbWxlG5w4IEqmUSSc9SyShSsKo#012np0005626459.localdomain,192.168.122.103,np0005626459* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF1G+CYZWMPROBz875F8bjcexPOeozjteUw/Fu+xHwwpYK4DPmCNq+JbW1AmCaltVkHRnMMPqLBom+3c+ekTh4E=#012np0005626460.localdomain,192.168.122.104,np0005626460* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCeQmwl5IUCA7h6xphf+o3WARi0Xlj+0K08ltN/FCX7iF0EALCfDqtKOHz7wv5gS04Zx4aeNfcVHv9bHLRJxTPzliSNVutqA7vdFa0R/kRMdNzkqSOCuJ64sQ8GwSOHSrcFy7qC87BuP6xB9atSBjpAEB4NZOuXbvmSN/dCa/nNpUWoWNNg3eR5AalrExCptFYZ4E7YWvJ6HdZpr1QhcAJW0V1y4+u4FfzxHT2SQfGmua4TFHH1lUMiMrgAoELLe+pYdnWooEhRlkPulWy/wOyNz7aCCDP462XBhCc0CmiBDRwMBaJISck1pJCOIksvu8TYa6Fp8aayZqJvbUJYl5C1Z/o+zgHMTjeec0Th5GIuw9XUJkkx8TT5Fh7aWJvX9BbHlMaJjAqc+G/wiIImvKlsuIsovU6TH0P/XiysoWXeUWM7JqR8Y/05+yELy+xAMKT7PfEXE1fWOlGcCJsarLYGhh/7Jypwfh8Y/wOtYdKOGODxDnzq2f2VySsEiAf0EL0=#012np0005626460.localdomain,192.168.122.104,np0005626460* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILDN/X/h1SJivdlJg6UrBmlF7YgESQ24kCjH//omBjn3#012np0005626460.localdomain,192.168.122.104,np0005626460* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLjVObKHLJCn+kOorv0tRLu5M/EwGgxQnczR69veoTwgXNRB/xCzi30v7fJ2uWbQGJXou02P5IiwAQmFSv1vKpE=#012np0005626465.localdomain,192.168.122.107,np0005626465* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUc8l2oYgfdO7xb3vN27co3Q/sFNU6Rw5wThiW1JMfeIzI90ZzS/L+BpsDsX8q2CW9QOHXrbUormpGsiNnix5j1P29Jc6e9A2mDlipXBrFSUiVZa8UOL03lFSz4nElapkASin2GCdHqy7//gGdQMKRP62VXpdhofb7i/N/gGoV5hSc8Q36KFDbWpvPkhD5H8nZtAfyxM99KwlC62D8jSN+gdoRtMRFPQTtyvyskyrgnXGC6xV71WTa6LJ6Meo7tfj4JlvDAWwlD+f9Ruu2ty2aHd2feVVKYvxZ4Z45iSfJnNxRFJvu1QOY0IU4Fj942leKwr6f0B5ogPFlTI7wRrAB1d9tri1WW2aL1AqYhdZscWi0VArYxLQr7BCVqz8KgFIzjbPoJ7uYnWcuDSiWlC1NJVO7Ij2natf8wZyvSyH+vydamkyoaNwxMnm4qs0/rvjwL49MdrHB79rXjHYJpt/JCBvn9a/rh5KqVH40P00DP35H71zyHPCSu1L20S/wY1k=#012np0005626465.localdomain,192.168.122.107,np0005626465* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMNM6I52u2PlIbUuPV1wF+vgd5UIhGpYLByAkJDxsiFm#012np0005626465.localdomain,192.168.122.107,np0005626465* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOxvbePT9GQElB7TGQuLRzkjxtXeKA7IbYbWBmgWolf09tVtPZHcG12wdG6fePoATmwyX4PIJb5sC28KiqtOgIE=#012np0005626461.localdomain,192.168.122.105,np0005626461* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDDBCzU24t9gA5R+exm4rHJ2VytHuq8uUoKuu6SZ07dskKR77n7TwlsZhsDjpzwsddHd+lvsfvOVmolxjJsCmq7LJRMGA/mczHXsGGb43YPZPKsiJ6KMPDORy5/ihhnqixBYVmBGtdPu/Hh/udGnymZgR/RYGltDDHoCfGGiEcHJSIuf/Bv2Uv4xFnxFjDrWQFrkJ5Grq1xC7cGXgC3gAiTCjGHkG9rb/oyTUjjM8LaaRYIjeoDQZu1/8y5pl6cnhW21VTA+u55SkSimb/g5oOuSmrv899iHFwb54uLINXvA4aTtduUnxNQBVRyFvWa3yCZXVJeYlcVP8Q9tljn9anN1aISnS311Jmay6zUY927bxnzrpkwaV7Ggwtvi6vlVy84ZvOJ/IJ2boDiMujh1ZpT3bxXG3Oy0EjfBVbpkS6r2MbGTPj/xWnosJ6JNVbb9LW7Ftfi3/NFfAb7PpTgY036DA8LYoYIfqxVJUhlo5fJjqqOLa/zbvZVwrFCG+Zm160=#012np0005626461.localdomain,192.168.122.105,np0005626461* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID3BKJ5iitZOMOyRmWwrIHEgrBaSUAXcN/yddsH5p67P#012np0005626461.localdomain,192.168.122.105,np0005626461* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCX+ELPnNre0Bl1NdaYE8R/rtodFHjWfK7n06TW2wvAyLhge/A+53E2vGTXA9jfYXEEH2g0XKcYHlkb3dM70CTQ=#012np0005626463.localdomain,192.168.122.106,np0005626463* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/Caj4zYKd24ctvaRU1Hf9nT058OF4bRnDJ3bHimmkyIL7cccXAxo3lx50wZHWRYBhF5Wes6TmqnUTTK1h5wVdI8f7YtQ9IyMIlfoEiTThF5PgODVuRYq+YGjFIy7MTPyBnB2428aT4dlYqHSuxK2gL6ALlCJHNyeh3RW3jCOG89veDoRmbqHGoaD+xPRnfsdHLoLFNfxT4UJiKRuqsEd5fNtc392ROSa5XM3PPIs3YTypYmpfFHs1B1j+y6oZV8Ha/QXqURpI7/aJmfnDzXLMsLWp4GRpkwzljvNp87S5HL+kJMo79n0Vmh2JdN1orNP/4A2t/TENckHbrZCm+YmPqUqvpHkAZfFfmvP62YZTPq/qOjBMMq6ulGSHd2I4XfE7NNZRKoS3G4HVlBb0ONS13PaWx9rrJCRlF64L1dHSt9zpKrvRbWkSdXA0PwwehrU5/OBo1IY4WsRlWmPeET1/dFWiIr1t9uGjp5vmACAx7rnC6G5qSEhQ3/k1Wa57k/k=#012np0005626463.localdomain,192.168.122.106,np0005626463* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPpIpPeSZdEjLEgb7zYHVhKnwBDipROOVgmUJe3QzecH#012np0005626463.localdomain,192.168.122.106,np0005626463* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJUV/eK8X671P+PPyOxoifS2hhEKYup7ygc301iPJDoOs3TgLodw2jNy/egXEc0x3WdkTwXltmBlHqmWw5ro05Q=#012 create=True mode=0644 path=/tmp/ansible.9miy6flx state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False 
marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:16:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10369 DF PROTO=TCP SPT=55544 DPT=9102 SEQ=2224464395 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011823840000000001030307) Feb 23 04:16:17 localhost python3.9[145131]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.9miy6flx' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:16:18 localhost python3.9[145225]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.9miy6flx state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:16:19 localhost systemd[1]: session-44.scope: Deactivated successfully. Feb 23 04:16:19 localhost systemd[1]: session-44.scope: Consumed 4.319s CPU time. Feb 23 04:16:19 localhost systemd-logind[759]: Session 44 logged out. Waiting for processes to exit. Feb 23 04:16:19 localhost systemd-logind[759]: Removed session 44. 
Feb 23 04:16:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37194 DF PROTO=TCP SPT=37886 DPT=9100 SEQ=185682126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0118365E0000000001030307) Feb 23 04:16:24 localhost sshd[145240]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:16:24 localhost systemd-logind[759]: New session 45 of user zuul. Feb 23 04:16:24 localhost systemd[1]: Started Session 45 of User zuul. Feb 23 04:16:25 localhost python3.9[145333]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:16:27 localhost python3.9[145429]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Feb 23 04:16:28 localhost python3.9[145523]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 04:16:29 localhost python3.9[145616]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:16:30 localhost python3.9[145709]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:16:31 localhost sshd[145804]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:16:31 localhost python3.9[145803]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft 
/etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:16:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28044 DF PROTO=TCP SPT=53286 DPT=9105 SEQ=1255963361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011860E30000000001030307) Feb 23 04:16:32 localhost python3.9[145900]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:16:32 localhost systemd[1]: session-45.scope: Deactivated successfully. Feb 23 04:16:32 localhost systemd[1]: session-45.scope: Consumed 3.952s CPU time. Feb 23 04:16:32 localhost systemd-logind[759]: Session 45 logged out. Waiting for processes to exit. Feb 23 04:16:32 localhost systemd-logind[759]: Removed session 45. 
Feb 23 04:16:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28045 DF PROTO=TCP SPT=53286 DPT=9105 SEQ=1255963361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011865040000000001030307) Feb 23 04:16:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28046 DF PROTO=TCP SPT=53286 DPT=9105 SEQ=1255963361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01186D040000000001030307) Feb 23 04:16:37 localhost sshd[145915]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:16:37 localhost systemd-logind[759]: New session 46 of user zuul. Feb 23 04:16:37 localhost systemd[1]: Started Session 46 of User zuul. Feb 23 04:16:38 localhost python3.9[146008]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:16:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28047 DF PROTO=TCP SPT=53286 DPT=9105 SEQ=1255963361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01187CC30000000001030307) Feb 23 04:16:39 localhost python3.9[146104]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 23 04:16:40 localhost python3.9[146158]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False 
update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 23 04:16:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45666 DF PROTO=TCP SPT=38282 DPT=9101 SEQ=3339376431 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011885500000000001030307) Feb 23 04:16:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5749 DF PROTO=TCP SPT=55446 DPT=9102 SEQ=907995503 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011889060000000001030307) Feb 23 04:16:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45667 DF PROTO=TCP SPT=38282 DPT=9101 SEQ=3339376431 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011889430000000001030307) Feb 23 04:16:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18380 DF PROTO=TCP SPT=32772 DPT=9882 SEQ=2579218229 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01188ACF0000000001030307) Feb 23 04:16:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45668 DF PROTO=TCP SPT=38282 DPT=9101 SEQ=3339376431 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011891430000000001030307) Feb 23 04:16:44 localhost sshd[146207]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:16:45 
localhost python3.9[146252]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:16:46 localhost python3.9[146345]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:16:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28048 DF PROTO=TCP SPT=53286 DPT=9105 SEQ=1255963361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01189D830000000001030307) Feb 23 04:16:47 localhost python3.9[146437]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:16:48 localhost python3.9[146529]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated#012Core libraries or services have been updated since boot-up:#012 * systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 
path=/var/lib/openstack/reboot_required/needs_restarting state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:16:49 localhost python3.9[146619]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Feb 23 04:16:50 localhost python3.9[146709]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:16:51 localhost python3.9[146801]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:16:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1886 DF PROTO=TCP SPT=48542 DPT=9100 SEQ=3503688573 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0118AB8E0000000001030307) Feb 23 04:16:51 localhost systemd-logind[759]: Session 46 logged out. Waiting for processes to exit. Feb 23 04:16:51 localhost systemd[1]: session-46.scope: Deactivated successfully. Feb 23 04:16:51 localhost systemd[1]: session-46.scope: Consumed 8.933s CPU time. Feb 23 04:16:51 localhost systemd-logind[759]: Removed session 46. 
Feb 23 04:16:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1888 DF PROTO=TCP SPT=48542 DPT=9100 SEQ=3503688573 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0118B7830000000001030307) Feb 23 04:16:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45670 DF PROTO=TCP SPT=38282 DPT=9101 SEQ=3339376431 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0118C1830000000001030307) Feb 23 04:16:57 localhost sshd[146818]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:16:57 localhost systemd-logind[759]: New session 47 of user zuul. Feb 23 04:16:57 localhost systemd[1]: Started Session 47 of User zuul. Feb 23 04:16:58 localhost python3.9[146911]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:17:00 localhost python3.9[147007]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:17:01 localhost python3.9[147099]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:17:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17033 DF PROTO=TCP SPT=37048 DPT=9105 SEQ=3509137871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0118D6130000000001030307)
Feb 23 04:17:01 localhost python3.9[147172]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838220.7517116-178-138588131926231/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=704aa8bdbea515a6da96c2b63bce412faf6bceda backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:02 localhost python3.9[147264]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:17:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17034 DF PROTO=TCP SPT=37048 DPT=9105 SEQ=3509137871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0118DA030000000001030307)
Feb 23 04:17:03 localhost python3.9[147356]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:03 localhost python3.9[147429]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838222.8140864-249-124741230644876/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=704aa8bdbea515a6da96c2b63bce412faf6bceda backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:04 localhost python3.9[147521]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:17:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17035 DF PROTO=TCP SPT=37048 DPT=9105 SEQ=3509137871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0118E2040000000001030307)
Feb 23 04:17:05 localhost python3.9[147613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:05 localhost python3.9[147686]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838224.602966-319-63973197980257/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=704aa8bdbea515a6da96c2b63bce412faf6bceda backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:06 localhost python3.9[147778]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:17:07 localhost python3.9[147870]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:07 localhost python3.9[147973]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838226.5753744-391-25322242669587/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=704aa8bdbea515a6da96c2b63bce412faf6bceda backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:08 localhost python3.9[148097]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:17:08 localhost python3.9[148203]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17036 DF PROTO=TCP SPT=37048 DPT=9105 SEQ=3509137871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0118F1C40000000001030307)
Feb 23 04:17:09 localhost python3.9[148277]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838228.490248-465-95356349434405/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=704aa8bdbea515a6da96c2b63bce412faf6bceda backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:10 localhost python3.9[148369]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:17:10 localhost python3.9[148461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3519 DF PROTO=TCP SPT=40442 DPT=9101 SEQ=4270581888 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0118FA800000000001030307)
Feb 23 04:17:11 localhost python3.9[148534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838230.3677728-538-120220109774190/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=704aa8bdbea515a6da96c2b63bce412faf6bceda backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:11 localhost python3.9[148626]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:17:12 localhost python3.9[148718]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:13 localhost python3.9[148791]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838232.136906-607-40803602726039/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=704aa8bdbea515a6da96c2b63bce412faf6bceda backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:13 localhost python3.9[148883]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:17:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5754 DF PROTO=TCP SPT=55446 DPT=9102 SEQ=907995503 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011905830000000001030307)
Feb 23 04:17:14 localhost python3.9[148975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:14 localhost python3.9[149048]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838233.9122248-679-59960635813932/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=704aa8bdbea515a6da96c2b63bce412faf6bceda backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:15 localhost systemd[1]: session-47.scope: Deactivated successfully.
Feb 23 04:17:15 localhost systemd[1]: session-47.scope: Consumed 11.549s CPU time.
Feb 23 04:17:15 localhost systemd-logind[759]: Session 47 logged out. Waiting for processes to exit.
Feb 23 04:17:15 localhost systemd-logind[759]: Removed session 47.
Feb 23 04:17:16 localhost chronyd[139128]: Selected source 216.232.132.18 (pool.ntp.org)
Feb 23 04:17:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17037 DF PROTO=TCP SPT=37048 DPT=9105 SEQ=3509137871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011911830000000001030307)
Feb 23 04:17:19 localhost sshd[149063]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:17:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54473 DF PROTO=TCP SPT=44848 DPT=9100 SEQ=183277257 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011920BF0000000001030307)
Feb 23 04:17:21 localhost sshd[149065]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:17:21 localhost systemd-logind[759]: New session 48 of user zuul.
Feb 23 04:17:21 localhost systemd[1]: Started Session 48 of User zuul.
Feb 23 04:17:22 localhost python3.9[149160]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:22 localhost python3.9[149252]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:23 localhost python3.9[149325]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838242.2298956-58-256109476337110/.source.conf _original_basename=ceph.conf follow=False checksum=00be6682e39722cc7ebf9f74611435726ea0928d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54475 DF PROTO=TCP SPT=44848 DPT=9100 SEQ=183277257 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01192CC30000000001030307)
Feb 23 04:17:24 localhost python3.9[149417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:24 localhost python3.9[149490]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838243.666417-58-127377828112287/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=bb97f2335ebfccbfb2bd8d50bbb589ce7e034c5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:25 localhost systemd[1]: session-48.scope: Deactivated successfully.
Feb 23 04:17:25 localhost systemd[1]: session-48.scope: Consumed 2.149s CPU time.
Feb 23 04:17:25 localhost systemd-logind[759]: Session 48 logged out. Waiting for processes to exit.
Feb 23 04:17:25 localhost systemd-logind[759]: Removed session 48.
Feb 23 04:17:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3523 DF PROTO=TCP SPT=40442 DPT=9101 SEQ=4270581888 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011937830000000001030307)
Feb 23 04:17:30 localhost sshd[149505]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:17:30 localhost systemd-logind[759]: New session 49 of user zuul.
Feb 23 04:17:30 localhost systemd[1]: Started Session 49 of User zuul.
Feb 23 04:17:31 localhost python3.9[149598]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 04:17:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17080 DF PROTO=TCP SPT=51704 DPT=9105 SEQ=4228746474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01194B440000000001030307)
Feb 23 04:17:32 localhost python3.9[149694]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:17:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17081 DF PROTO=TCP SPT=51704 DPT=9105 SEQ=4228746474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01194F430000000001030307)
Feb 23 04:17:33 localhost python3.9[149786]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:17:34 localhost python3.9[149876]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 04:17:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17082 DF PROTO=TCP SPT=51704 DPT=9105 SEQ=4228746474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011957430000000001030307)
Feb 23 04:17:35 localhost python3.9[149968]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 23 04:17:36 localhost python3.9[150060]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 23 04:17:37 localhost python3.9[150114]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 23 04:17:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17083 DF PROTO=TCP SPT=51704 DPT=9105 SEQ=4228746474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011967040000000001030307)
Feb 23 04:17:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27805 DF PROTO=TCP SPT=33174 DPT=9101 SEQ=3834955945 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01196FAF0000000001030307)
Feb 23 04:17:41 localhost python3.9[150208]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 23 04:17:43 localhost python3[150303]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012 rule:#012 proto: udp#012 dport: 4789#012- rule_name: 119 neutron geneve networks#012 rule:#012 proto: udp#012 dport: 6081#012 state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: OUTPUT#012 jump: NOTRACK#012 action: append#012 state: []#012- rule_name: 121 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: PREROUTING#012 jump: NOTRACK#012 action: append#012 state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Feb 23 04:17:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56177 DF PROTO=TCP SPT=43326 DPT=9882 SEQ=979161222 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01197B840000000001030307)
Feb 23 04:17:44 localhost python3.9[150395]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:45 localhost python3.9[150487]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:45 localhost python3.9[150535]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:46 localhost python3.9[150627]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:47 localhost python3.9[150675]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.lfhpmy43 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17084 DF PROTO=TCP SPT=51704 DPT=9105 SEQ=4228746474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011987830000000001030307)
Feb 23 04:17:47 localhost python3.9[150767]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:48 localhost python3.9[150815]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:48 localhost python3.9[150907]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:17:49 localhost python3[151000]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 23 04:17:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30604 DF PROTO=TCP SPT=33854 DPT=9100 SEQ=3170865834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011995EF0000000001030307)
Feb 23 04:17:51 localhost python3.9[151092]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:51 localhost python3.9[151167]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838270.7847064-427-88100771176857/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54478 DF PROTO=TCP SPT=44848 DPT=9100 SEQ=183277257 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01199D830000000001030307)
Feb 23 04:17:53 localhost python3.9[151259]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:53 localhost python3.9[151334]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838272.6638625-472-72933182302017/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:54 localhost python3.9[151426]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:54 localhost python3.9[151501]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838273.9239926-517-216918051394545/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:55 localhost python3.9[151593]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:56 localhost python3.9[151668]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838275.177975-562-254823928018890/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27809 DF PROTO=TCP SPT=33174 DPT=9101 SEQ=3834955945 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0119AB840000000001030307)
Feb 23 04:17:57 localhost python3.9[151760]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:17:57 localhost python3.9[151835]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838276.5975444-607-138616043258737/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:58 localhost python3.9[151927]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:17:58 localhost python3.9[152019]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:18:00 localhost python3.9[152114]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:18:01 localhost python3.9[152206]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:18:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58387 DF PROTO=TCP SPT=46296 DPT=9105 SEQ=3132276143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0119C0730000000001030307)
Feb 23 04:18:02 localhost python3.9[152299]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 23 04:18:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58388 DF PROTO=TCP SPT=46296 DPT=9105 SEQ=3132276143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0119C4830000000001030307)
Feb 23 04:18:03 localhost python3.9[152393]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:18:04 localhost python3.9[152488]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:18:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58389 DF PROTO=TCP SPT=46296 DPT=9105 SEQ=3132276143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0119CC830000000001030307)
Feb 23 04:18:05 localhost python3.9[152578]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 04:18:07 localhost python3.9[152671]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=np0005626465.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:6e:1d:57:37" external_ids:ovn-encap-ip=172.19.0.107 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:18:07 localhost ovs-vsctl[152672]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=np0005626465.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:6e:1d:57:37 external_ids:ovn-encap-ip=172.19.0.107 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Feb 23 04:18:08 localhost python3.9[152764]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:18:08 localhost python3.9[152857]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 23 04:18:08 localhost sshd[152874]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:18:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58390 DF PROTO=TCP SPT=46296 DPT=9105 SEQ=3132276143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0119DC440000000001030307)
Feb 23 04:18:09 localhost python3.9[152983]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:18:10 localhost python3.9[153106]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:18:10 localhost python3.9[153169]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:18:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4285 DF PROTO=TCP SPT=38856 DPT=9101 SEQ=2692567598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0119E4E00000000001030307)
Feb 23 04:18:11 localhost python3.9[153261]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:18:12 localhost python3.9[153309]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:18:12 localhost python3.9[153401]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:18:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16555 DF PROTO=TCP SPT=57970 DPT=9102 SEQ=2755498262 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0119EF830000000001030307)
Feb 23 04:18:14 localhost python3.9[153493]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:18:14 localhost python3.9[153541]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:18:15 localhost python3.9[153633]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:18:15 localhost python3.9[153681]: ansible-ansible.legacy.file Invoked with group=root mode=0644
owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:18:16 localhost python3.9[153773]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:18:16 localhost systemd[1]: Reloading. Feb 23 04:18:17 localhost systemd-rc-local-generator[153796]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:18:17 localhost systemd-sysv-generator[153801]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:18:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:18:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58391 DF PROTO=TCP SPT=46296 DPT=9105 SEQ=3132276143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0119FD830000000001030307) Feb 23 04:18:17 localhost python3.9[153903]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:18:18 localhost python3.9[153951]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:18:19 localhost python3.9[154043]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:18:19 localhost python3.9[154091]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:18:20 localhost 
python3.9[154183]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:18:20 localhost systemd[1]: Reloading. Feb 23 04:18:20 localhost systemd-sysv-generator[154211]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:18:20 localhost systemd-rc-local-generator[154207]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:18:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:18:20 localhost systemd[1]: Starting Create netns directory... Feb 23 04:18:20 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 23 04:18:20 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 23 04:18:20 localhost systemd[1]: Finished Create netns directory. 
Feb 23 04:18:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5018 DF PROTO=TCP SPT=57112 DPT=9100 SEQ=1501864468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011A0B1F0000000001030307) Feb 23 04:18:21 localhost python3.9[154317]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:18:23 localhost python3.9[154409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:18:23 localhost python3.9[154482]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771838302.7781315-1342-219134972724548/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 23 04:18:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5020 DF PROTO=TCP SPT=57112 DPT=9100 SEQ=1501864468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011A17430000000001030307) Feb 23 
04:18:25 localhost python3.9[154574]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:18:26 localhost python3.9[154666]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:18:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4289 DF PROTO=TCP SPT=38856 DPT=9101 SEQ=2692567598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011A21830000000001030307) Feb 23 04:18:26 localhost python3.9[154758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:18:27 localhost python3.9[154833]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838306.4037182-1441-124357971350975/.source.json _original_basename=.70yci49u follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:18:27 localhost python3.9[154923]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:18:30 localhost python3.9[155176]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False Feb 23 04:18:31 localhost python3.9[155268]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 23 04:18:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52616 DF PROTO=TCP SPT=47846 DPT=9105 SEQ=3356764315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011A35A30000000001030307) Feb 23 04:18:32 localhost python3[155360]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False Feb 23 04:18:32 localhost python3[155360]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "bfb93be9d83c3121be0312d4d8c02944841d931c726f68b412221913286262d4",#012 "Digest": "sha256:5a01d6902fcff84f31d264784a24433f1266e51e84e70ca3796953855fdec417",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012 ],#012 "RepoDigests": [#012 
"quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:5a01d6902fcff84f31d264784a24433f1266e51e84e70ca3796953855fdec417"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2026-02-23T06:34:22.194153324Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.43.0",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260216",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "8419493e1fd846703d277695e03fc5eb",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 347092937,#012 "VirtualSize": 347092937,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/f3afd1cf5e6198a170887a65c5f10af446afae7f60b1c2348209fc3be458dddf/diff:/var/lib/containers/storage/overlay/882df85a0cf43e46bc799aafd5ff81035654b304c2fef5dbd26c9dd0c2e9fcc3/diff:/var/lib/containers/storage/overlay/d9f14c75a7289cf010d2e5175c554193dba109f864fe39fc418f3bc5b90efe9d/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/9ebf51f80a46e835820a271b66c56bf3153d0ad4226e954d9a4e5952244e92d3/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/9ebf51f80a46e835820a271b66c56bf3153d0ad4226e954d9a4e5952244e92d3/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:d9f14c75a7289cf010d2e5175c554193dba109f864fe39fc418f3bc5b90efe9d",#012 
"sha256:6eb5d45c6942983139aec78264b4b68bafe46465bb40e2bb4c09e78dad8ba6c0",#012 "sha256:4488e457e941888ff222080c5c98fc98b827e2e0699d850c0a8b0f12f152d8f5",#012 "sha256:bde1ac8945157434308ea323cfa7054085e8af54598c165ad28f8de2052547eb"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.43.0",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260216",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "8419493e1fd846703d277695e03fc5eb",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2026-02-17T01:25:07.246646992Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:d064f128d9bf147a386d5c0e8c2e8a6f698c81fb4e2404e09afe5ef1e1d3b529 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-17T01:25:07.246739119Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20260216\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-17T01:25:12.132997501Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2026-02-23T06:08:39.081651802Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081666472Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081677733Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": 
true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081688343Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081701553Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081710413Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.413481757Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:09:13.490649497Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:09:16.454967918Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util- Feb 23 04:18:32 localhost podman[155411]: 2026-02-23 09:18:32.798518003 +0000 UTC m=+0.096357971 container remove 393905bf31c6fc380991ff7b7d92a69f88fac97d7b4c798b2036de62bbaafee2 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, 
architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1766032510, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 23 04:18:32 localhost python3[155360]: 
ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller Feb 23 04:18:32 localhost podman[155423]: Feb 23 04:18:32 localhost podman[155423]: 2026-02-23 09:18:32.907009472 +0000 UTC m=+0.089385175 container create bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 23 04:18:32 localhost podman[155423]: 2026-02-23 09:18:32.862643719 +0000 UTC m=+0.045019492 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Feb 23 04:18:32 localhost python3[155360]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile 
/run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Feb 23 04:18:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52617 DF PROTO=TCP SPT=47846 DPT=9105 SEQ=3356764315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A011A39C30000000001030307) Feb 23 04:18:34 localhost python3.9[155550]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:18:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52618 DF PROTO=TCP SPT=47846 DPT=9105 SEQ=3356764315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011A41C30000000001030307) Feb 23 04:18:35 localhost python3.9[155644]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:18:36 localhost python3.9[155690]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:18:36 localhost python3.9[155781]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771838316.106045-1675-192230336283562/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:18:37 localhost python3.9[155827]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None 
force=None masked=None Feb 23 04:18:37 localhost systemd[1]: Reloading. Feb 23 04:18:37 localhost systemd-rc-local-generator[155853]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:18:37 localhost systemd-sysv-generator[155858]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:18:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:18:38 localhost python3.9[155910]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:18:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52619 DF PROTO=TCP SPT=47846 DPT=9105 SEQ=3356764315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011A51840000000001030307) Feb 23 04:18:39 localhost systemd[1]: Reloading. Feb 23 04:18:39 localhost systemd-sysv-generator[155940]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:18:39 localhost systemd-rc-local-generator[155937]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:18:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:18:39 localhost systemd[1]: Starting ovn_controller container... 
Feb 23 04:18:39 localhost systemd[1]: Started libcrun container. Feb 23 04:18:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57c53823f60575b63b8ae468c1472506e7ac9a3c1da739c9d001cb4af840d8e5/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Feb 23 04:18:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:18:39 localhost podman[155952]: 2026-02-23 09:18:39.767625659 +0000 UTC m=+0.152770110 container init bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 23 04:18:39 localhost 
systemd[1]: tmp-crun.Qvpya3.mount: Deactivated successfully. Feb 23 04:18:39 localhost ovn_controller[155966]: + sudo -E kolla_set_configs Feb 23 04:18:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:18:39 localhost podman[155952]: 2026-02-23 09:18:39.812470845 +0000 UTC m=+0.197615296 container start bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Feb 23 04:18:39 localhost edpm-start-podman-container[155952]: ovn_controller Feb 23 04:18:39 localhost systemd[1]: Created slice User Slice of UID 0. 
Feb 23 04:18:39 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Feb 23 04:18:39 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Feb 23 04:18:39 localhost systemd[1]: Starting User Manager for UID 0... Feb 23 04:18:39 localhost edpm-start-podman-container[155951]: Creating additional drop-in dependency for "ovn_controller" (bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02) Feb 23 04:18:39 localhost systemd[1]: Reloading. Feb 23 04:18:39 localhost podman[155974]: 2026-02-23 09:18:39.97726401 +0000 UTC m=+0.156995778 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, 
io.buildah.version=1.43.0, org.label-schema.schema-version=1.0) Feb 23 04:18:40 localhost podman[155974]: 2026-02-23 09:18:40.019176594 +0000 UTC m=+0.198908372 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:18:40 localhost systemd[155992]: Queued start job for default target Main User Target. Feb 23 04:18:40 localhost systemd[155992]: Created slice User Application Slice. Feb 23 04:18:40 localhost systemd[155992]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). 
Feb 23 04:18:40 localhost systemd[155992]: Started Daily Cleanup of User's Temporary Directories. Feb 23 04:18:40 localhost systemd[155992]: Reached target Paths. Feb 23 04:18:40 localhost systemd[155992]: Reached target Timers. Feb 23 04:18:40 localhost systemd[155992]: Starting D-Bus User Message Bus Socket... Feb 23 04:18:40 localhost systemd[155992]: Starting Create User's Volatile Files and Directories... Feb 23 04:18:40 localhost systemd[155992]: Listening on D-Bus User Message Bus Socket. Feb 23 04:18:40 localhost systemd[155992]: Reached target Sockets. Feb 23 04:18:40 localhost podman[155974]: unhealthy Feb 23 04:18:40 localhost systemd[155992]: Finished Create User's Volatile Files and Directories. Feb 23 04:18:40 localhost systemd[155992]: Reached target Basic System. Feb 23 04:18:40 localhost systemd[155992]: Reached target Main User Target. Feb 23 04:18:40 localhost systemd[155992]: Startup finished in 142ms. Feb 23 04:18:40 localhost systemd-rc-local-generator[156053]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:18:40 localhost systemd-sysv-generator[156056]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:18:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:18:40 localhost systemd[1]: Started User Manager for UID 0. Feb 23 04:18:40 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:18:40 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Failed with result 'exit-code'. 
Feb 23 04:18:40 localhost systemd-journald[48305]: Field hash table of /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation. Feb 23 04:18:40 localhost systemd-journald[48305]: /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 23 04:18:40 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 23 04:18:40 localhost systemd[1]: Started ovn_controller container. Feb 23 04:18:40 localhost systemd[1]: Started Session c11 of User root. Feb 23 04:18:40 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 23 04:18:40 localhost ovn_controller[155966]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 23 04:18:40 localhost ovn_controller[155966]: INFO:__main__:Validating config file Feb 23 04:18:40 localhost ovn_controller[155966]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 23 04:18:40 localhost ovn_controller[155966]: INFO:__main__:Writing out command to execute Feb 23 04:18:40 localhost ovn_controller[155966]: ++ cat /run_command Feb 23 04:18:40 localhost systemd[1]: session-c11.scope: Deactivated successfully. Feb 23 04:18:40 localhost ovn_controller[155966]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Feb 23 04:18:40 localhost ovn_controller[155966]: + ARGS= Feb 23 04:18:40 localhost ovn_controller[155966]: + sudo kolla_copy_cacerts Feb 23 04:18:40 localhost systemd[1]: Started Session c12 of User root. Feb 23 04:18:40 localhost systemd[1]: session-c12.scope: Deactivated successfully. Feb 23 04:18:40 localhost ovn_controller[155966]: + [[ ! -n '' ]] Feb 23 04:18:40 localhost ovn_controller[155966]: + . 
kolla_extend_start Feb 23 04:18:40 localhost ovn_controller[155966]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Feb 23 04:18:40 localhost ovn_controller[155966]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\''' Feb 23 04:18:40 localhost ovn_controller[155966]: + umask 0022 Feb 23 04:18:40 localhost ovn_controller[155966]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8] Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00004|main|INFO|OVS IDL reconnected, force recompute. Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting... Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00006|main|INFO|OVNSB IDL reconnected, force recompute. Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 
Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00010|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00011|features|INFO|OVS Feature: ct_zero_snat, state: supported Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00012|features|INFO|OVS Feature: ct_flush, state: supported Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00014|main|INFO|OVS feature set changed, force recompute. Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms) Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute. Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00021|main|INFO|OVS feature set changed, force recompute. 
Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4 Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Feb 23 04:18:40 localhost ovn_controller[155966]: 2026-02-23T09:18:40Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Feb 23 04:18:41 localhost python3.9[156165]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 23 04:18:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56736 DF PROTO=TCP SPT=59982 DPT=9101 SEQ=3029610112 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011A5A100000000001030307) Feb 23 04:18:42 localhost python3.9[156257]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:18:42 localhost python3.9[156330]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 
src=/home/zuul/.ansible/tmp/ansible-tmp-1771838321.758705-1810-62394632538796/.source.yaml _original_basename=.212eez7e follow=False checksum=181037f60084fed8e752a93376456c5747d0788c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:18:43 localhost python3.9[156422]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:18:43 localhost ovs-vsctl[156423]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload Feb 23 04:18:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19977 DF PROTO=TCP SPT=37518 DPT=9102 SEQ=3609814710 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011A65840000000001030307) Feb 23 04:18:44 localhost python3.9[156515]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:18:44 localhost ovs-vsctl[156517]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids Feb 23 04:18:45 localhost python3.9[156610]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . 
external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:18:45 localhost ovs-vsctl[156611]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options Feb 23 04:18:47 localhost systemd[1]: session-49.scope: Deactivated successfully. Feb 23 04:18:47 localhost systemd[1]: session-49.scope: Consumed 40.974s CPU time. Feb 23 04:18:47 localhost systemd-logind[759]: Session 49 logged out. Waiting for processes to exit. Feb 23 04:18:47 localhost systemd-logind[759]: Removed session 49. Feb 23 04:18:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52620 DF PROTO=TCP SPT=47846 DPT=9105 SEQ=3356764315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011A71830000000001030307) Feb 23 04:18:50 localhost systemd[1]: Stopping User Manager for UID 0... Feb 23 04:18:50 localhost systemd[155992]: Activating special unit Exit the Session... Feb 23 04:18:50 localhost systemd[155992]: Stopped target Main User Target. Feb 23 04:18:50 localhost systemd[155992]: Stopped target Basic System. Feb 23 04:18:50 localhost systemd[155992]: Stopped target Paths. Feb 23 04:18:50 localhost systemd[155992]: Stopped target Sockets. Feb 23 04:18:50 localhost systemd[155992]: Stopped target Timers. Feb 23 04:18:50 localhost systemd[155992]: Stopped Daily Cleanup of User's Temporary Directories. Feb 23 04:18:50 localhost systemd[155992]: Closed D-Bus User Message Bus Socket. Feb 23 04:18:50 localhost systemd[155992]: Stopped Create User's Volatile Files and Directories. Feb 23 04:18:50 localhost systemd[155992]: Removed slice User Application Slice. Feb 23 04:18:50 localhost systemd[155992]: Reached target Shutdown. 
Feb 23 04:18:50 localhost systemd[155992]: Finished Exit the Session. Feb 23 04:18:50 localhost systemd[155992]: Reached target Exit the Session. Feb 23 04:18:50 localhost systemd[1]: user@0.service: Deactivated successfully. Feb 23 04:18:50 localhost systemd[1]: Stopped User Manager for UID 0. Feb 23 04:18:50 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Feb 23 04:18:50 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Feb 23 04:18:50 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Feb 23 04:18:50 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Feb 23 04:18:50 localhost systemd[1]: Removed slice User Slice of UID 0. Feb 23 04:18:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5758 DF PROTO=TCP SPT=40432 DPT=9100 SEQ=2648473299 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011A804F0000000001030307) Feb 23 04:18:52 localhost sshd[156629]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:18:52 localhost systemd-logind[759]: New session 51 of user zuul. Feb 23 04:18:52 localhost systemd[1]: Started Session 51 of User zuul. 
Feb 23 04:18:53 localhost python3.9[156722]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:18:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5760 DF PROTO=TCP SPT=40432 DPT=9100 SEQ=2648473299 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011A8C440000000001030307) Feb 23 04:18:54 localhost python3.9[156818]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 23 04:18:55 localhost python3.9[156910]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:18:56 localhost python3.9[157002]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:18:56 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56740 DF PROTO=TCP SPT=59982 DPT=9101 SEQ=3029610112 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011A95830000000001030307) Feb 23 04:18:56 localhost python3.9[157094]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:18:57 localhost python3.9[157186]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:18:58 localhost sshd[157201]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:18:58 localhost python3.9[157279]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:18:59 localhost python3.9[157371]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Feb 23 04:19:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 04:19:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 5036 writes, 22K keys, 5036 commit groups, 1.0 writes per 
commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5036 writes, 634 syncs, 7.94 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.010 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.010 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.010 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, 
interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55907f80e2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 
0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55907f80e2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdo Feb 23 04:19:00 localhost python3.9[157461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:19:01 localhost python3.9[157534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771838340.1816268-214-244570552467995/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:19:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53817 DF PROTO=TCP SPT=35118 DPT=9105 SEQ=3248134050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011AAAD30000000001030307) Feb 23 04:19:02 localhost python3.9[157624]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:19:02 localhost python3.9[157697]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771838341.6499789-259-231079229561427/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:19:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53818 DF PROTO=TCP SPT=35118 DPT=9105 SEQ=3248134050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011AAEC30000000001030307) Feb 23 04:19:03 localhost python3.9[157789]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 23 04:19:04 localhost python3.9[157843]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False 
validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 23 04:19:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 04:19:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 5650 writes, 24K keys, 5650 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5650 writes, 811 syncs, 6.97 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562d239862d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562d239862d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) 
CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Feb 23 04:19:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53819 DF PROTO=TCP SPT=35118 DPT=9105 SEQ=3248134050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A011AB6C30000000001030307) Feb 23 04:19:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53820 DF PROTO=TCP SPT=35118 DPT=9105 SEQ=3248134050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011AC6830000000001030307) Feb 23 04:19:09 localhost python3.9[157937]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 23 04:19:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:19:10 localhost podman[158030]: 2026-02-23 09:19:10.698654193 +0000 UTC m=+0.096232896 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:19:10 localhost ovn_controller[155966]: 2026-02-23T09:19:10Z|00023|memory|INFO|17912 kB peak resident set size after 30.3 seconds Feb 23 04:19:10 localhost ovn_controller[155966]: 2026-02-23T09:19:10Z|00024|memory|INFO|idl-cells-OVN_Southbound:4072 idl-cells-Open_vSwitch:813 ofctrl_desired_flow_usage-KB:9 ofctrl_installed_flow_usage-KB:7 ofctrl_sb_flow_ref_usage-KB:3 Feb 23 04:19:10 localhost python3.9[158057]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:19:10 localhost podman[158030]: 2026-02-23 09:19:10.80887902 +0000 UTC m=+0.206457733 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:19:10 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:19:11 localhost python3.9[158170]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771838350.4015625-370-112904478370402/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:19:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11980 DF PROTO=TCP SPT=44910 DPT=9101 SEQ=1233188342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011ACF400000000001030307) Feb 23 04:19:11 localhost python3.9[158277]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
get_selinux_context=False Feb 23 04:19:12 localhost python3.9[158348]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771838351.3688571-370-23673615852306/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:19:13 localhost python3.9[158453]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:19:13 localhost python3.9[158524]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771838353.0311844-502-153359381775415/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:19:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11982 DF PROTO=TCP SPT=44910 DPT=9101 SEQ=1233188342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011ADB430000000001030307) Feb 23 04:19:14 localhost python3.9[158614]: ansible-ansible.legacy.stat Invoked with 
path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:19:15 localhost python3.9[158685]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771838354.122419-502-137124657393756/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:19:15 localhost python3.9[158775]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:19:16 localhost python3.9[158869]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:19:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25065 DF PROTO=TCP SPT=56768 DPT=9882 SEQ=2103005154 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011AE5830000000001030307) Feb 23 04:19:17 localhost python3.9[158961]: ansible-ansible.legacy.stat Invoked with 
path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:19:17 localhost python3.9[159009]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:19:18 localhost python3.9[159101]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:19:19 localhost python3.9[159149]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:19:19 localhost python3.9[159241]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 
04:19:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12691 DF PROTO=TCP SPT=58624 DPT=9100 SEQ=2208624888 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011AF57E0000000001030307) Feb 23 04:19:21 localhost python3.9[159333]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:19:22 localhost python3.9[159381]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:19:22 localhost python3.9[159473]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:19:23 localhost python3.9[159521]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None 
attributes=None Feb 23 04:19:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12693 DF PROTO=TCP SPT=58624 DPT=9100 SEQ=2208624888 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011B01830000000001030307) Feb 23 04:19:24 localhost python3.9[159613]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:19:24 localhost systemd[1]: Reloading. Feb 23 04:19:24 localhost systemd-sysv-generator[159639]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:19:24 localhost systemd-rc-local-generator[159635]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:19:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:19:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11984 DF PROTO=TCP SPT=44910 DPT=9101 SEQ=1233188342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011B0B830000000001030307) Feb 23 04:19:26 localhost python3.9[159742]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:19:27 localhost python3.9[159790]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:19:27 localhost python3.9[159882]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:19:28 localhost python3.9[159930]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:19:29 localhost 
python3.9[160022]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:19:29 localhost systemd[1]: Reloading. Feb 23 04:19:29 localhost systemd-sysv-generator[160050]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:19:29 localhost systemd-rc-local-generator[160047]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:19:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:19:29 localhost systemd[1]: Starting Create netns directory... Feb 23 04:19:29 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 23 04:19:29 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 23 04:19:29 localhost systemd[1]: Finished Create netns directory. 
Feb 23 04:19:30 localhost python3.9[160158]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:19:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48909 DF PROTO=TCP SPT=44290 DPT=9105 SEQ=2779831199 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011B20040000000001030307) Feb 23 04:19:32 localhost python3.9[160250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:19:32 localhost python3.9[160323]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771838371.5498278-955-275410203734339/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 23 04:19:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48910 DF PROTO=TCP SPT=44290 DPT=9105 SEQ=2779831199 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011B24040000000001030307) 
Feb 23 04:19:34 localhost python3.9[160415]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:19:34 localhost python3.9[160507]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:19:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48911 DF PROTO=TCP SPT=44290 DPT=9105 SEQ=2779831199 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011B2C030000000001030307) Feb 23 04:19:35 localhost python3.9[160599]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:19:36 localhost python3.9[160674]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838375.0194652-1054-214928845408202/.source.json _original_basename=.jwvx2ynq follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None 
owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:19:36 localhost python3.9[160764]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:19:38 localhost python3.9[161017]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False Feb 23 04:19:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48912 DF PROTO=TCP SPT=44290 DPT=9105 SEQ=2779831199 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011B3BC30000000001030307) Feb 23 04:19:39 localhost python3.9[161109]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 23 04:19:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:19:40 localhost python3[161201]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False Feb 23 04:19:41 localhost podman[161202]: 2026-02-23 09:19:41.011739745 +0000 UTC m=+0.083098283 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:19:41 localhost podman[161202]: 2026-02-23 09:19:41.043783169 +0000 UTC m=+0.115141687 
container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0) Feb 23 04:19:41 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:19:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22705 DF PROTO=TCP SPT=32948 DPT=9101 SEQ=341511012 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011B44700000000001030307) Feb 23 04:19:41 localhost python3[161201]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "76bd0f417d97eed33ec13822a6468fff2a43c066ff0ef717fb226d4a1fc97b17",#012 "Digest": "sha256:0a8901bdd982c4ba62e40905edf375097daf8fd968b1839b56832f37354d5b07",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:0a8901bdd982c4ba62e40905edf375097daf8fd968b1839b56832f37354d5b07"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2026-02-23T06:26:05.098634295Z",#012 "Config": {#012 "User": "neutron",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.43.0",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260216",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "8419493e1fd846703d277695e03fc5eb",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 784228609,#012 "VirtualSize": 784228609,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/4ca138c1babff33aa47b0f593cc672ab03770d4205069570de2d0e7691f07ed3/diff:/var/lib/containers/storage/overlay/7a6a75b4bc44910de031f240cbd770d29244a190eb01a1840ff2078eb2d894ad/diff:/var/lib/containers/storage/overlay/0455f1f13172510bfb03afa514ad1dc5f28a2039a4c0ae85e44e0cde63814ca4/diff:/var/lib/containers/storage/overlay/882df85a0cf43e46bc799aafd5ff81035654b304c2fef5dbd26c9dd0c2e9fcc3/diff:/var/lib/containers/storage/overlay/d9f14c75a7289cf010d2e5175c554193dba109f864fe39fc418f3bc5b90efe9d/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/5bf4078070f41854870417452ad68470796913522011b663ed0d8d22a6f27928/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/5bf4078070f41854870417452ad68470796913522011b663ed0d8d22a6f27928/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:d9f14c75a7289cf010d2e5175c554193dba109f864fe39fc418f3bc5b90efe9d",#012 "sha256:6eb5d45c6942983139aec78264b4b68bafe46465bb40e2bb4c09e78dad8ba6c0",#012 "sha256:9a59f9675e4fdfdb0eaa24dcce26bed374feef6430ea888b6f5ef1274a95bd90",#012 "sha256:28e68e9ecec07805a02cd85d7efe631108e3186cd82263ab9cb109564a3435f5",#012 "sha256:2c8b50875d9f0980f38972811e1dbbc8e64c448e40a8be21ff8837be00cf89ab",#012 "sha256:2782735a76d8db3e6692125b10fd55ced9f8590ef8ae6abf986ddc10f33757f4"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.43.0",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260216",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "8419493e1fd846703d277695e03fc5eb",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "neutron",#012 "History": [#012 {#012 "created": "2026-02-17T01:25:07.246646992Z",#012 "created_by": "/bin/sh -c #(nop) 
ADD file:d064f128d9bf147a386d5c0e8c2e8a6f698c81fb4e2404e09afe5ef1e1d3b529 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-17T01:25:07.246739119Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20260216\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-17T01:25:12.132997501Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2026-02-23T06:08:39.081651802Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081666472Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081677733Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081688343Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081701553Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081710413Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.413481757Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:09:13.490649497Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main 
clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.con Feb 23 04:19:41 localhost podman[161279]: 2026-02-23 09:19:41.419668388 +0000 UTC m=+0.088829586 container remove 71d845f105ded9b429ceed89059848ddb210ac4cef479e84cc80a2336299c879 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c586877f5206c4d4c0260095c70d518d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.5, version=17.1.13, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, batch=17.1_20260112.1) Feb 23 04:19:41 localhost python3[161201]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent Feb 23 04:19:41 localhost podman[161291]: Feb 23 04:19:41 localhost podman[161291]: 2026-02-23 09:19:41.524687941 +0000 UTC m=+0.086423322 container create 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0) Feb 23 04:19:41 localhost podman[161291]: 2026-02-23 09:19:41.482938375 +0000 UTC m=+0.044674396 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 23 04:19:41 localhost python3[161201]: 
ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume 
/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 23 04:19:42 localhost python3.9[161421]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:19:43 localhost python3.9[161515]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:19:43 localhost python3.9[161561]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:19:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41868 DF PROTO=TCP SPT=55420 DPT=9102 SEQ=3478947674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011B4F830000000001030307) Feb 23 04:19:44 localhost python3.9[161652]: ansible-copy Invoked with 
src=/home/zuul/.ansible/tmp/ansible-tmp-1771838383.9902668-1288-168717489092084/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:19:45 localhost python3.9[161698]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 23 04:19:45 localhost systemd[1]: Reloading. Feb 23 04:19:45 localhost systemd-rc-local-generator[161725]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:19:45 localhost systemd-sysv-generator[161729]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:19:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:19:46 localhost python3.9[161780]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:19:46 localhost systemd[1]: Reloading. Feb 23 04:19:46 localhost sshd[161784]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:19:46 localhost systemd-rc-local-generator[161806]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:19:46 localhost systemd-sysv-generator[161809]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:19:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:19:46 localhost systemd[1]: Starting ovn_metadata_agent container... Feb 23 04:19:46 localhost systemd[1]: Started libcrun container. Feb 23 04:19:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6a4981e591c9e2f951cc8469923cb0b5f981fc5db6c81d83471601bab69834f/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Feb 23 04:19:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6a4981e591c9e2f951cc8469923cb0b5f981fc5db6c81d83471601bab69834f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:19:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. 
Feb 23 04:19:46 localhost podman[161824]: 2026-02-23 09:19:46.644547812 +0000 UTC m=+0.152367481 container init 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216) Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: + sudo -E 
kolla_set_configs Feb 23 04:19:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:19:46 localhost podman[161824]: 2026-02-23 09:19:46.675832046 +0000 UTC m=+0.183651685 container start 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 04:19:46 localhost edpm-start-podman-container[161824]: ovn_metadata_agent Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Validating config file Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Copying service configuration files Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Writing out command to execute Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Setting permission for /var/lib/neutron Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Setting permission for /var/lib/neutron/external Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: 
INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29 Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: ++ cat /run_command Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: + CMD=neutron-ovn-metadata-agent Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: + ARGS= Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: + sudo kolla_copy_cacerts Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: Running command: 'neutron-ovn-metadata-agent' Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: + [[ ! -n '' ]] Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: + . kolla_extend_start Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\''' Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: + umask 0022 Feb 23 04:19:46 localhost ovn_metadata_agent[161837]: + exec neutron-ovn-metadata-agent Feb 23 04:19:46 localhost edpm-start-podman-container[161823]: Creating additional drop-in dependency for "ovn_metadata_agent" (2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db) Feb 23 04:19:46 localhost systemd[1]: Reloading. 
Feb 23 04:19:46 localhost podman[161844]: 2026-02-23 09:19:46.828608117 +0000 UTC m=+0.145561618 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS) Feb 23 04:19:46 localhost 
podman[161844]: 2026-02-23 09:19:46.836818148 +0000 UTC m=+0.153771699 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0) Feb 23 04:19:46 localhost systemd-sysv-generator[161916]: SysV service 
'/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:19:46 localhost systemd-rc-local-generator[161913]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:19:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:19:47 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:19:47 localhost systemd[1]: Started ovn_metadata_agent container. Feb 23 04:19:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48913 DF PROTO=TCP SPT=44290 DPT=9105 SEQ=2779831199 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011B5B840000000001030307) Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.243 161842 INFO neutron.common.config [-] Logging enabled!#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.243 161842 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.243 161842 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.244 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 
2026-02-23 09:19:48.244 161842 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.244 161842 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.244 161842 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.244 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.244 161842 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.244 161842 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.245 161842 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.245 161842 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.245 161842 DEBUG 
neutron.agent.ovn.metadata_agent [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.245 161842 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.245 161842 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.245 161842 DEBUG neutron.agent.ovn.metadata_agent [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.245 161842 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.245 161842 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.245 161842 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.245 161842 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.246 161842 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 
04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.246 161842 DEBUG neutron.agent.ovn.metadata_agent [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.246 161842 DEBUG neutron.agent.ovn.metadata_agent [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.246 161842 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.246 161842 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.246 161842 DEBUG neutron.agent.ovn.metadata_agent [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.246 161842 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.246 161842 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 
'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.246 161842 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.246 161842 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.246 161842 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.247 161842 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.247 161842 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.247 161842 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.247 161842 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.247 161842 DEBUG 
neutron.agent.ovn.metadata_agent [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.247 161842 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.247 161842 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.247 161842 DEBUG neutron.agent.ovn.metadata_agent [-] host = np0005626465.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.247 161842 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.248 161842 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.248 161842 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.248 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.248 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.248 161842 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.248 161842 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.248 161842 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.248 161842 DEBUG neutron.agent.ovn.metadata_agent [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.248 161842 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.248 161842 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.248 161842 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.249 161842 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.249 161842 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.249 161842 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.249 161842 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.249 161842 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.249 161842 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.249 161842 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.249 161842 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m 
Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.249 161842 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.249 161842 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.249 161842 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.250 161842 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.250 161842 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.250 161842 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.250 161842 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.250 161842 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.250 161842 
DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.250 161842 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.250 161842 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.250 161842 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.251 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.251 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.251 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.251 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.251 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port = 8775 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.251 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.251 161842 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.251 161842 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.251 161842 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.251 161842 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.251 161842 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.252 161842 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.252 161842 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost 
ovn_metadata_agent[161837]: 2026-02-23 09:19:48.252 161842 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.252 161842 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.252 161842 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.252 161842 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.252 161842 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.252 161842 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.252 161842 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.252 161842 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.253 161842 DEBUG neutron.agent.ovn.metadata_agent [-] state_path = /var/lib/neutron 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.253 161842 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.253 161842 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.253 161842 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.253 161842 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.253 161842 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.253 161842 DEBUG neutron.agent.ovn.metadata_agent [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.253 161842 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.253 161842 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.253 161842 DEBUG 
neutron.agent.ovn.metadata_agent [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.253 161842 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.253 161842 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.254 161842 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.254 161842 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.254 161842 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.254 161842 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.254 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.254 161842 DEBUG 
neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.254 161842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.254 161842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.254 161842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.254 161842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.255 161842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.255 161842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.255 161842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.255 161842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = 
mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.255 161842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.255 161842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.255 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.255 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.255 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.256 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.256 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.256 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.256 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.256 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.256 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.256 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.256 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.256 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.256 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.257 161842 DEBUG neutron.agent.ovn.metadata_agent [-] 
oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.257 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.257 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.257 161842 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.257 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.257 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.257 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.257 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.257 161842 DEBUG 
neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.257 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.258 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.258 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.258 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.258 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.258 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.258 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.258 161842 DEBUG neutron.agent.ovn.metadata_agent 
[-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.258 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.258 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.258 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.259 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.259 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.259 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.259 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.259 161842 DEBUG neutron.agent.ovn.metadata_agent [-] 
privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.259 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.259 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.259 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.259 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.259 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.259 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.260 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.260 161842 DEBUG neutron.agent.ovn.metadata_agent [-] 
privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.260 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.260 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.260 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.260 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.260 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.260 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.260 161842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.261 161842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.261 161842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.261 161842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.261 161842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.261 161842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.261 161842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.261 161842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.261 161842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.261 161842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.261 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.261 161842 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.262 161842 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.262 161842 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.262 161842 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.262 161842 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.262 161842 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.262 161842 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet = 100 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.262 161842 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.262 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.262 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.262 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.263 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.263 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.263 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.263 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 
09:19:48.263 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.263 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.263 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.263 161842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.263 161842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.263 161842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.264 161842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.264 161842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.264 161842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.264 161842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.264 161842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.264 161842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.264 161842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.264 161842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.264 161842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.264 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.265 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost 
ovn_metadata_agent[161837]: 2026-02-23 09:19:48.265 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.265 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.265 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.265 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.265 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.265 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.265 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.265 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.265 161842 DEBUG neutron.agent.ovn.metadata_agent [-] 
ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.266 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.266 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.266 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.266 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.266 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.266 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.266 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.266 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 
04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.266 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.266 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.266 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.267 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.267 161842 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.267 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.267 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.267 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 
09:19:48.267 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.267 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.267 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.267 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.267 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.268 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.268 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.268 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.268 161842 DEBUG neutron.agent.ovn.metadata_agent [-] 
ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.268 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.268 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.268 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.268 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.268 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.269 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.269 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.269 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.269 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.269 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.269 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.269 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.269 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.269 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.269 161842 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.270 161842 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 
04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.270 161842 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.270 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.270 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.270 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.270 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.270 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.270 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.270 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 
04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.270 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.271 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.271 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.271 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.271 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.271 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.271 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.271 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 
1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.271 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.271 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.271 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.272 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.272 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.272 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.272 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.272 161842 DEBUG 
neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.272 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.272 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.272 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.272 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.272 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.273 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.273 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 
09:19:48.273 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.273 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.273 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.273 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.273 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.273 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.273 161842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.273 161842 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Feb 23 
04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.282 161842 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.282 161842 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.282 161842 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.282 161842 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.282 161842 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.296 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name a05de4d1-e729-4c33-bedf-496279b1b686 (UUID: a05de4d1-e729-4c33-bedf-496279b1b686) and ovn bridge br-int. 
_load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.312 161842 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.313 161842 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.313 161842 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.313 161842 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.316 161842 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.320 161842 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.327 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'a05de4d1-e729-4c33-bedf-496279b1b686'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[], external_ids={'neutron:ovn-metadata-id': '5f930793-fee0-5535-8365-1c5d0bc9029c', 'neutron:ovn-metadata-sb-cfg': '2'}, name=a05de4d1-e729-4c33-bedf-496279b1b686, nb_cfg_timestamp=1771838328433, nb_cfg=5) old= 
matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.328 161842 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.329 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.329 161842 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.329 161842 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.329 161842 INFO oslo_service.service [-] Starting 1 workers#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.332 161842 DEBUG oslo_service.service [-] Started child 161941 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.334 161842 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp_l6mn7l1/privsep.sock']#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.335 161941 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks 
['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-443249'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.360 161941 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.361 161941 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.361 161941 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.364 161941 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.365 161941 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.378 161941 INFO eventlet.wsgi.server [-] (161941) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.931 161842 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.932 161842 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_l6mn7l1/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.826 161946 INFO 
oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.831 161946 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.835 161946 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.835 161946 INFO oslo.privsep.daemon [-] privsep daemon running as pid 161946#033[00m Feb 23 04:19:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:48.935 161946 DEBUG oslo.privsep.daemon [-] privsep: reply[ab9b0b60-7969-4284-aaae-38d238fcbc01]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.347 161946 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.347 161946 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.347 161946 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.795 161946 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8377df-d678-40ac-acd3-cf4b2d078e34]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:19:49 
localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.798 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, column=external_ids, values=({'neutron:ovn-metadata-id': '5f930793-fee0-5535-8365-1c5d0bc9029c'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.799 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.800 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.813 161842 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.813 161842 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.813 161842 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.814 161842 DEBUG oslo_service.service [-] command line args: [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.814 161842 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.814 161842 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.814 161842 DEBUG oslo_service.service [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.814 161842 DEBUG oslo_service.service [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.815 161842 DEBUG oslo_service.service [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.815 161842 DEBUG oslo_service.service [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.815 161842 DEBUG oslo_service.service [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.815 161842 DEBUG oslo_service.service [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.816 161842 DEBUG 
oslo_service.service [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.816 161842 DEBUG oslo_service.service [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.816 161842 DEBUG oslo_service.service [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.816 161842 DEBUG oslo_service.service [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.817 161842 DEBUG oslo_service.service [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.817 161842 DEBUG oslo_service.service [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.817 161842 DEBUG oslo_service.service [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.817 161842 DEBUG oslo_service.service [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.817 161842 DEBUG oslo_service.service [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.818 161842 DEBUG 
oslo_service.service [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.818 161842 DEBUG oslo_service.service [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.818 161842 DEBUG oslo_service.service [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.818 161842 DEBUG oslo_service.service [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.819 161842 DEBUG oslo_service.service [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.819 161842 DEBUG oslo_service.service [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.819 161842 DEBUG oslo_service.service [-] dhcp_lease_duration = 86400 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.819 161842 DEBUG oslo_service.service [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.820 161842 DEBUG oslo_service.service [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.820 161842 DEBUG oslo_service.service [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.820 161842 DEBUG oslo_service.service [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.820 161842 DEBUG oslo_service.service [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.820 161842 DEBUG oslo_service.service [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.821 161842 DEBUG oslo_service.service [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.821 161842 DEBUG oslo_service.service [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.821 161842 DEBUG oslo_service.service [-] 
graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.821 161842 DEBUG oslo_service.service [-] host = np0005626465.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.822 161842 DEBUG oslo_service.service [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.822 161842 DEBUG oslo_service.service [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.822 161842 DEBUG oslo_service.service [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.822 161842 DEBUG oslo_service.service [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.823 161842 DEBUG oslo_service.service [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.823 161842 DEBUG oslo_service.service [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.823 161842 DEBUG oslo_service.service [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.823 
161842 DEBUG oslo_service.service [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.823 161842 DEBUG oslo_service.service [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.824 161842 DEBUG oslo_service.service [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.824 161842 DEBUG oslo_service.service [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.824 161842 DEBUG oslo_service.service [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.824 161842 DEBUG oslo_service.service [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.824 161842 DEBUG oslo_service.service [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.825 161842 DEBUG oslo_service.service [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.825 161842 DEBUG oslo_service.service [-] 
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.825 161842 DEBUG oslo_service.service [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.826 161842 DEBUG oslo_service.service [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.826 161842 DEBUG oslo_service.service [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.827 161842 DEBUG oslo_service.service [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.827 161842 DEBUG oslo_service.service [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.827 161842 DEBUG oslo_service.service [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.827 161842 DEBUG oslo_service.service [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.828 161842 DEBUG 
oslo_service.service [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.828 161842 DEBUG oslo_service.service [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.828 161842 DEBUG oslo_service.service [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.828 161842 DEBUG oslo_service.service [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.828 161842 DEBUG oslo_service.service [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.829 161842 DEBUG oslo_service.service [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.829 161842 DEBUG oslo_service.service [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.829 161842 DEBUG oslo_service.service [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.829 161842 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost 
ovn_metadata_agent[161837]: 2026-02-23 09:19:49.830 161842 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.830 161842 DEBUG oslo_service.service [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.830 161842 DEBUG oslo_service.service [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.830 161842 DEBUG oslo_service.service [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.830 161842 DEBUG oslo_service.service [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.831 161842 DEBUG oslo_service.service [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.831 161842 DEBUG oslo_service.service [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.831 161842 DEBUG oslo_service.service [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.831 161842 DEBUG oslo_service.service [-] periodic_fuzzy_delay = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.832 161842 DEBUG oslo_service.service [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.832 161842 DEBUG oslo_service.service [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.832 161842 DEBUG oslo_service.service [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.832 161842 DEBUG oslo_service.service [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.832 161842 DEBUG oslo_service.service [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.833 161842 DEBUG oslo_service.service [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.833 161842 DEBUG oslo_service.service [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.833 161842 DEBUG oslo_service.service [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.833 161842 DEBUG oslo_service.service [-] rpc_state_report_workers = 1 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.833 161842 DEBUG oslo_service.service [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.834 161842 DEBUG oslo_service.service [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.834 161842 DEBUG oslo_service.service [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.834 161842 DEBUG oslo_service.service [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.834 161842 DEBUG oslo_service.service [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.834 161842 DEBUG oslo_service.service [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.835 161842 DEBUG oslo_service.service [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.835 161842 DEBUG oslo_service.service [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.835 161842 DEBUG oslo_service.service [-] use_eventlog = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.835 161842 DEBUG oslo_service.service [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.835 161842 DEBUG oslo_service.service [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.836 161842 DEBUG oslo_service.service [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.836 161842 DEBUG oslo_service.service [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.836 161842 DEBUG oslo_service.service [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.836 161842 DEBUG oslo_service.service [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.836 161842 DEBUG oslo_service.service [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.837 161842 DEBUG oslo_service.service [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.837 161842 DEBUG oslo_service.service [-] wsgi_keep_alive = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.837 161842 DEBUG oslo_service.service [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.837 161842 DEBUG oslo_service.service [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.838 161842 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.838 161842 DEBUG oslo_service.service [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.838 161842 DEBUG oslo_service.service [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.839 161842 DEBUG oslo_service.service [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.839 161842 DEBUG oslo_service.service [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.839 161842 DEBUG oslo_service.service [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.839 161842 DEBUG oslo_service.service [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.840 161842 DEBUG oslo_service.service [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.840 161842 DEBUG oslo_service.service [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.840 161842 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.840 161842 DEBUG oslo_service.service [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.841 161842 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.841 161842 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.841 161842 DEBUG oslo_service.service [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.842 161842 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = 
default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.842 161842 DEBUG oslo_service.service [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.842 161842 DEBUG oslo_service.service [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.842 161842 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.842 161842 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.843 161842 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.843 161842 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.843 161842 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.843 161842 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.843 161842 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.843 161842 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.844 161842 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.844 161842 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.844 161842 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.844 161842 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.844 161842 DEBUG oslo_service.service [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.844 161842 DEBUG oslo_service.service [-] privsep.group = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.845 161842 DEBUG oslo_service.service [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.845 161842 DEBUG oslo_service.service [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.845 161842 DEBUG oslo_service.service [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.845 161842 DEBUG oslo_service.service [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.845 161842 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.845 161842 DEBUG oslo_service.service [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.846 161842 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.846 161842 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 
09:19:49.846 161842 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.846 161842 DEBUG oslo_service.service [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.846 161842 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.846 161842 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.846 161842 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.847 161842 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.847 161842 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.847 161842 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.847 161842 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.847 161842 DEBUG oslo_service.service [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.847 161842 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.847 161842 DEBUG oslo_service.service [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.847 161842 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.848 161842 DEBUG oslo_service.service [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.848 161842 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.848 161842 DEBUG oslo_service.service [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.848 161842 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 
09:19:49.848 161842 DEBUG oslo_service.service [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.848 161842 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.849 161842 DEBUG oslo_service.service [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.849 161842 DEBUG oslo_service.service [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.849 161842 DEBUG oslo_service.service [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.849 161842 DEBUG oslo_service.service [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.849 161842 DEBUG oslo_service.service [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.849 161842 DEBUG oslo_service.service [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.849 161842 DEBUG oslo_service.service [-] privsep_link.user = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.850 161842 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.850 161842 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.850 161842 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.850 161842 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.850 161842 DEBUG oslo_service.service [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.850 161842 DEBUG oslo_service.service [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.850 161842 DEBUG oslo_service.service [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.851 161842 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 
localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.851 161842 DEBUG oslo_service.service [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.851 161842 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.851 161842 DEBUG oslo_service.service [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.851 161842 DEBUG oslo_service.service [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.851 161842 DEBUG oslo_service.service [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.851 161842 DEBUG oslo_service.service [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.852 161842 DEBUG oslo_service.service [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.852 161842 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.852 161842 DEBUG oslo_service.service [-] QUOTAS.quota_subnet 
= 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.852 161842 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.852 161842 DEBUG oslo_service.service [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.852 161842 DEBUG oslo_service.service [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.852 161842 DEBUG oslo_service.service [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.853 161842 DEBUG oslo_service.service [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.853 161842 DEBUG oslo_service.service [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.853 161842 DEBUG oslo_service.service [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.853 161842 DEBUG oslo_service.service [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.853 161842 DEBUG oslo_service.service [-] nova.keyfile = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.853 161842 DEBUG oslo_service.service [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.854 161842 DEBUG oslo_service.service [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.854 161842 DEBUG oslo_service.service [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.854 161842 DEBUG oslo_service.service [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.854 161842 DEBUG oslo_service.service [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.854 161842 DEBUG oslo_service.service [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.854 161842 DEBUG oslo_service.service [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.854 161842 DEBUG oslo_service.service [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.854 161842 DEBUG oslo_service.service [-] placement.endpoint_type 
= public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.855 161842 DEBUG oslo_service.service [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.855 161842 DEBUG oslo_service.service [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.855 161842 DEBUG oslo_service.service [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.855 161842 DEBUG oslo_service.service [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.855 161842 DEBUG oslo_service.service [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.855 161842 DEBUG oslo_service.service [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.856 161842 DEBUG oslo_service.service [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.856 161842 DEBUG oslo_service.service [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.856 161842 DEBUG oslo_service.service [-] ironic.certfile 
= None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.856 161842 DEBUG oslo_service.service [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.856 161842 DEBUG oslo_service.service [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.856 161842 DEBUG oslo_service.service [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.856 161842 DEBUG oslo_service.service [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.857 161842 DEBUG oslo_service.service [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.857 161842 DEBUG oslo_service.service [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.857 161842 DEBUG oslo_service.service [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.857 161842 DEBUG oslo_service.service [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.857 161842 DEBUG oslo_service.service 
[-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.857 161842 DEBUG oslo_service.service [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.857 161842 DEBUG oslo_service.service [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.858 161842 DEBUG oslo_service.service [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.858 161842 DEBUG oslo_service.service [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.858 161842 DEBUG oslo_service.service [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.858 161842 DEBUG oslo_service.service [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.858 161842 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.858 161842 DEBUG oslo_service.service [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.858 161842 DEBUG 
oslo_service.service [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.858 161842 DEBUG oslo_service.service [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.859 161842 DEBUG oslo_service.service [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.859 161842 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.859 161842 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.859 161842 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.859 161842 DEBUG oslo_service.service [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.859 161842 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.859 161842 DEBUG oslo_service.service [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost 
ovn_metadata_agent[161837]: 2026-02-23 09:19:49.860 161842 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.860 161842 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.860 161842 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.860 161842 DEBUG oslo_service.service [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.860 161842 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.860 161842 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.860 161842 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.861 161842 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.861 161842 DEBUG oslo_service.service [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.861 161842 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.861 161842 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.861 161842 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.861 161842 DEBUG oslo_service.service [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.862 161842 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.862 161842 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.862 161842 DEBUG oslo_service.service [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.862 161842 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.862 161842 DEBUG oslo_service.service 
[-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.862 161842 DEBUG oslo_service.service [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.862 161842 DEBUG oslo_service.service [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.863 161842 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.863 161842 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.863 161842 DEBUG oslo_service.service [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.863 161842 DEBUG oslo_service.service [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.863 161842 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.863 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost 
ovn_metadata_agent[161837]: 2026-02-23 09:19:49.863 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.864 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.864 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.864 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.864 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.864 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.864 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.864 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 
09:19:49.865 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.865 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.865 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.865 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.865 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.866 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.866 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.866 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 
09:19:49.866 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.866 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.867 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.867 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.867 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.867 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.867 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.867 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.867 
161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.868 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.868 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.868 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.868 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.868 161842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.868 161842 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.869 161842 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.869 161842 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.869 161842 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:19:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:19:49.869 161842 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Feb 23 04:19:49 localhost python3.9[162026]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 23 04:19:51 localhost python3.9[162118]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:19:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30663 DF PROTO=TCP SPT=50460 DPT=9100 SEQ=2909330446 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011B6AAF0000000001030307) Feb 23 04:19:51 localhost python3.9[162193]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838390.590147-1423-213749623065410/.source.yaml _original_basename=.2nrx6pj7 follow=False checksum=5e9d1f3425ea21486875902a84faa4fb54cf7178 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:19:52 localhost systemd[1]: session-51.scope: Deactivated successfully. 
Feb 23 04:19:52 localhost systemd[1]: session-51.scope: Consumed 32.326s CPU time. Feb 23 04:19:52 localhost systemd-logind[759]: Session 51 logged out. Waiting for processes to exit. Feb 23 04:19:52 localhost systemd-logind[759]: Removed session 51. Feb 23 04:19:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30665 DF PROTO=TCP SPT=50460 DPT=9100 SEQ=2909330446 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011B76C30000000001030307) Feb 23 04:19:56 localhost sshd[162208]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:19:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22709 DF PROTO=TCP SPT=32948 DPT=9101 SEQ=341511012 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011B81840000000001030307) Feb 23 04:19:58 localhost sshd[162210]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:19:58 localhost systemd-logind[759]: New session 52 of user zuul. Feb 23 04:19:58 localhost systemd[1]: Started Session 52 of User zuul. 
Feb 23 04:19:59 localhost python3.9[162303]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:20:00 localhost python3.9[162399]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:20:01 localhost python3.9[162504]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:20:01 localhost systemd[1]: libpod-6e90a2d81f48f74065f9647e39f4f6a4d49a769228de6b6f7b0d1b32943c83f9.scope: Deactivated successfully. Feb 23 04:20:01 localhost podman[162505]: 2026-02-23 09:20:01.511319292 +0000 UTC m=+0.072436755 container died 6e90a2d81f48f74065f9647e39f4f6a4d49a769228de6b6f7b0d1b32943c83f9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, maintainer=OpenStack TripleO Team, version=17.1.13, build-date=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 23 04:20:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e90a2d81f48f74065f9647e39f4f6a4d49a769228de6b6f7b0d1b32943c83f9-userdata-shm.mount: Deactivated successfully. Feb 23 04:20:01 localhost podman[162505]: 2026-02-23 09:20:01.53906187 +0000 UTC m=+0.100179323 container cleanup 6e90a2d81f48f74065f9647e39f4f6a4d49a769228de6b6f7b0d1b32943c83f9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-libvirt, vcs-type=git, version=17.1.13, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
org.opencontainers.image.created=2026-01-12T23:31:49Z, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public) Feb 23 04:20:01 localhost systemd[1]: var-lib-containers-storage-overlay-bf88b2818e1f9a11fb9fd8baf45bbb8189e63ee5a93830a36886c8def5f28b71-merged.mount: Deactivated successfully. Feb 23 04:20:01 localhost podman[162520]: 2026-02-23 09:20:01.604056773 +0000 UTC m=+0.085036784 container remove 6e90a2d81f48f74065f9647e39f4f6a4d49a769228de6b6f7b0d1b32943c83f9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:31:49Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64) Feb 23 04:20:01 localhost systemd[1]: libpod-conmon-6e90a2d81f48f74065f9647e39f4f6a4d49a769228de6b6f7b0d1b32943c83f9.scope: 
Deactivated successfully. Feb 23 04:20:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64937 DF PROTO=TCP SPT=54782 DPT=9105 SEQ=24843052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011B95340000000001030307) Feb 23 04:20:02 localhost python3.9[162626]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 23 04:20:02 localhost systemd[1]: Reloading. Feb 23 04:20:02 localhost systemd-rc-local-generator[162647]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:20:02 localhost systemd-sysv-generator[162651]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:20:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:20:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64938 DF PROTO=TCP SPT=54782 DPT=9105 SEQ=24843052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011B99430000000001030307) Feb 23 04:20:03 localhost python3.9[162752]: ansible-ansible.builtin.service_facts Invoked Feb 23 04:20:03 localhost network[162769]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 23 04:20:03 localhost network[162770]: 'network-scripts' will be removed from distribution in near future. 
Feb 23 04:20:03 localhost network[162771]: It is advised to switch to 'NetworkManager' instead for network management. Feb 23 04:20:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64939 DF PROTO=TCP SPT=54782 DPT=9105 SEQ=24843052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011BA1430000000001030307) Feb 23 04:20:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:20:08 localhost python3.9[162973]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:20:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64940 DF PROTO=TCP SPT=54782 DPT=9105 SEQ=24843052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011BB1030000000001030307) Feb 23 04:20:10 localhost systemd[1]: Reloading. Feb 23 04:20:10 localhost systemd-rc-local-generator[162998]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:20:10 localhost systemd-sysv-generator[163001]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:20:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:20:10 localhost systemd[1]: Stopped target tripleo_nova_libvirt.target. 
Feb 23 04:20:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3250 DF PROTO=TCP SPT=57376 DPT=9101 SEQ=2128268722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011BB99F0000000001030307) Feb 23 04:20:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:20:11 localhost systemd[1]: tmp-crun.iYdI4j.mount: Deactivated successfully. Feb 23 04:20:11 localhost podman[163106]: 2026-02-23 09:20:11.470057828 +0000 UTC m=+0.112835944 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 23 04:20:11 localhost podman[163106]: 2026-02-23 09:20:11.550392295 +0000 UTC m=+0.193170411 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 23 04:20:11 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:20:11 localhost python3.9[163105]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:20:12 localhost python3.9[163223]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:20:14 localhost python3.9[163393]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:20:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58794 DF PROTO=TCP SPT=52844 DPT=9882 SEQ=3513479386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011BC5840000000001030307) Feb 23 04:20:15 localhost python3.9[163486]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:20:16 localhost python3.9[163579]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:20:17 localhost python3.9[163672]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:20:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:20:17 localhost podman[163674]: 2026-02-23 09:20:17.298364896 +0000 UTC m=+0.065901869 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20260216) Feb 23 04:20:17 localhost podman[163674]: 2026-02-23 09:20:17.307709078 +0000 UTC m=+0.075246061 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 04:20:17 localhost 
systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:20:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64941 DF PROTO=TCP SPT=54782 DPT=9105 SEQ=24843052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011BD1830000000001030307) Feb 23 04:20:18 localhost python3.9[163782]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:20:18 localhost python3.9[163874]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:20:19 localhost python3.9[163966]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:20:19 localhost python3.9[164058]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:20:20 localhost python3.9[164150]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:20:21 localhost python3.9[164242]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:20:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49692 DF PROTO=TCP SPT=37468 DPT=9100 SEQ=2716296784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011BDFDE0000000001030307) Feb 23 04:20:22 localhost python3.9[164334]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:20:23 localhost python3.9[164426]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:20:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30668 DF PROTO=TCP SPT=50460 DPT=9100 SEQ=2909330446 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011BE7840000000001030307) Feb 23 04:20:24 localhost python3.9[164518]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:20:24 localhost python3.9[164610]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:20:25 localhost python3.9[164702]: ansible-ansible.builtin.file 
Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:20:25 localhost python3.9[164794]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:20:26 localhost python3.9[164886]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:20:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3254 DF PROTO=TCP SPT=57376 DPT=9101 SEQ=2128268722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011BF5840000000001030307) Feb 23 04:20:27 localhost python3.9[164978]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:20:28 localhost python3.9[165070]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:20:28 localhost python3.9[165162]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Feb 23 04:20:29 localhost sshd[165200]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:20:29 localhost python3.9[165255]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 23 04:20:29 localhost systemd[1]: Reloading. Feb 23 04:20:30 localhost systemd-sysv-generator[165283]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:20:30 localhost systemd-rc-local-generator[165280]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:20:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:20:30 localhost python3.9[165383]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:20:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5222 DF PROTO=TCP SPT=34134 DPT=9105 SEQ=495197662 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011C0A630000000001030307) Feb 23 04:20:32 localhost python3.9[165476]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:20:32 localhost python3.9[165569]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:20:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5223 DF PROTO=TCP SPT=34134 DPT=9105 SEQ=495197662 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011C0E830000000001030307) Feb 23 04:20:34 localhost python3.9[165662]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None 
removes=None stdin=None Feb 23 04:20:34 localhost python3.9[165755]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:20:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5224 DF PROTO=TCP SPT=34134 DPT=9105 SEQ=495197662 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011C16830000000001030307) Feb 23 04:20:35 localhost python3.9[165848]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:20:35 localhost sshd[165909]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:20:36 localhost python3.9[165943]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:20:38 localhost python3.9[166036]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None Feb 23 04:20:39 localhost python3.9[166129]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Feb 23 04:20:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 
TTL=62 ID=5225 DF PROTO=TCP SPT=34134 DPT=9105 SEQ=495197662 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011C26430000000001030307) Feb 23 04:20:39 localhost python3.9[166227]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005626465.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Feb 23 04:20:41 localhost python3.9[166327]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 23 04:20:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31190 DF PROTO=TCP SPT=34066 DPT=9101 SEQ=2345741730 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011C2ED00000000001030307) Feb 23 04:20:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:20:41 localhost podman[166382]: 2026-02-23 09:20:41.973799863 +0000 UTC m=+0.083900583 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 23 04:20:42 localhost podman[166382]: 2026-02-23 09:20:42.0129736 +0000 UTC m=+0.123074360 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.43.0) Feb 23 04:20:42 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:20:42 localhost python3.9[166381]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 23 04:20:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7925 DF PROTO=TCP SPT=54698 DPT=9102 SEQ=796795543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011C39830000000001030307) Feb 23 04:20:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5226 DF PROTO=TCP SPT=34134 DPT=9105 SEQ=495197662 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011C47830000000001030307) Feb 23 04:20:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:20:48 localhost systemd[1]: tmp-crun.jKhEN0.mount: Deactivated successfully. 
Feb 23 04:20:48 localhost podman[166472]: 2026-02-23 09:20:48.017423839 +0000 UTC m=+0.083592624 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:20:48 localhost 
podman[166472]: 2026-02-23 09:20:48.049872988 +0000 UTC m=+0.116041793 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 23 04:20:48 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:20:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:20:48.275 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:20:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:20:48.275 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:20:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:20:48.276 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:20:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52929 DF PROTO=TCP SPT=51686 DPT=9100 SEQ=1446356080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011C55100000000001030307) Feb 23 04:20:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52931 DF PROTO=TCP SPT=51686 DPT=9100 SEQ=1446356080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011C61030000000001030307) Feb 23 04:20:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31194 DF PROTO=TCP SPT=34066 
DPT=9101 SEQ=2345741730 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011C6B830000000001030307) Feb 23 04:21:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20190 DF PROTO=TCP SPT=54838 DPT=9105 SEQ=1109386932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011C7F940000000001030307) Feb 23 04:21:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20191 DF PROTO=TCP SPT=54838 DPT=9105 SEQ=1109386932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011C83830000000001030307) Feb 23 04:21:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20192 DF PROTO=TCP SPT=54838 DPT=9105 SEQ=1109386932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011C8B840000000001030307) Feb 23 04:21:07 localhost kernel: SELinux: Converting 2746 SID table entries... Feb 23 04:21:07 localhost kernel: SELinux: Context system_u:object_r:insights_client_cache_t:s0 became invalid (unmapped). 
Feb 23 04:21:07 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 23 04:21:07 localhost kernel: SELinux: policy capability open_perms=1 Feb 23 04:21:07 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 23 04:21:07 localhost kernel: SELinux: policy capability always_check_network=0 Feb 23 04:21:07 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 23 04:21:07 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 23 04:21:07 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 23 04:21:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20193 DF PROTO=TCP SPT=54838 DPT=9105 SEQ=1109386932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011C9B430000000001030307) Feb 23 04:21:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50205 DF PROTO=TCP SPT=47928 DPT=9101 SEQ=1276127490 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011CA4000000000001030307) Feb 23 04:21:12 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=19 res=1 Feb 23 04:21:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:21:13 localhost systemd[1]: tmp-crun.fp8MBn.mount: Deactivated successfully. 
Feb 23 04:21:13 localhost podman[167535]: 2026-02-23 09:21:13.042747771 +0000 UTC m=+0.103784919 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:21:13 localhost podman[167535]: 2026-02-23 09:21:13.111863161 +0000 UTC m=+0.172900319 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS) Feb 23 04:21:13 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:21:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62580 DF PROTO=TCP SPT=43114 DPT=9882 SEQ=3816641600 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011CAF830000000001030307) Feb 23 04:21:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20194 DF PROTO=TCP SPT=54838 DPT=9105 SEQ=1109386932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011CBB840000000001030307) Feb 23 04:21:18 localhost kernel: SELinux: Converting 2749 SID table entries... Feb 23 04:21:18 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 23 04:21:18 localhost kernel: SELinux: policy capability open_perms=1 Feb 23 04:21:18 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 23 04:21:18 localhost kernel: SELinux: policy capability always_check_network=0 Feb 23 04:21:18 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 23 04:21:18 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 23 04:21:18 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 23 04:21:18 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=20 res=1 Feb 23 04:21:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:21:19 localhost systemd[1]: tmp-crun.6C7vue.mount: Deactivated successfully. 
Feb 23 04:21:19 localhost podman[167653]: 2026-02-23 09:21:19.017403551 +0000 UTC m=+0.086417489 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 23 04:21:19 localhost 
podman[167653]: 2026-02-23 09:21:19.024028407 +0000 UTC m=+0.093042395 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 23 04:21:19 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:21:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30991 DF PROTO=TCP SPT=58740 DPT=9100 SEQ=3897345116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011CCA3F0000000001030307) Feb 23 04:21:21 localhost sshd[167679]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:21:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30993 DF PROTO=TCP SPT=58740 DPT=9100 SEQ=3897345116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011CD6430000000001030307) Feb 23 04:21:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50209 DF PROTO=TCP SPT=47928 DPT=9101 SEQ=1276127490 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011CDF830000000001030307) Feb 23 04:21:28 localhost kernel: SELinux: Converting 2752 SID table entries... 
Feb 23 04:21:28 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 23 04:21:28 localhost kernel: SELinux: policy capability open_perms=1 Feb 23 04:21:28 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 23 04:21:28 localhost kernel: SELinux: policy capability always_check_network=0 Feb 23 04:21:28 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 23 04:21:28 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 23 04:21:28 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 23 04:21:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24176 DF PROTO=TCP SPT=33178 DPT=9105 SEQ=743913288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011CF4C30000000001030307) Feb 23 04:21:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24177 DF PROTO=TCP SPT=33178 DPT=9105 SEQ=743913288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011CF8C30000000001030307) Feb 23 04:21:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24178 DF PROTO=TCP SPT=33178 DPT=9105 SEQ=743913288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011D00C30000000001030307) Feb 23 04:21:36 localhost kernel: SELinux: Converting 2752 SID table entries... 
Feb 23 04:21:36 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 23 04:21:36 localhost kernel: SELinux: policy capability open_perms=1 Feb 23 04:21:36 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 23 04:21:36 localhost kernel: SELinux: policy capability always_check_network=0 Feb 23 04:21:36 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 23 04:21:36 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 23 04:21:36 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 23 04:21:37 localhost systemd[1]: Reloading. Feb 23 04:21:37 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=22 res=1 Feb 23 04:21:37 localhost systemd-rc-local-generator[167724]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:21:37 localhost systemd-sysv-generator[167730]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:21:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:21:37 localhost systemd[1]: Reloading. Feb 23 04:21:37 localhost systemd-rc-local-generator[167760]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:21:37 localhost systemd-sysv-generator[167765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:21:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:21:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24179 DF PROTO=TCP SPT=33178 DPT=9105 SEQ=743913288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011D10830000000001030307) Feb 23 04:21:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8030 DF PROTO=TCP SPT=39370 DPT=9101 SEQ=1407852277 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011D19300000000001030307) Feb 23 04:21:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:21:44 localhost podman[167779]: 2026-02-23 09:21:44.014402443 +0000 UTC m=+0.086345787 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 23 04:21:44 localhost podman[167779]: 2026-02-23 09:21:44.046847143 +0000 UTC m=+0.118790467 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0) Feb 23 04:21:44 localhost systemd[1]: 
bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:21:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8032 DF PROTO=TCP SPT=39370 DPT=9101 SEQ=1407852277 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011D25430000000001030307) Feb 23 04:21:46 localhost kernel: SELinux: Converting 2753 SID table entries... Feb 23 04:21:46 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 23 04:21:46 localhost kernel: SELinux: policy capability open_perms=1 Feb 23 04:21:46 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 23 04:21:46 localhost kernel: SELinux: policy capability always_check_network=0 Feb 23 04:21:46 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 23 04:21:46 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 23 04:21:46 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 23 04:21:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24180 DF PROTO=TCP SPT=33178 DPT=9105 SEQ=743913288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011D31830000000001030307) Feb 23 04:21:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:21:48.276 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:21:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:21:48.277 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: 
waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:21:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:21:48.277 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:21:49 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=23 res=1 Feb 23 04:21:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:21:50 localhost podman[167855]: 2026-02-23 09:21:50.001278208 +0000 UTC m=+0.073291472 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:21:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46875 DF PROTO=TCP SPT=59468 DPT=9100 SEQ=4287378658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011D3F6F0000000001030307) Feb 23 04:21:51 localhost podman[167855]: 2026-02-23 09:21:51.471853114 +0000 UTC m=+1.543866378 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Feb 23 04:21:51 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:21:52 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload. 
Feb 23 04:21:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46877 DF PROTO=TCP SPT=59468 DPT=9100 SEQ=4287378658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011D4B830000000001030307) Feb 23 04:21:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8034 DF PROTO=TCP SPT=39370 DPT=9101 SEQ=1407852277 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011D55840000000001030307) Feb 23 04:22:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10468 DF PROTO=TCP SPT=38738 DPT=9105 SEQ=4258700727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011D69F30000000001030307) Feb 23 04:22:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10469 DF PROTO=TCP SPT=38738 DPT=9105 SEQ=4258700727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011D6E040000000001030307) Feb 23 04:22:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10470 DF PROTO=TCP SPT=38738 DPT=9105 SEQ=4258700727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011D76030000000001030307) Feb 23 04:22:06 localhost sshd[168986]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:22:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10471 DF PROTO=TCP SPT=38738 DPT=9105 
SEQ=4258700727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011D85C30000000001030307) Feb 23 04:22:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56764 DF PROTO=TCP SPT=44102 DPT=9101 SEQ=4122523703 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011D8E5F0000000001030307) Feb 23 04:22:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17762 DF PROTO=TCP SPT=57062 DPT=9102 SEQ=187945115 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011D99830000000001030307) Feb 23 04:22:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:22:15 localhost podman[176270]: 2026-02-23 09:22:15.020784393 +0000 UTC m=+0.087171959 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0) Feb 23 04:22:15 localhost podman[176270]: 2026-02-23 09:22:15.105787842 +0000 UTC m=+0.172175448 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.43.0, 
maintainer=OpenStack Kubernetes Operator team) Feb 23 04:22:15 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:22:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10472 DF PROTO=TCP SPT=38738 DPT=9105 SEQ=4258700727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011DA5830000000001030307) Feb 23 04:22:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26511 DF PROTO=TCP SPT=32776 DPT=9100 SEQ=1877677687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011DB49E0000000001030307) Feb 23 04:22:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:22:21 localhost podman[182385]: 2026-02-23 09:22:21.997869829 +0000 UTC m=+0.069082751 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:22:22 localhost podman[182385]: 2026-02-23 09:22:22.008886233 +0000 UTC m=+0.080099215 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2) Feb 23 04:22:22 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:22:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26513 DF PROTO=TCP SPT=32776 DPT=9100 SEQ=1877677687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011DC0C30000000001030307) Feb 23 04:22:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56768 DF PROTO=TCP SPT=44102 DPT=9101 SEQ=4122523703 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011DCB830000000001030307) Feb 23 04:22:30 localhost systemd[1]: Stopping OpenSSH server daemon... Feb 23 04:22:30 localhost systemd[1]: sshd.service: Deactivated successfully. 
Feb 23 04:22:30 localhost systemd[1]: Stopped OpenSSH server daemon. Feb 23 04:22:30 localhost systemd[1]: sshd.service: Consumed 1.728s CPU time, read 32.0K from disk, written 0B to disk. Feb 23 04:22:30 localhost systemd[1]: Stopped target sshd-keygen.target. Feb 23 04:22:30 localhost systemd[1]: Stopping sshd-keygen.target... Feb 23 04:22:30 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 23 04:22:30 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 23 04:22:30 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 23 04:22:30 localhost systemd[1]: Reached target sshd-keygen.target. Feb 23 04:22:30 localhost systemd[1]: Starting OpenSSH server daemon... Feb 23 04:22:30 localhost sshd[185797]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:22:30 localhost systemd[1]: Started OpenSSH server daemon. 
Feb 23 04:22:30 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:30 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:30 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:30 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:30 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:30 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:30 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:30 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:30 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:30 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:30 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:30 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:30 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:30 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:31 localhost systemd[1]: 
/usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:31 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:31 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:31 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:31 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:31 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:31 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:31 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10707 DF PROTO=TCP SPT=46250 DPT=9105 SEQ=1009254583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011DDF230000000001030307) Feb 23 04:22:32 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 23 04:22:32 localhost systemd[1]: Starting man-db-cache-update.service... Feb 23 04:22:32 localhost systemd[1]: Reloading. Feb 23 04:22:32 localhost systemd-rc-local-generator[186025]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:22:32 localhost systemd-sysv-generator[186028]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:22:32 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:32 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:22:32 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:32 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:32 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:32 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:32 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:32 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:32 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:32 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 23 04:22:32 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. 
Feb 23 04:22:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10708 DF PROTO=TCP SPT=46250 DPT=9105 SEQ=1009254583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011DE3440000000001030307) Feb 23 04:22:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10709 DF PROTO=TCP SPT=46250 DPT=9105 SEQ=1009254583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011DEB430000000001030307) Feb 23 04:22:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10710 DF PROTO=TCP SPT=46250 DPT=9105 SEQ=1009254583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011DFB030000000001030307) Feb 23 04:22:40 localhost python3.9[193624]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 23 04:22:41 localhost systemd[1]: Reloading. Feb 23 04:22:41 localhost systemd-sysv-generator[193844]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:22:41 localhost systemd-rc-local-generator[193837]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:22:41 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 23 04:22:41 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:41 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:41 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:41 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:41 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:41 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:41 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51622 DF PROTO=TCP SPT=36206 DPT=9101 SEQ=1017230799 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011E03900000000001030307) Feb 23 04:22:41 localhost python3.9[194229]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 23 04:22:42 localhost systemd[1]: Reloading. Feb 23 04:22:42 localhost systemd-rc-local-generator[194466]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:22:42 localhost systemd-sysv-generator[194469]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:22:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:22:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:43 localhost python3.9[195020]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 23 04:22:43 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 23 04:22:43 localhost systemd[1]: Finished man-db-cache-update.service. Feb 23 04:22:43 localhost systemd[1]: man-db-cache-update.service: Consumed 13.871s CPU time. 
Feb 23 04:22:43 localhost systemd[1]: run-rba40f14c4a7046b28a57c8d86c4a6572.service: Deactivated successfully. Feb 23 04:22:43 localhost systemd[1]: run-re27f7b21e6284cd5be1254732804bce3.service: Deactivated successfully. Feb 23 04:22:44 localhost systemd[1]: Reloading. Feb 23 04:22:44 localhost systemd-rc-local-generator[195184]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:22:44 localhost systemd-sysv-generator[195187]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:22:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15810 DF PROTO=TCP SPT=52428 DPT=9882 SEQ=1766682852 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011E0F830000000001030307) Feb 23 04:22:44 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:22:44 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:44 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:44 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:44 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:44 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:44 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:44 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:22:45 localhost python3.9[195306]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 23 04:22:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:22:45 localhost systemd[1]: Reloading. 
Feb 23 04:22:45 localhost podman[195308]: 2026-02-23 09:22:45.411534703 +0000 UTC m=+0.116184171 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, managed_by=edpm_ansible) Feb 23 04:22:45 localhost systemd-rc-local-generator[195350]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:22:45 localhost systemd-sysv-generator[195355]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 04:22:45 localhost podman[195308]: 2026-02-23 09:22:45.453842181 +0000 UTC m=+0.158491679 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216)
Feb 23 04:22:45 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 04:22:45 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:45 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:45 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:45 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:45 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:45 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:45 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:45 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully.
Feb 23 04:22:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10711 DF PROTO=TCP SPT=46250 DPT=9105 SEQ=1009254583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011E1B830000000001030307)
Feb 23 04:22:47 localhost python3.9[195479]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:22:47 localhost systemd[1]: Reloading.
Feb 23 04:22:47 localhost systemd-sysv-generator[195510]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 04:22:47 localhost systemd-rc-local-generator[195505]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 04:22:47 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 04:22:47 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:47 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:47 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:47 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:47 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:47 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:47 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:22:48.277 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 23 04:22:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:22:48.278 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 23 04:22:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:22:48.278 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 23 04:22:48 localhost python3.9[195628]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:22:49 localhost systemd[1]: Reloading.
Feb 23 04:22:49 localhost systemd-sysv-generator[195660]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 04:22:49 localhost systemd-rc-local-generator[195655]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 04:22:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 04:22:49 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:49 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:49 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:49 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:50 localhost python3.9[195777]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:22:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=75 DF PROTO=TCP SPT=43154 DPT=9100 SEQ=1911829756 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011E29CF0000000001030307)
Feb 23 04:22:51 localhost sshd[195780]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:22:51 localhost systemd[1]: Reloading.
Feb 23 04:22:51 localhost systemd-rc-local-generator[195809]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 04:22:51 localhost systemd-sysv-generator[195812]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 04:22:52 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:52 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:52 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 04:22:52 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:52 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:52 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:52 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:52 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.
Feb 23 04:22:52 localhost podman[195818]: 2026-02-23 09:22:52.268729619 +0000 UTC m=+0.067595669 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 23 04:22:52 localhost podman[195818]: 2026-02-23 09:22:52.302006968 +0000 UTC m=+0.100872988 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 23 04:22:52 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully.
Feb 23 04:22:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26516 DF PROTO=TCP SPT=32776 DPT=9100 SEQ=1877677687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011E31830000000001030307)
Feb 23 04:22:53 localhost python3.9[195946]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:22:54 localhost python3.9[196059]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:22:54 localhost systemd[1]: Reloading.
Feb 23 04:22:54 localhost systemd-rc-local-generator[196084]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 04:22:54 localhost systemd-sysv-generator[196087]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 04:22:54 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:54 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:54 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:54 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 04:22:54 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:54 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:54 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:54 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51626 DF PROTO=TCP SPT=36206 DPT=9101 SEQ=1017230799 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011E3F830000000001030307)
Feb 23 04:22:56 localhost python3.9[196208]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 23 04:22:56 localhost systemd[1]: Reloading.
Feb 23 04:22:56 localhost systemd-rc-local-generator[196232]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 04:22:56 localhost systemd-sysv-generator[196240]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 04:22:56 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:56 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:56 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:56 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 04:22:56 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:56 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:56 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:56 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:22:58 localhost python3.9[196356]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:22:58 localhost python3.9[196469]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:22:59 localhost python3.9[196582]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:23:00 localhost python3.9[196695]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:23:01 localhost python3.9[196808]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:23:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14447 DF PROTO=TCP SPT=47232 DPT=9105 SEQ=3595600769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011E54530000000001030307)
Feb 23 04:23:01 localhost python3.9[196921]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:23:02 localhost sshd[196986]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:23:02 localhost python3.9[197036]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:23:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14448 DF PROTO=TCP SPT=47232 DPT=9105 SEQ=3595600769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011E58430000000001030307)
Feb 23 04:23:04 localhost python3.9[197149]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:23:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14449 DF PROTO=TCP SPT=47232 DPT=9105 SEQ=3595600769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011E60430000000001030307)
Feb 23 04:23:05 localhost python3.9[197262]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:23:06 localhost python3.9[197375]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:23:07 localhost python3.9[197488]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:23:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14450 DF PROTO=TCP SPT=47232 DPT=9105 SEQ=3595600769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011E70030000000001030307)
Feb 23 04:23:09 localhost python3.9[197601]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:23:10 localhost python3.9[197714]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:23:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33667 DF PROTO=TCP SPT=57438 DPT=9101 SEQ=594309609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011E78C10000000001030307)
Feb 23 04:23:11 localhost python3.9[197827]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 23 04:23:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30425 DF PROTO=TCP SPT=35310 DPT=9102 SEQ=3263412845 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011E83830000000001030307)
Feb 23 04:23:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.
Feb 23 04:23:16 localhost podman[197848]: 2026-02-23 09:23:16.021897462 +0000 UTC m=+0.091795558 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 23 04:23:16 localhost podman[197848]: 2026-02-23 09:23:16.126414961 +0000 UTC m=+0.196313017 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260216, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 23 04:23:16 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully.
Feb 23 04:23:17 localhost python3.9[197967]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:23:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14451 DF PROTO=TCP SPT=47232 DPT=9105 SEQ=3595600769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011E8F830000000001030307)
Feb 23 04:23:17 localhost python3.9[198077]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:23:18 localhost python3.9[198187]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:23:19 localhost python3.9[198297]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:23:19 localhost python3.9[198444]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:23:20 localhost python3.9[198584]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:23:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21222 DF PROTO=TCP SPT=60744 DPT=9100 SEQ=1809844320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011E9EFF0000000001030307)
Feb 23 04:23:21 localhost python3.9[198710]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 23 04:23:22 localhost python3.9[198820]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:23:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.
Feb 23 04:23:22 localhost podman[198911]: 2026-02-23 09:23:22.884221306 +0000 UTC m=+0.084211013 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 23 04:23:22 localhost podman[198911]: 2026-02-23 09:23:22.893748331 +0000 UTC m=+0.093738078 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 23 04:23:22 localhost
systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:23:22 localhost python3.9[198910]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771838601.549661-1663-168759477034018/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:23 localhost python3.9[199037]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21224 DF PROTO=TCP SPT=60744 DPT=9100 SEQ=1809844320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011EAB1F0000000001030307) Feb 23 04:23:24 localhost python3.9[199127]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771838603.1725247-1663-165914949341790/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:24 localhost python3.9[199237]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:25 localhost python3.9[199327]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771838604.4910445-1663-24944904330913/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:26 localhost python3.9[199437]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33671 DF PROTO=TCP SPT=57438 DPT=9101 SEQ=594309609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011EB5840000000001030307) Feb 23 04:23:26 localhost python3.9[199527]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771838605.8356664-1663-3984684894724/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:27 localhost python3.9[199637]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
get_selinux_context=False Feb 23 04:23:28 localhost python3.9[199727]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771838607.000613-1663-111249852070494/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:29 localhost python3.9[199837]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:29 localhost python3.9[199927]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771838608.6352744-1663-145147374772432/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:30 localhost python3.9[200037]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:31 localhost python3.9[200125]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771838609.8896735-1663-127396453877392/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True 
remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14420 DF PROTO=TCP SPT=55298 DPT=9105 SEQ=1960625703 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011EC9840000000001030307) Feb 23 04:23:32 localhost python3.9[200235]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:32 localhost python3.9[200325]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771838611.6108522-1663-125967160105899/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14421 DF PROTO=TCP SPT=55298 DPT=9105 SEQ=1960625703 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011ECD840000000001030307) Feb 23 04:23:33 localhost python3.9[200435]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:33 localhost python3.9[200545]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:34 localhost python3.9[200655]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14422 DF PROTO=TCP SPT=55298 DPT=9105 SEQ=1960625703 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011ED5840000000001030307) Feb 23 04:23:35 localhost python3.9[200765]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:35 localhost python3.9[200875]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root 
path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:36 localhost python3.9[200985]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:36 localhost python3.9[201095]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:37 localhost python3.9[201205]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:38 localhost python3.9[201315]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:38 localhost sshd[201426]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:23:38 localhost python3.9[201425]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14423 DF PROTO=TCP SPT=55298 DPT=9105 SEQ=1960625703 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011EE5430000000001030307) Feb 23 04:23:39 localhost python3.9[201537]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:40 localhost python3.9[201647]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None 
seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:41 localhost python3.9[201757]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56108 DF PROTO=TCP SPT=36282 DPT=9101 SEQ=318743145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011EEDEF0000000001030307) Feb 23 04:23:41 localhost python3.9[201867]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:43 localhost python3.9[201977]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:43 localhost python3.9[202087]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4906 DF PROTO=TCP SPT=58768 DPT=9882 SEQ=7923786 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011EF9830000000001030307) Feb 23 04:23:44 localhost python3.9[202175]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838623.4826236-2326-83553870765323/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:45 localhost python3.9[202285]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:45 localhost python3.9[202373]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838624.6943798-2326-131992687054587/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:46 localhost python3.9[202483]: ansible-ansible.legacy.stat Invoked with 
path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:23:46 localhost podman[202572]: 2026-02-23 09:23:46.733760532 +0000 UTC m=+0.086385440 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:23:46 localhost systemd[1]: tmp-crun.BBsbXK.mount: Deactivated successfully. 
Feb 23 04:23:46 localhost podman[202572]: 2026-02-23 09:23:46.800814974 +0000 UTC m=+0.153439852 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible) Feb 23 04:23:46 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:23:46 localhost python3.9[202571]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838625.8394089-2326-182198954189399/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14424 DF PROTO=TCP SPT=55298 DPT=9105 SEQ=1960625703 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011F05830000000001030307) Feb 23 04:23:47 localhost python3.9[202706]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:47 localhost python3.9[202794]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838626.9858358-2326-29949928797472/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:23:48.277 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:23:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:23:48.278 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:23:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:23:48.278 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:23:48 localhost python3.9[202904]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:49 localhost python3.9[202992]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838628.133264-2326-233776572616932/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:49 localhost python3.9[203102]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:50 
localhost python3.9[203190]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838629.2726297-2326-164764923218590/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1275 DF PROTO=TCP SPT=54086 DPT=9100 SEQ=3086184820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011F142F0000000001030307) Feb 23 04:23:51 localhost python3.9[203300]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:51 localhost python3.9[203388]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838630.8271503-2326-242651951496787/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:52 localhost python3.9[203498]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 
get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:23:53 localhost podman[203587]: 2026-02-23 09:23:53.345169313 +0000 UTC m=+0.085449333 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_managed=true, 
io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Feb 23 04:23:53 localhost podman[203587]: 2026-02-23 09:23:53.353919524 +0000 UTC m=+0.094199524 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 04:23:53 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:23:53 localhost python3.9[203586]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838631.949279-2326-75753392576515/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:54 localhost python3.9[203714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1277 DF PROTO=TCP SPT=54086 DPT=9100 SEQ=3086184820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011F20440000000001030307) Feb 23 04:23:55 localhost python3.9[203802]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838633.5774891-2326-268871403730471/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None 
setype=None attributes=None Feb 23 04:23:55 localhost python3.9[203912]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:56 localhost python3.9[204000]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838635.4067013-2326-66079222796107/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56112 DF PROTO=TCP SPT=36282 DPT=9101 SEQ=318743145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011F29830000000001030307) Feb 23 04:23:57 localhost python3.9[204110]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:57 localhost python3.9[204198]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838636.5947466-2326-223592601346307/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None 
directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:58 localhost python3.9[204308]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:23:58 localhost python3.9[204396]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838637.7815337-2326-213089192386928/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:23:59 localhost python3.9[204506]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:24:00 localhost python3.9[204594]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838638.9735928-2326-69015466141987/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:00 localhost python3.9[204704]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False 
get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:24:01 localhost python3.9[204792]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838640.137077-2326-263645765358949/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46748 DF PROTO=TCP SPT=44486 DPT=9105 SEQ=1066345280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011F3EB40000000001030307) Feb 23 04:24:02 localhost python3.9[204900]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:24:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46749 DF PROTO=TCP SPT=44486 DPT=9105 SEQ=1066345280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011F42C30000000001030307) Feb 23 04:24:03 localhost python3.9[205013]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Feb 23 04:24:04 localhost python3.9[205123]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True 
name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 04:24:04 localhost systemd[1]: Reloading. Feb 23 04:24:04 localhost systemd-rc-local-generator[205147]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:24:04 localhost systemd-sysv-generator[205151]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:24:04 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:04 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:04 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:04 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:24:04 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:04 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:04 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:04 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:04 localhost systemd[1]: Starting libvirt logging daemon socket... 
Feb 23 04:24:04 localhost systemd[1]: Listening on libvirt logging daemon socket. Feb 23 04:24:04 localhost systemd[1]: Starting libvirt logging daemon admin socket... Feb 23 04:24:04 localhost systemd[1]: Listening on libvirt logging daemon admin socket. Feb 23 04:24:04 localhost systemd[1]: Starting libvirt logging daemon... Feb 23 04:24:04 localhost systemd[1]: Started libvirt logging daemon. Feb 23 04:24:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46750 DF PROTO=TCP SPT=44486 DPT=9105 SEQ=1066345280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011F4AC30000000001030307) Feb 23 04:24:06 localhost python3.9[205275]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 04:24:06 localhost systemd[1]: Reloading. Feb 23 04:24:06 localhost systemd-rc-local-generator[205300]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:24:06 localhost systemd-sysv-generator[205305]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:06 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Feb 23 04:24:06 localhost systemd[1]: Starting libvirt nodedev daemon socket... Feb 23 04:24:06 localhost systemd[1]: Listening on libvirt nodedev daemon socket. Feb 23 04:24:06 localhost systemd[1]: Starting libvirt nodedev daemon admin socket... Feb 23 04:24:06 localhost systemd[1]: Starting libvirt nodedev daemon read-only socket... Feb 23 04:24:06 localhost systemd[1]: Listening on libvirt nodedev daemon admin socket. Feb 23 04:24:06 localhost systemd[1]: Listening on libvirt nodedev daemon read-only socket. 
Feb 23 04:24:06 localhost systemd[1]: Started libvirt nodedev daemon. Feb 23 04:24:06 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Feb 23 04:24:06 localhost systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged. Feb 23 04:24:06 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service. Feb 23 04:24:07 localhost python3.9[205451]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 04:24:07 localhost systemd[1]: Reloading. Feb 23 04:24:07 localhost systemd-sysv-generator[205480]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:24:07 localhost systemd-rc-local-generator[205475]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:07 localhost systemd[1]: Starting libvirt proxy daemon socket... Feb 23 04:24:07 localhost systemd[1]: Listening on libvirt proxy daemon socket. Feb 23 04:24:07 localhost systemd[1]: Starting libvirt proxy daemon admin socket... Feb 23 04:24:07 localhost systemd[1]: Starting libvirt proxy daemon read-only socket... Feb 23 04:24:07 localhost systemd[1]: Listening on libvirt proxy daemon admin socket. Feb 23 04:24:07 localhost systemd[1]: Listening on libvirt proxy daemon read-only socket. Feb 23 04:24:07 localhost systemd[1]: Started libvirt proxy daemon. Feb 23 04:24:07 localhost setroubleshoot[205313]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 1bd9db2d-a111-4110-960a-e0d162472f16 Feb 23 04:24:07 localhost setroubleshoot[205313]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. 
Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Feb 23 04:24:07 localhost setroubleshoot[205313]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 1bd9db2d-a111-4110-960a-e0d162472f16 Feb 23 04:24:07 localhost setroubleshoot[205313]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. 
Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Feb 23 04:24:08 localhost python3.9[205632]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 04:24:08 localhost systemd[1]: Reloading. Feb 23 04:24:08 localhost systemd-rc-local-generator[205655]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:24:08 localhost systemd-sysv-generator[205658]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:24:08 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:08 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:08 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:08 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 23 04:24:08 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:08 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:08 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:08 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:08 localhost systemd[1]: Listening on libvirt locking daemon socket. Feb 23 04:24:08 localhost systemd[1]: Starting libvirt QEMU daemon socket... Feb 23 04:24:08 localhost systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 23 04:24:08 localhost systemd[1]: Starting Virtual Machine and Container Registration Service... Feb 23 04:24:08 localhost systemd[1]: Listening on libvirt QEMU daemon socket. Feb 23 04:24:08 localhost systemd[1]: Starting libvirt QEMU daemon admin socket... Feb 23 04:24:08 localhost systemd[1]: Starting libvirt QEMU daemon read-only socket... Feb 23 04:24:08 localhost systemd[1]: Listening on libvirt QEMU daemon admin socket. Feb 23 04:24:08 localhost systemd[1]: Listening on libvirt QEMU daemon read-only socket. Feb 23 04:24:08 localhost systemd[1]: Started Virtual Machine and Container Registration Service. Feb 23 04:24:08 localhost systemd[1]: Started libvirt QEMU daemon. 
Feb 23 04:24:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46751 DF PROTO=TCP SPT=44486 DPT=9105 SEQ=1066345280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011F5A830000000001030307) Feb 23 04:24:09 localhost python3.9[205806]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 04:24:09 localhost systemd[1]: Reloading. Feb 23 04:24:09 localhost systemd-sysv-generator[205836]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:24:09 localhost systemd-rc-local-generator[205832]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:24:10 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:10 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:10 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:10 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:24:10 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:10 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:10 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:10 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:10 localhost systemd[1]: Starting libvirt secret daemon socket... Feb 23 04:24:10 localhost systemd[1]: Listening on libvirt secret daemon socket. Feb 23 04:24:10 localhost systemd[1]: Starting libvirt secret daemon admin socket... Feb 23 04:24:10 localhost systemd[1]: Starting libvirt secret daemon read-only socket... Feb 23 04:24:10 localhost systemd[1]: Listening on libvirt secret daemon admin socket. Feb 23 04:24:10 localhost systemd[1]: Listening on libvirt secret daemon read-only socket. Feb 23 04:24:10 localhost systemd[1]: Started libvirt secret daemon. 
Feb 23 04:24:11 localhost python3.9[205977]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52132 DF PROTO=TCP SPT=34566 DPT=9101 SEQ=3532590354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011F63200000000001030307) Feb 23 04:24:11 localhost python3.9[206087]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Feb 23 04:24:12 localhost python3.9[206197]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:24:13 localhost python3.9[206309]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None 
limit=None Feb 23 04:24:14 localhost python3.9[206417]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:24:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52134 DF PROTO=TCP SPT=34566 DPT=9101 SEQ=3532590354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011F6F430000000001030307) Feb 23 04:24:14 localhost python3.9[206503]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838653.6305535-3190-18341204862899/.source.xml follow=False _original_basename=secret.xml.j2 checksum=9110e86c46036bf6b9c9b3a9e049196c9a537971 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:15 localhost python3.9[206613]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine f1fea371-cb69-578d-a3d0-b5c472a84b46#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:24:16 localhost python3.9[206733]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:16 localhost systemd[1]: Started /usr/bin/podman 
healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:24:17 localhost podman[206862]: 2026-02-23 09:24:17.031023325 +0000 UTC m=+0.102472121 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 23 04:24:17 localhost podman[206862]: 2026-02-23 09:24:17.089926794 +0000 UTC m=+0.161375630 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.43.0, 
org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260216) Feb 23 04:24:17 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:24:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46752 DF PROTO=TCP SPT=44486 DPT=9105 SEQ=1066345280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011F7B840000000001030307) Feb 23 04:24:18 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully. Feb 23 04:24:18 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. 
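[editor's note] The recurring kernel `DROPPING:` entries above are netfilter packet-log messages in a fixed `KEY=VALUE` layout (flag-only tokens such as `DF` and `SYN` carry no value). A minimal sketch, assuming Python, of pulling the fields of interest out of one such line; the function name and sample line trimming are illustrative only:

```python
def parse_drop(line: str) -> dict:
    """Extract KEY=VALUE pairs from a kernel packet-drop log line.
    Flag-only tokens (e.g. DF, SYN) are recorded with value True."""
    fields = {}
    # Keep only the payload after the "DROPPING:" log prefix.
    payload = line.split("DROPPING:", 1)[1]
    for tok in payload.split():
        if "=" in tok:
            key, _, value = tok.partition("=")
            fields[key] = value
        else:
            fields[tok] = True
    return fields

# Sample abridged from the entries above.
sample = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c "
          "SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TTL=62 "
          "ID=52134 DF PROTO=TCP SPT=34566 DPT=9101 SYN URGP=0")
f = parse_drop(sample)
print(f["SRC"], f["DST"], f["PROTO"], f["DPT"])  # 192.168.122.10 192.168.122.107 TCP 9101
```

All drops in this excerpt share SRC=192.168.122.10 and destination ports 9100-9105, i.e. blocked SYNs toward what look like metrics-exporter ports on 192.168.122.107.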
Feb 23 04:24:19 localhost python3.9[207093]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:19 localhost python3.9[207203]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:24:20 localhost python3.9[207291]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838659.4822495-3355-97385159053208/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60620 DF PROTO=TCP SPT=60674 DPT=9100 SEQ=1075409691 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011F895F0000000001030307) Feb 23 04:24:21 localhost python3.9[207483]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None 
seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:21 localhost podman[207509]: 2026-02-23 09:24:21.807698672 +0000 UTC m=+0.082842235 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7, RELEASE=main, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.42.2, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, version=7) Feb 23 04:24:21 localhost podman[207509]: 2026-02-23 09:24:21.92585488 +0000 UTC m=+0.200998453 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, GIT_BRANCH=main, vcs-type=git, io.buildah.version=1.42.2, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7) Feb 23 04:24:22 localhost python3.9[207700]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:24:23 localhost python3.9[207789]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. 
Feb 23 04:24:23 localhost podman[207917]: 2026-02-23 09:24:23.650547128 +0000 UTC m=+0.097015028 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:24:23 localhost 
podman[207917]: 2026-02-23 09:24:23.687995216 +0000 UTC m=+0.134463106 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:24:23 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:24:23 localhost python3.9[207916]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:24:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60622 DF PROTO=TCP SPT=60674 DPT=9100 SEQ=1075409691 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011F95830000000001030307) Feb 23 04:24:24 localhost python3.9[208009]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.1kaar1hc recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:24 localhost python3.9[208119]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:24:25 localhost python3.9[208176]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None 
Feb 23 04:24:25 localhost sshd[208287]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:24:26 localhost python3.9[208286]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:24:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52136 DF PROTO=TCP SPT=34566 DPT=9101 SEQ=3532590354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011F9F830000000001030307) Feb 23 04:24:26 localhost python3[208399]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Feb 23 04:24:27 localhost python3.9[208509]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:24:28 localhost python3.9[208566]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:28 localhost python3.9[208676]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:24:29 localhost python3.9[208766]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root 
src=/home/zuul/.ansible/tmp/ansible-tmp-1771838668.3467298-3622-161694258237160/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:30 localhost python3.9[208876]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:24:30 localhost python3.9[208933]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:31 localhost python3.9[209043]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:24:31 localhost python3.9[209100]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46826 DF PROTO=TCP SPT=42286 DPT=9105 SEQ=609236790 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011FB3E30000000001030307) Feb 23 04:24:32 localhost python3.9[209210]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:24:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46827 DF PROTO=TCP SPT=42286 DPT=9105 SEQ=609236790 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011FB8030000000001030307) Feb 23 04:24:33 localhost python3.9[209300]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771838672.0654511-3739-155367227552251/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:34 localhost python3.9[209410]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:34 localhost python3.9[209520]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft 
/etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:24:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46828 DF PROTO=TCP SPT=42286 DPT=9105 SEQ=609236790 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011FC0030000000001030307) Feb 23 04:24:35 localhost python3.9[209633]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:36 localhost python3.9[209743]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:24:37 localhost python3.9[209854]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:24:37 localhost python3.9[209966]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft 
/etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:24:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46829 DF PROTO=TCP SPT=42286 DPT=9105 SEQ=609236790 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011FCFC30000000001030307) Feb 23 04:24:39 localhost python3.9[210079]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:39 localhost python3.9[210189]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:24:40 localhost python3.9[210277]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838679.3881469-3955-264437320945867/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28743 DF PROTO=TCP SPT=38542 DPT=9101 SEQ=3925064893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011FD8500000000001030307) Feb 23 04:24:41 localhost python3.9[210387]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:24:42 localhost python3.9[210475]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838681.0399196-4000-80105921023832/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:42 localhost python3.9[210585]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:24:43 localhost python3.9[210673]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838682.2348146-4045-77802595680574/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:24:44 localhost python3.9[210783]: ansible-ansible.builtin.systemd Invoked with 
daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:24:44 localhost systemd[1]: Reloading. Feb 23 04:24:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6168 DF PROTO=TCP SPT=49628 DPT=9102 SEQ=2674615492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011FE3830000000001030307) Feb 23 04:24:44 localhost systemd-sysv-generator[210811]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:24:44 localhost systemd-rc-local-generator[210807]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:24:44 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:44 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:44 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:44 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
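[editor's note] The `ansible-ansible.builtin.blockinfile` entry above logs its payload with `#012` escapes (journald's octal encoding of embedded newlines) and markers `marker=# {mark} ANSIBLE MANAGED BLOCK`, `marker_begin=BEGIN`, `marker_end=END`. Decoded, the block it maintains in `/etc/sysconfig/nftables.conf` (validated via `nft -c -f %s` per the logged `validate=` parameter) is:

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```

This matches the apply sequence logged just before it: `nft -f /etc/nftables/edpm-chains.nft`, then `cat edpm-flushes.nft edpm-rules.nft edpm-update-jumps.nft | nft -f -`, with the `.changed` sentinel file touched before and removed after.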
Feb 23 04:24:44 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:44 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:44 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:44 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:44 localhost systemd[1]: Reached target edpm_libvirt.target. Feb 23 04:24:45 localhost python3.9[210932]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None Feb 23 04:24:45 localhost systemd[1]: Reloading. Feb 23 04:24:45 localhost systemd-rc-local-generator[210957]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:24:45 localhost systemd-sysv-generator[210960]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:45 localhost systemd[1]: Reloading. Feb 23 04:24:45 localhost systemd-rc-local-generator[210993]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:24:45 localhost systemd-sysv-generator[210996]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:45 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:24:46 localhost systemd[1]: session-52.scope: Deactivated successfully. Feb 23 04:24:46 localhost systemd[1]: session-52.scope: Consumed 3min 20.622s CPU time. Feb 23 04:24:46 localhost systemd-logind[759]: Session 52 logged out. Waiting for processes to exit. Feb 23 04:24:46 localhost systemd-logind[759]: Removed session 52. 
Feb 23 04:24:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46830 DF PROTO=TCP SPT=42286 DPT=9105 SEQ=609236790 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011FEF830000000001030307) Feb 23 04:24:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:24:48 localhost podman[211024]: 2026-02-23 09:24:48.049409329 +0000 UTC m=+0.120837638 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack 
Kubernetes Operator team, tcib_managed=true) Feb 23 04:24:48 localhost podman[211024]: 2026-02-23 09:24:48.08961469 +0000 UTC m=+0.161042939 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 23 04:24:48 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:24:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:24:48.278 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:24:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:24:48.278 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:24:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:24:48.278 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:24:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46426 DF PROTO=TCP SPT=54780 DPT=9100 SEQ=4237953337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A011FFE8F0000000001030307) Feb 23 04:24:52 localhost sshd[211049]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:24:52 localhost systemd-logind[759]: New session 53 of user zuul. Feb 23 04:24:52 localhost systemd[1]: Started Session 53 of User zuul. Feb 23 04:24:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. 
Feb 23 04:24:54 localhost podman[211124]: 2026-02-23 09:24:54.012298762 +0000 UTC m=+0.087135549 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:24:54 localhost 
podman[211124]: 2026-02-23 09:24:54.043719201 +0000 UTC m=+0.118555978 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 23 04:24:54 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:24:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46428 DF PROTO=TCP SPT=54780 DPT=9100 SEQ=4237953337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01200A830000000001030307) Feb 23 04:24:54 localhost python3.9[211178]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:24:55 localhost python3.9[211290]: ansible-ansible.builtin.service_facts Invoked Feb 23 04:24:55 localhost network[211307]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 23 04:24:55 localhost network[211308]: 'network-scripts' will be removed from distribution in near future. Feb 23 04:24:55 localhost network[211309]: It is advised to switch to 'NetworkManager' instead for network management. Feb 23 04:24:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28747 DF PROTO=TCP SPT=38542 DPT=9101 SEQ=3925064893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012013830000000001030307) Feb 23 04:24:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:25:00 localhost python3.9[211541]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 23 04:25:01 localhost python3.9[211604]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 23 04:25:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62677 DF PROTO=TCP SPT=52350 DPT=9105 SEQ=98230973 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012029130000000001030307) Feb 23 04:25:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62678 DF PROTO=TCP SPT=52350 DPT=9105 SEQ=98230973 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01202D030000000001030307) Feb 23 04:25:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62679 DF PROTO=TCP SPT=52350 DPT=9105 SEQ=98230973 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012035040000000001030307) Feb 23 04:25:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 
ID=62680 DF PROTO=TCP SPT=52350 DPT=9105 SEQ=98230973 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012044C30000000001030307) Feb 23 04:25:09 localhost python3.9[211716]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:25:11 localhost python3.9[211828]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi mode=preserve remote_src=True src=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi/ backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:25:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59074 DF PROTO=TCP SPT=40938 DPT=9101 SEQ=395573508 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01204D800000000001030307) Feb 23 04:25:11 localhost python3.9[211938]: ansible-ansible.legacy.command Invoked with _raw_params=mv "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi" "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi.adopted"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:25:12 localhost python3.9[212049]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:25:13 localhost python3.9[212160]: ansible-ansible.legacy.command 
Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:25:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17737 DF PROTO=TCP SPT=57112 DPT=9882 SEQ=1391261456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012059830000000001030307) Feb 23 04:25:14 localhost python3.9[212271]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:25:14 localhost sshd[212291]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:25:15 localhost python3.9[212385]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:25:17 localhost python3.9[212495]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:25:17 localhost systemd[1]: Listening on Open-iSCSI iscsid Socket. 
Feb 23 04:25:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62681 DF PROTO=TCP SPT=52350 DPT=9105 SEQ=98230973 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012065830000000001030307) Feb 23 04:25:18 localhost python3.9[212609]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:25:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:25:18 localhost systemd[1]: Reloading. Feb 23 04:25:18 localhost systemd-rc-local-generator[212643]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:25:18 localhost systemd-sysv-generator[212648]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 04:25:18 localhost podman[212613]: 2026-02-23 09:25:18.24155624 +0000 UTC m=+0.110067126 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2) Feb 23 04:25:18 localhost podman[212613]: 2026-02-23 09:25:18.273091912 +0000 UTC m=+0.141602788 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:25:18 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:18 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:18 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:18 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:25:18 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:18 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:18 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:18 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:18 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:25:18 localhost systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi). Feb 23 04:25:18 localhost systemd[1]: Starting Open-iSCSI... Feb 23 04:25:18 localhost iscsid[212673]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 23 04:25:18 localhost iscsid[212673]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 23 04:25:18 localhost iscsid[212673]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 23 04:25:18 localhost iscsid[212673]: If using hardware iscsi like qla4xxx this message can be ignored. 
Feb 23 04:25:18 localhost iscsid[212673]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 23 04:25:18 localhost iscsid[212673]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 23 04:25:18 localhost iscsid[212673]: iscsid: can't open iscsid.ipc_auth_uid configuration file /etc/iscsi/iscsid.conf Feb 23 04:25:18 localhost systemd[1]: Started Open-iSCSI. Feb 23 04:25:18 localhost systemd[1]: Starting Logout off all iSCSI sessions on shutdown... Feb 23 04:25:18 localhost systemd[1]: Finished Logout off all iSCSI sessions on shutdown. Feb 23 04:25:19 localhost python3.9[212782]: ansible-ansible.builtin.service_facts Invoked Feb 23 04:25:19 localhost network[212799]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 23 04:25:19 localhost network[212800]: 'network-scripts' will be removed from distribution in near future. Feb 23 04:25:19 localhost network[212801]: It is advised to switch to 'NetworkManager' instead for network management. Feb 23 04:25:20 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Feb 23 04:25:20 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Feb 23 04:25:20 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service. Feb 23 04:25:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26255 DF PROTO=TCP SPT=57706 DPT=9100 SEQ=493663267 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012073BF0000000001030307) Feb 23 04:25:21 localhost setroubleshoot[212809]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. 
For complete SELinux messages run: sealert -l d6442277-e896-433a-88a3-3e7c4f780eef Feb 23 04:25:21 localhost setroubleshoot[212809]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Feb 23 04:25:21 localhost setroubleshoot[212809]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l d6442277-e896-433a-88a3-3e7c4f780eef Feb 23 04:25:21 localhost setroubleshoot[212809]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Feb 23 04:25:21 localhost setroubleshoot[212809]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l d6442277-e896-433a-88a3-3e7c4f780eef Feb 23 04:25:21 localhost setroubleshoot[212809]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. 
confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Feb 23 04:25:21 localhost setroubleshoot[212809]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l d6442277-e896-433a-88a3-3e7c4f780eef Feb 23 04:25:21 localhost setroubleshoot[212809]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Feb 23 04:25:21 localhost setroubleshoot[212809]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l d6442277-e896-433a-88a3-3e7c4f780eef Feb 23 04:25:21 localhost setroubleshoot[212809]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. 
confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Feb 23 04:25:21 localhost setroubleshoot[212809]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l d6442277-e896-433a-88a3-3e7c4f780eef Feb 23 04:25:21 localhost setroubleshoot[212809]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Feb 23 04:25:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:25:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46431 DF PROTO=TCP SPT=54780 DPT=9100 SEQ=4237953337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01207B830000000001030307) Feb 23 04:25:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. 
Feb 23 04:25:24 localhost podman[212967]: 2026-02-23 09:25:24.158578921 +0000 UTC m=+0.072798447 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 23 04:25:24 localhost 
podman[212967]: 2026-02-23 09:25:24.190543517 +0000 UTC m=+0.104763023 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:25:24 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:25:25 localhost python3.9[213137]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 23 04:25:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59078 DF PROTO=TCP SPT=40938 DPT=9101 SEQ=395573508 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012089830000000001030307) Feb 23 04:25:29 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 23 04:25:29 localhost systemd[1]: Starting man-db-cache-update.service... Feb 23 04:25:29 localhost systemd[1]: Reloading. Feb 23 04:25:29 localhost systemd-rc-local-generator[213193]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:25:29 localhost systemd-sysv-generator[213198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 04:25:29 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:29 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:29 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:29 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:25:29 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:29 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:29 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:29 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:29 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 23 04:25:29 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 23 04:25:29 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 23 04:25:29 localhost systemd[1]: Finished man-db-cache-update.service. Feb 23 04:25:29 localhost systemd[1]: run-rd984a38ffbc240089a017790af347029.service: Deactivated successfully. Feb 23 04:25:29 localhost systemd[1]: Starting man-db-cache-update.service... Feb 23 04:25:29 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. 
Feb 23 04:25:29 localhost systemd[1]: Finished man-db-cache-update.service. Feb 23 04:25:29 localhost systemd[1]: run-rb87acc308bbb4134ab4a27d51a4df5dc.service: Deactivated successfully. Feb 23 04:25:31 localhost python3.9[213450]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 23 04:25:31 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully. Feb 23 04:25:31 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. Feb 23 04:25:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23124 DF PROTO=TCP SPT=52128 DPT=9105 SEQ=536702118 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01209E430000000001030307) Feb 23 04:25:32 localhost python3.9[213561]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Feb 23 04:25:32 localhost python3.9[213675]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:25:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23125 DF PROTO=TCP SPT=52128 DPT=9105 SEQ=536702118 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0120A2430000000001030307) Feb 23 04:25:33 localhost python3.9[213763]: 
ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838732.441013-484-81568653953872/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:25:34 localhost python3.9[213873]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:25:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23126 DF PROTO=TCP SPT=52128 DPT=9105 SEQ=536702118 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0120AA430000000001030307) Feb 23 04:25:35 localhost python3.9[213983]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 04:25:35 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 23 04:25:35 localhost systemd[1]: Stopped Load Kernel Modules. Feb 23 04:25:35 localhost systemd[1]: Stopping Load Kernel Modules... Feb 23 04:25:35 localhost systemd[1]: Starting Load Kernel Modules... Feb 23 04:25:35 localhost systemd-modules-load[213987]: Module 'msr' is built in Feb 23 04:25:35 localhost systemd[1]: Finished Load Kernel Modules. 
Feb 23 04:25:36 localhost python3.9[214097]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:25:36 localhost python3.9[214208]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:25:37 localhost python3.9[214318]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:25:37 localhost sshd[214319]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:25:38 localhost python3.9[214408]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838737.1196141-637-139410668885857/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:25:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23127 DF PROTO=TCP SPT=52128 DPT=9105 SEQ=536702118 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0120BA040000000001030307) Feb 23 04:25:39 localhost python3.9[214518]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None 
chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:25:40 localhost python3.9[214629]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:25:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1028 DF PROTO=TCP SPT=60532 DPT=9101 SEQ=1996521061 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0120C2B00000000001030307) Feb 23 04:25:41 localhost python3.9[214739]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:25:42 localhost python3.9[214849]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:25:43 localhost python3.9[214959]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:25:43 localhost 
python3.9[215069]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:25:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28940 DF PROTO=TCP SPT=33520 DPT=9102 SEQ=3738057708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0120CD830000000001030307) Feb 23 04:25:44 localhost python3.9[215179]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:25:45 localhost python3.9[215289]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:25:45 localhost python3.9[215399]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:25:46 localhost python3.9[215511]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true 
_uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:25:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23128 DF PROTO=TCP SPT=52128 DPT=9105 SEQ=536702118 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0120D9830000000001030307) Feb 23 04:25:47 localhost python3.9[215622]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:25:47 localhost systemd[1]: Listening on multipathd control socket. Feb 23 04:25:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:25:48.279 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:25:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:25:48.280 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:25:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:25:48.280 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:25:48 localhost python3.9[215736]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False 
daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:25:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:25:48 localhost systemd[1]: Starting Wait for udev To Complete Device Initialization... Feb 23 04:25:48 localhost udevadm[215742]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in. Feb 23 04:25:48 localhost systemd[1]: Finished Wait for udev To Complete Device Initialization. Feb 23 04:25:48 localhost systemd[1]: Starting Device-Mapper Multipath Device Controller... Feb 23 04:25:48 localhost multipathd[215756]: --------start up-------- Feb 23 04:25:48 localhost multipathd[215756]: read /etc/multipath.conf Feb 23 04:25:48 localhost multipathd[215756]: path checkers start up Feb 23 04:25:48 localhost systemd[1]: Started Device-Mapper Multipath Device Controller. Feb 23 04:25:48 localhost podman[215741]: 2026-02-23 09:25:48.560733342 +0000 UTC m=+0.092079782 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, tcib_managed=true, container_name=ovn_controller) Feb 23 04:25:48 localhost podman[215741]: 2026-02-23 09:25:48.626862281 +0000 UTC m=+0.158208651 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 23 04:25:48 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:25:49 localhost python3.9[215888]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 23 04:25:50 localhost python3.9[215998]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Feb 23 04:25:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2350 DF PROTO=TCP SPT=46966 DPT=9100 SEQ=3005293478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0120E8EF0000000001030307) Feb 23 04:25:51 localhost python3.9[216116]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:25:51 localhost python3.9[216204]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838750.5356848-1027-272850080000218/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None 
serole=None selevel=None setype=None attributes=None Feb 23 04:25:53 localhost python3.9[216314]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:25:53 localhost python3.9[216424]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 04:25:54 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 23 04:25:54 localhost systemd[1]: Stopped Load Kernel Modules. Feb 23 04:25:54 localhost systemd[1]: Stopping Load Kernel Modules... Feb 23 04:25:54 localhost systemd[1]: Starting Load Kernel Modules... Feb 23 04:25:54 localhost systemd-modules-load[216428]: Module 'msr' is built in Feb 23 04:25:54 localhost systemd[1]: Finished Load Kernel Modules. Feb 23 04:25:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2352 DF PROTO=TCP SPT=46966 DPT=9100 SEQ=3005293478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0120F5040000000001030307) Feb 23 04:25:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:25:55 localhost systemd[1]: tmp-crun.OsALAM.mount: Deactivated successfully. 
Feb 23 04:25:55 localhost podman[216484]: 2026-02-23 09:25:55.014498395 +0000 UTC m=+0.087275268 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Feb 23 04:25:55 localhost 
podman[216484]: 2026-02-23 09:25:55.027984601 +0000 UTC m=+0.100761504 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent) Feb 23 04:25:55 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:25:55 localhost python3.9[216555]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 23 04:25:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1032 DF PROTO=TCP SPT=60532 DPT=9101 SEQ=1996521061 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0120FF830000000001030307) Feb 23 04:25:59 localhost systemd[1]: Reloading. Feb 23 04:25:59 localhost systemd-sysv-generator[216594]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:25:59 localhost systemd-rc-local-generator[216588]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:59 localhost systemd[1]: Reloading. Feb 23 04:25:59 localhost systemd-sysv-generator[216627]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:25:59 localhost systemd-rc-local-generator[216624]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:59 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:25:59 localhost journal[205676]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, ) Feb 23 04:25:59 localhost journal[205676]: hostname: np0005626465.localdomain Feb 23 04:25:59 localhost journal[205676]: nl_recv returned with error: No buffer space available Feb 23 04:25:59 localhost systemd-logind[759]: Watching system buttons on /dev/input/event0 (Power Button) Feb 23 04:25:59 localhost systemd-logind[759]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Feb 23 04:25:59 localhost lvm[216676]: PV /dev/loop4 online, VG ceph_vg1 is complete. 
Feb 23 04:26:00 localhost lvm[216677]: PV /dev/loop3 online, VG ceph_vg0 is complete. Feb 23 04:26:00 localhost lvm[216676]: VG ceph_vg1 finished Feb 23 04:26:00 localhost lvm[216677]: VG ceph_vg0 finished Feb 23 04:26:00 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 23 04:26:00 localhost systemd[1]: Starting man-db-cache-update.service... Feb 23 04:26:00 localhost systemd[1]: Reloading. Feb 23 04:26:00 localhost systemd-sysv-generator[216730]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:26:00 localhost systemd-rc-local-generator[216727]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:26:00 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:00 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:00 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:00 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:26:00 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:00 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:00 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:00 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:00 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 23 04:26:01 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 23 04:26:01 localhost systemd[1]: Finished man-db-cache-update.service. Feb 23 04:26:01 localhost systemd[1]: man-db-cache-update.service: Consumed 1.153s CPU time. Feb 23 04:26:01 localhost systemd[1]: run-r1754c087ccc44c6ca274f7e6e44b0ed4.service: Deactivated successfully. 
Feb 23 04:26:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61433 DF PROTO=TCP SPT=46978 DPT=9105 SEQ=962679652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012113740000000001030307) Feb 23 04:26:02 localhost sshd[217945]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:26:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61434 DF PROTO=TCP SPT=46978 DPT=9105 SEQ=962679652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012117830000000001030307) Feb 23 04:26:03 localhost python3.9[217987]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 04:26:03 localhost multipathd[215756]: exit (signal) Feb 23 04:26:03 localhost multipathd[215756]: --------shut down------- Feb 23 04:26:03 localhost systemd[1]: Stopping Device-Mapper Multipath Device Controller... Feb 23 04:26:03 localhost systemd[1]: multipathd.service: Deactivated successfully. Feb 23 04:26:03 localhost systemd[1]: Stopped Device-Mapper Multipath Device Controller. Feb 23 04:26:03 localhost systemd[1]: Starting Device-Mapper Multipath Device Controller... Feb 23 04:26:03 localhost multipathd[217993]: --------start up-------- Feb 23 04:26:03 localhost multipathd[217993]: read /etc/multipath.conf Feb 23 04:26:03 localhost multipathd[217993]: path checkers start up Feb 23 04:26:03 localhost systemd[1]: Started Device-Mapper Multipath Device Controller. 
Feb 23 04:26:04 localhost python3.9[218109]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:26:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61435 DF PROTO=TCP SPT=46978 DPT=9105 SEQ=962679652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01211F830000000001030307) Feb 23 04:26:05 localhost python3.9[218223]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:26:06 localhost systemd[1]: virtnodedevd.service: Deactivated successfully. Feb 23 04:26:07 localhost python3.9[218334]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 23 04:26:07 localhost systemd[1]: Reloading. Feb 23 04:26:07 localhost systemd-rc-local-generator[218359]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:26:07 localhost systemd-sysv-generator[218364]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 04:26:07 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:07 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:07 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:07 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:26:07 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:07 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:07 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:07 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:07 localhost systemd[1]: virtproxyd.service: Deactivated successfully. Feb 23 04:26:08 localhost python3.9[218479]: ansible-ansible.builtin.service_facts Invoked Feb 23 04:26:08 localhost network[218496]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 23 04:26:08 localhost network[218497]: 'network-scripts' will be removed from distribution in near future. Feb 23 04:26:08 localhost network[218498]: It is advised to switch to 'NetworkManager' instead for network management. 
Feb 23 04:26:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61436 DF PROTO=TCP SPT=46978 DPT=9105 SEQ=962679652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01212F430000000001030307) Feb 23 04:26:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:26:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58358 DF PROTO=TCP SPT=51016 DPT=9101 SEQ=1974310853 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012137E00000000001030307) Feb 23 04:26:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37580 DF PROTO=TCP SPT=41582 DPT=9882 SEQ=319836088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012141830000000001030307) Feb 23 04:26:15 localhost python3.9[218731]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:26:16 localhost python3.9[218842]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:26:17 localhost python3.9[218953]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:26:17 localhost kernel: 
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61437 DF PROTO=TCP SPT=46978 DPT=9105 SEQ=962679652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01214F830000000001030307) Feb 23 04:26:17 localhost systemd[1]: virtqemud.service: Deactivated successfully. Feb 23 04:26:17 localhost systemd[1]: virtsecretd.service: Deactivated successfully. Feb 23 04:26:17 localhost sshd[219028]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:26:18 localhost python3.9[219068]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:26:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:26:19 localhost podman[219098]: 2026-02-23 09:26:19.016146428 +0000 UTC m=+0.084301680 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.43.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 23 04:26:19 localhost podman[219098]: 2026-02-23 09:26:19.082857112 +0000 UTC m=+0.151012364 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
tcib_managed=true, org.label-schema.schema-version=1.0) Feb 23 04:26:19 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:26:19 localhost python3.9[219204]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:26:20 localhost python3.9[219315]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:26:20 localhost python3.9[219426]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:26:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5999 DF PROTO=TCP SPT=57684 DPT=9100 SEQ=651822153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01215E1F0000000001030307) Feb 23 04:26:21 localhost python3.9[219537]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:26:22 localhost python3.9[219648]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Feb 23 04:26:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2355 DF PROTO=TCP SPT=46966 DPT=9100 SEQ=3005293478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012165840000000001030307) Feb 23 04:26:23 localhost python3.9[219758]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:26:23 localhost python3.9[219868]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:26:24 localhost python3.9[219978]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:26:24 localhost python3.9[220088]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:26:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:26:25 localhost systemd[1]: tmp-crun.OxAoGl.mount: Deactivated successfully. Feb 23 04:26:25 localhost podman[220212]: 2026-02-23 09:26:25.482498423 +0000 UTC m=+0.077607412 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 23 04:26:25 localhost podman[220212]: 2026-02-23 09:26:25.516940738 +0000 UTC m=+0.112049717 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:26:25 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:26:25 localhost python3.9[220213]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:26:26 localhost python3.9[220381]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:26:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58362 DF PROTO=TCP SPT=51016 DPT=9101 SEQ=1974310853 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012173830000000001030307) Feb 23 04:26:26 localhost python3.9[220505]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent 
recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:26:28 localhost python3.9[220633]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:26:29 localhost python3.9[220743]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:26:29 localhost python3.9[220853]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:26:30 localhost python3.9[220963]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:26:31 localhost python3.9[221073]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:26:31 localhost python3.9[221183]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:26:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61819 DF PROTO=TCP SPT=59538 DPT=9105 SEQ=1189959434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012188A30000000001030307) Feb 23 04:26:32 localhost python3.9[221293]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:26:33 localhost kernel: 
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61820 DF PROTO=TCP SPT=59538 DPT=9105 SEQ=1189959434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01218CC30000000001030307) Feb 23 04:26:33 localhost python3.9[221403]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:26:33 localhost python3.9[221513]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:26:34 localhost python3.9[221623]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Feb 23 04:26:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61821 DF PROTO=TCP SPT=59538 DPT=9105 SEQ=1189959434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012194C40000000001030307) Feb 23 04:26:35 localhost 
python3.9[221733]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 23 04:26:35 localhost systemd[1]: Reloading. Feb 23 04:26:35 localhost systemd-rc-local-generator[221751]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:26:35 localhost systemd-sysv-generator[221757]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:26:35 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:35 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:35 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:35 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:26:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:26:35 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 23 04:26:35 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:26:35 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:26:35 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:26:36 localhost python3.9[221879]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:26:37 localhost python3.9[221990]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:26:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61822 DF PROTO=TCP SPT=59538 DPT=9105 SEQ=1189959434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0121A4830000000001030307)
Feb 23 04:26:39 localhost python3.9[222101]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:26:39 localhost python3.9[222212]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:26:40 localhost python3.9[222323]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:26:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30452 DF PROTO=TCP SPT=44396 DPT=9101 SEQ=2555359072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0121AD100000000001030307)
Feb 23 04:26:42 localhost python3.9[222434]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:26:43 localhost python3.9[222545]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:26:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30454 DF PROTO=TCP SPT=44396 DPT=9101 SEQ=2555359072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0121B9030000000001030307)
Feb 23 04:26:45 localhost python3.9[222656]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:26:46 localhost python3.9[222767]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:26:47 localhost python3.9[222877]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:26:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61823 DF PROTO=TCP SPT=59538 DPT=9105 SEQ=1189959434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0121C5830000000001030307)
Feb 23 04:26:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:26:48.280 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 23 04:26:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:26:48.281 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 23 04:26:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:26:48.281 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 23 04:26:48 localhost python3.9[222987]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:26:48 localhost python3.9[223097]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:26:49 localhost sshd[223169]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:26:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.
Feb 23 04:26:49 localhost podman[223210]: 2026-02-23 09:26:49.461300479 +0000 UTC m=+0.085593679 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260216, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 23 04:26:49 localhost podman[223210]: 2026-02-23 09:26:49.50008656 +0000 UTC m=+0.124379800 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 23 04:26:49 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully.
Feb 23 04:26:49 localhost python3.9[223209]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:26:50 localhost python3.9[223344]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:26:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18222 DF PROTO=TCP SPT=39768 DPT=9100 SEQ=4046531265 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0121D34F0000000001030307)
Feb 23 04:26:51 localhost python3.9[223454]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:26:52 localhost python3.9[223564]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:26:53 localhost python3.9[223674]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:26:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18224 DF PROTO=TCP SPT=39768 DPT=9100 SEQ=4046531265 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0121DF430000000001030307)
Feb 23 04:26:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.
Feb 23 04:26:56 localhost systemd[1]: tmp-crun.XMTcgN.mount: Deactivated successfully.
Feb 23 04:26:56 localhost podman[223692]: 2026-02-23 09:26:56.011780157 +0000 UTC m=+0.088069136 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, container_name=ovn_metadata_agent)
Feb 23 04:26:56 localhost podman[223692]: 2026-02-23 09:26:56.041709633 +0000 UTC m=+0.117998542 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team)
Feb 23 04:26:56 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully.
Feb 23 04:26:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30456 DF PROTO=TCP SPT=44396 DPT=9101 SEQ=2555359072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0121E9830000000001030307)
Feb 23 04:27:00 localhost python3.9[223802]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Feb 23 04:27:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1983 DF PROTO=TCP SPT=41392 DPT=9105 SEQ=1450405880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0121FDD30000000001030307)
Feb 23 04:27:02 localhost python3.9[223913]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 23 04:27:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1984 DF PROTO=TCP SPT=41392 DPT=9105 SEQ=1450405880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012201C30000000001030307)
Feb 23 04:27:03 localhost python3.9[224029]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005626465.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 23 04:27:04 localhost sshd[224055]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:27:04 localhost systemd-logind[759]: New session 54 of user zuul.
Feb 23 04:27:04 localhost systemd[1]: Started Session 54 of User zuul.
Feb 23 04:27:04 localhost systemd[1]: session-54.scope: Deactivated successfully.
Feb 23 04:27:04 localhost systemd-logind[759]: Session 54 logged out. Waiting for processes to exit.
Feb 23 04:27:04 localhost systemd-logind[759]: Removed session 54.
Feb 23 04:27:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1985 DF PROTO=TCP SPT=41392 DPT=9105 SEQ=1450405880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012209C30000000001030307)
Feb 23 04:27:05 localhost python3.9[224166]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:27:05 localhost python3.9[224221]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:27:06 localhost python3.9[224329]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:27:06 localhost python3.9[224415]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771838826.0805545-2629-97530637282528/.source _original_basename=ssh-config follow=False checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:27:07 localhost python3.9[224523]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:27:08 localhost python3.9[224609]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771838827.1375167-2629-379693744113/.source.py _original_basename=nova_statedir_ownership.py follow=False checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:27:08 localhost python3.9[224717]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:27:09 localhost python3.9[224803]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771838828.1596158-2629-11707158408035/.source _original_basename=run-on-host follow=False checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:27:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1986 DF PROTO=TCP SPT=41392 DPT=9105 SEQ=1450405880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012219840000000001030307)
Feb 23 04:27:09 localhost python3.9[224911]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:27:10 localhost python3.9[224997]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771838829.392267-2792-51063973409108/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=95a2c7dca6af5923d2d1d47aee71aa571417ed85 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:27:10 localhost python3.9[225107]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:27:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55300 DF PROTO=TCP SPT=36820 DPT=9101 SEQ=2555560889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012222400000000001030307)
Feb 23 04:27:11 localhost python3.9[225217]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:27:12 localhost python3.9[225327]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 23 04:27:12 localhost python3.9[225439]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:27:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10013 DF PROTO=TCP SPT=58948 DPT=9102 SEQ=3683043886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01222D830000000001030307)
Feb 23 04:27:14 localhost python3.9[225547]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 23 04:27:15 localhost python3.9[225659]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:27:16 localhost python3.9[225769]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 23 04:27:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1987 DF PROTO=TCP SPT=41392 DPT=9105 SEQ=1450405880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012239830000000001030307)
Feb 23 04:27:17 localhost python3.9[225877]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute_init state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:27:19 localhost python3.9[226181]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute_init config_pattern=*.json debug=False
Feb 23 04:27:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.
Feb 23 04:27:20 localhost podman[226199]: 2026-02-23 09:27:20.018761107 +0000 UTC m=+0.090288717 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 23 04:27:20 localhost podman[226199]: 2026-02-23 09:27:20.12519331 +0000 UTC m=+0.196720900 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 23 04:27:20 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully.
Feb 23 04:27:20 localhost python3.9[226315]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 23 04:27:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43149 DF PROTO=TCP SPT=47834 DPT=9100 SEQ=1499084772 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0122487F0000000001030307) Feb 23 04:27:21 localhost python3[226425]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute_init config_id=nova_compute_init config_overrides={} config_patterns=*.json containers=['nova_compute_init'] log_base_path=/var/log/containers/stdouts debug=False Feb 23 04:27:21 localhost podman[226462]: Feb 23 04:27:21 localhost podman[226462]: 2026-02-23 09:27:21.945027337 +0000 UTC m=+0.070545600 container create 3be7d315f599dc812d2d03af87c7fb706be7c60341f29f7efc5a148f2848a784 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=nova_compute_init, config_id=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', 
'/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true) Feb 23 04:27:21 localhost podman[226462]: 2026-02-23 09:27:21.913490067 +0000 UTC m=+0.039008370 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Feb 23 04:27:21 localhost python3[226425]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --env EDPM_CONFIG_HASH=6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52 --label config_id=nova_compute_init --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume 
/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init Feb 23 04:27:22 localhost python3.9[226609]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:27:24 localhost python3.9[226719]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 23 04:27:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43151 DF PROTO=TCP SPT=47834 DPT=9100 SEQ=1499084772 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012254840000000001030307) Feb 23 04:27:25 localhost python3.9[226829]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:27:25 localhost python3.9[226919]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838844.8614464-3262-163218365794703/.source.yaml _original_basename=.4swp9ld4 follow=False checksum=dde8f4b0d63c380bd7f7596e7df827a8064c101b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:27:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55304 DF PROTO=TCP 
SPT=36820 DPT=9101 SEQ=2555560889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01225D830000000001030307) Feb 23 04:27:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:27:26 localhost systemd[1]: tmp-crun.vOSOsP.mount: Deactivated successfully. Feb 23 04:27:26 localhost podman[227030]: 2026-02-23 09:27:26.781535702 +0000 UTC m=+0.087254314 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216) Feb 23 04:27:26 localhost podman[227030]: 2026-02-23 09:27:26.789798256 +0000 UTC m=+0.095516898 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260216, 
io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:27:26 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:27:26 localhost python3.9[227029]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:27:27 localhost python3.9[227193]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:27:28 localhost python3.9[227335]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:27:28 localhost python3.9[227443]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/nova_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838847.7706256-3361-107345810212546/.source.json _original_basename=.t5zh8ti5 follow=False checksum=0018389a48392615f4a8869cad43008a907328ff backup=False force=True remote_src=False unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:27:29 localhost python3.9[227551]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:27:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31611 DF PROTO=TCP SPT=50898 DPT=9105 SEQ=4073563396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012273030000000001030307) Feb 23 04:27:32 localhost python3.9[227855]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute config_pattern=*.json debug=False Feb 23 04:27:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31612 DF PROTO=TCP SPT=50898 DPT=9105 SEQ=4073563396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012277030000000001030307) Feb 23 04:27:33 localhost python3.9[227965]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 23 04:27:34 localhost python3[228075]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute config_id=nova_compute config_overrides={} config_patterns=*.json containers=['nova_compute'] log_base_path=/var/log/containers/stdouts debug=False 
Feb 23 04:27:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31613 DF PROTO=TCP SPT=50898 DPT=9105 SEQ=4073563396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01227F030000000001030307) Feb 23 04:27:35 localhost python3[228075]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "72feed39d002da96e9458f5df3225bc8b72f1ae28f906a4ea01e253f86aab9e3",#012 "Digest": "sha256:60339e5e0cd7bfe18718bee79174c18ef91b932586fd96f01b9799d5d120385d",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:60339e5e0cd7bfe18718bee79174c18ef91b932586fd96f01b9799d5d120385d"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2026-02-23T06:27:42.035349623Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.43.0",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260216",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "8419493e1fd846703d277695e03fc5eb",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1216089983,#012 "VirtualSize": 1216089983,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/239567307c66a10c4dd721df6a9263fcc38501437d275d2b4907c616b635d111/diff:/var/lib/containers/storage/overlay/0455f1f13172510bfb03afa514ad1dc5f28a2039a4c0ae85e44e0cde63814ca4/diff:/var/lib/containers/storage/overlay/882df85a0cf43e46bc799aafd5ff81035654b304c2fef5dbd26c9dd0c2e9fcc3/diff:/var/lib/containers/storage/overlay/d9f14c75a7289cf010d2e5175c554193dba109f864fe39fc418f3bc5b90efe9d/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/7e5a6b3af0e35b266ef2f57ba1f524615772066427004655f3e99c4f9072865c/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/7e5a6b3af0e35b266ef2f57ba1f524615772066427004655f3e99c4f9072865c/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:d9f14c75a7289cf010d2e5175c554193dba109f864fe39fc418f3bc5b90efe9d",#012 "sha256:6eb5d45c6942983139aec78264b4b68bafe46465bb40e2bb4c09e78dad8ba6c0",#012 "sha256:9a59f9675e4fdfdb0eaa24dcce26bed374feef6430ea888b6f5ef1274a95bd90",#012 "sha256:5511acb0625eca242fd47549a8bafd7826358a029c48a9158ddd6fa2b7e0b86d",#012 "sha256:1f1e90f8b2058c74071fe0298f6d20f4d1edbde3bdd940d26fcd35c036f677a8"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.43.0",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260216",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "8419493e1fd846703d277695e03fc5eb",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2026-02-17T01:25:07.246646992Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:d064f128d9bf147a386d5c0e8c2e8a6f698c81fb4e2404e09afe5ef1e1d3b529 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-17T01:25:07.246739119Z",#012 "created_by": 
"/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20260216\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-17T01:25:12.132997501Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2026-02-23T06:08:39.081651802Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081666472Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081677733Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081688343Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081701553Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081710413Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.413481757Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:09:13.490649497Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 
'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Feb 23 04:27:35 localhost podman[228126]: 2026-02-23 09:27:35.220305181 +0000 UTC m=+0.086945654 container remove 87400ab8c7ce6524b30ac3385026a2834a72ee48bf77d3af30bad7182016ccea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, tcib_managed=true, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'a2261a69f76ac41646722c019ecc270e-d8e86b11aed37635c57249fefb951044'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, version=17.1.13) Feb 23 04:27:35 localhost python3[228075]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute Feb 23 04:27:35 localhost podman[228139]: Feb 23 04:27:35 localhost 
podman[228139]: 2026-02-23 09:27:35.325573118 +0000 UTC m=+0.087327726 container create 6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=nova_compute, org.label-schema.vendor=CentOS) Feb 23 04:27:35 localhost podman[228139]: 2026-02-23 09:27:35.284105622 +0000 UTC 
m=+0.045860280 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Feb 23 04:27:35 localhost python3[228075]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52 --label config_id=nova_compute --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume 
/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start Feb 23 04:27:35 localhost sshd[228284]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:27:36 localhost python3.9[228289]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:27:36 localhost python3.9[228401]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:27:37 localhost python3.9[228456]: ansible-stat Invoked with path=/etc/systemd/system/edpm_nova_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:27:37 localhost python3.9[228565]: ansible-copy 
Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771838857.3692687-3595-97825801809008/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:27:38 localhost python3.9[228620]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 23 04:27:38 localhost systemd[1]: Reloading. Feb 23 04:27:38 localhost systemd-sysv-generator[228646]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:27:38 localhost systemd-rc-local-generator[228641]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:27:38 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:27:38 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:27:38 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:27:38 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:27:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:27:38 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:27:38 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:27:38 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:27:38 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:27:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31614 DF PROTO=TCP SPT=50898 DPT=9105 SEQ=4073563396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01228EC30000000001030307) Feb 23 04:27:39 localhost python3.9[228711]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:27:39 localhost systemd[1]: Reloading. Feb 23 04:27:39 localhost systemd-sysv-generator[228740]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:27:39 localhost systemd-rc-local-generator[228735]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 23 04:27:39 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:27:39 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 23 04:27:39 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:27:39 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:27:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 04:27:39 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 23 04:27:39 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:27:39 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:27:39 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:27:39 localhost systemd[1]: Starting nova_compute container...
Feb 23 04:27:40 localhost systemd[1]: tmp-crun.M7gTmu.mount: Deactivated successfully.
Feb 23 04:27:40 localhost systemd[1]: Started libcrun container.
Feb 23 04:27:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ca6780693012a594e87959f4dd1a217af451f5d8176199e7296056d94fc104/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 23 04:27:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ca6780693012a594e87959f4dd1a217af451f5d8176199e7296056d94fc104/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 23 04:27:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ca6780693012a594e87959f4dd1a217af451f5d8176199e7296056d94fc104/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 23 04:27:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ca6780693012a594e87959f4dd1a217af451f5d8176199e7296056d94fc104/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 23 04:27:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ca6780693012a594e87959f4dd1a217af451f5d8176199e7296056d94fc104/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 23 04:27:40 localhost podman[228752]: 2026-02-23 09:27:40.064226984 +0000 UTC m=+0.114255774 container init 6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image':
'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, container_name=nova_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 23 04:27:40 localhost podman[228752]: 2026-02-23 09:27:40.074212591 +0000 UTC m=+0.124241381 container start 6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True,
'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=nova_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216)
Feb 23 04:27:40 localhost podman[228752]: nova_compute
Feb 23 04:27:40 localhost nova_compute[228765]: + sudo -E kolla_set_configs
Feb 23 04:27:40 localhost systemd[1]: Started nova_compute container.
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Validating config file
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Copying service configuration files
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Copying /var/lib/kolla/config_files/src/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 23 04:27:40 localhost nova_compute[228765]:
INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Deleting /etc/ceph
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Creating directory /etc/ceph
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Setting permission for /etc/ceph
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 23 04:27:40 localhost nova_compute[228765]:
INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Writing out command to execute
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 23 04:27:40 localhost nova_compute[228765]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 23 04:27:40 localhost nova_compute[228765]: ++ cat /run_command
Feb 23 04:27:40 localhost nova_compute[228765]: + CMD=nova-compute
Feb 23 04:27:40 localhost nova_compute[228765]: + ARGS=
Feb 23 04:27:40 localhost nova_compute[228765]: + sudo kolla_copy_cacerts
Feb 23 04:27:40 localhost nova_compute[228765]: + [[ ! -n '' ]]
Feb 23 04:27:40 localhost nova_compute[228765]: + . kolla_extend_start
Feb 23 04:27:40 localhost nova_compute[228765]: + echo 'Running command: '\''nova-compute'\'''
Feb 23 04:27:40 localhost nova_compute[228765]: Running command: 'nova-compute'
Feb 23 04:27:40 localhost nova_compute[228765]: + umask 0022
Feb 23 04:27:40 localhost nova_compute[228765]: + exec nova-compute
Feb 23 04:27:41 localhost systemd[1]: tmp-crun.NbRRSm.mount: Deactivated successfully.
Feb 23 04:27:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11468 DF PROTO=TCP SPT=41566 DPT=9101 SEQ=3204959210 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012297700000000001030307)
Feb 23 04:27:41 localhost nova_compute[228765]: 2026-02-23 09:27:41.756 228769 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 23 04:27:41 localhost nova_compute[228765]: 2026-02-23 09:27:41.756 228769 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 23 04:27:41 localhost nova_compute[228765]: 2026-02-23 09:27:41.757 228769 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 23 04:27:41 localhost nova_compute[228765]: 2026-02-23 09:27:41.757 228769 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Feb 23 04:27:41 localhost nova_compute[228765]: 2026-02-23 09:27:41.880 228769 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 23 04:27:41 localhost nova_compute[228765]: 2026-02-23 09:27:41.903 228769 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 23 04:27:41 localhost nova_compute[228765]: 2026-02-23 09:27:41.903 228769 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying.
execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Feb 23 04:27:41 localhost python3.9[228887]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.340 228769 INFO nova.virt.driver [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.457 228769 INFO nova.compute.provider_config [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.492 228769 WARNING nova.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.492 228769 DEBUG oslo_concurrency.lockutils [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.493 228769 DEBUG oslo_concurrency.lockutils [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.493 228769 DEBUG oslo_concurrency.lockutils [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 -
- - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.493 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.494 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.494 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.494 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.494 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.494 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.494 228769
DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.495 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.495 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.495 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.495 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.495 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.496 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.496 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - -
- - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.496 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.496 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.496 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.496 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.497 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.497 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] console_host = np0005626465.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.497 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -]
control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.497 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.497 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.498 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.498 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.498 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.498 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.498 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN',
'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.499 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.499 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.499 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.499 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.499 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]:
2026-02-23 09:27:42.500 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.500 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.500 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.500 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.501 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.501 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] host = np0005626465.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.501 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.501 228769 DEBUG oslo_service.service [None
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.502 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.502 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.502 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.502 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.502 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.503 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.503 228769 DEBUG
oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.503 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.503 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.503 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.504 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.504 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.504 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.504 228769 DEBUG oslo_service.service [None
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.504 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.505 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.505 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.505 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.505 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.505 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.505 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] log_rotation_type = size log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.506 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.506 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.506 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.506 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.506 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.506 228769 DEBUG oslo_service.service [None 
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.507 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.507 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.507 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.507 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.507 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.508 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.508 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] maximum_instance_delete_attempts 
= 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.508 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.508 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.508 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.508 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.509 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.509 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] my_block_storage_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.509 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] my_ip = 192.168.122.107 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.509 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.509 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.510 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.510 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.510 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.510 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.510 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] password_length = 12 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.511 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.511 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.511 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.511 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.511 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.512 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.512 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 
localhost nova_compute[228765]: 2026-02-23 09:27:42.512 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.512 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.512 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.512 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.513 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.513 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.513 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.513 228769 DEBUG oslo_service.service [None 
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.513 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.514 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.514 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.514 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.514 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.514 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.514 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] resize_fs_using_block_device = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.515 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.515 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.515 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.515 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.515 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.516 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.516 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.516 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.516 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.516 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.516 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.517 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.517 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.517 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost 
nova_compute[228765]: 2026-02-23 09:27:42.517 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.517 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.518 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.518 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.518 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.518 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.518 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.518 228769 DEBUG oslo_service.service [None 
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.519 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.519 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.519 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.519 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.519 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.520 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.520 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] use_stderr = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.520 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.520 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.520 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.520 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.521 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.521 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.521 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 
2026-02-23 09:27:42.521 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.521 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.522 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.522 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.522 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.522 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.522 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.523 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.523 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.523 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.523 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.523 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.524 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.524 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] 
api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.524 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.524 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.524 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.525 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.525 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.525 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.525 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] 
api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.525 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.525 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.526 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.526 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.526 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.526 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.526 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] 
api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.527 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.527 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.527 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.527 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.527 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.528 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.528 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.dead_timeout = 60.0 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.528 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.528 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.528 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.529 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.529 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.529 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.529 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.529 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.530 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.530 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.530 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.530 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.530 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.530 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.531 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.531 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.531 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.531 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.531 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.532 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.532 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 
23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.532 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.532 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.532 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.532 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.533 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.533 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.533 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.533 228769 
DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.533 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.534 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.534 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.534 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.534 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.534 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.535 228769 DEBUG oslo_service.service [None 
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.535 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.535 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.535 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.535 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.535 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cinder.os_region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.535 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.536 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cinder.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.536 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.536 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.536 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.536 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.536 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.536 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.537 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] compute.max_disk_devices_to_attach = -1 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.537 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.537 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.537 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.537 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.537 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.537 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.537 228769 DEBUG oslo_service.service [None 
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.538 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.538 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.538 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.538 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.538 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.538 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.538 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.connect_retries = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.539 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.539 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.539 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.539 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.539 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.539 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.539 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.539 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.540 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.540 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.540 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.540 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.540 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.540 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost 
nova_compute[228765]: 2026-02-23 09:27:42.540 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.541 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.541 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.541 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.541 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.541 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.541 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.541 228769 DEBUG 
oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.541 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.542 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.542 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.542 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.542 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.542 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.542 228769 DEBUG oslo_service.service [None 
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.542 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.543 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.543 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.543 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.543 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.543 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.543 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 
- - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.543 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.543 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.544 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.544 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.544 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.544 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.544 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] 
api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.544 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.544 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.545 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.545 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.545 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.545 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.545 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.mysql_sql_mode = 
TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.545 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.545 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.545 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.546 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.546 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.546 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.546 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ephemeral_storage_encryption.cipher = 
aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.546 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.546 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.546 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.547 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.547 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.547 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.547 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.connect_retries = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.547 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.547 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.547 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.548 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.548 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.548 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.548 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.548 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.548 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.548 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.549 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.549 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.549 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.549 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost 
nova_compute[228765]: 2026-02-23 09:27:42.549 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.549 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.549 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.550 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.550 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.550 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.550 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.550 228769 DEBUG 
oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.550 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.550 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.551 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.551 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.551 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.551 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.551 228769 DEBUG oslo_service.service [None 
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.551 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.551 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.552 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.552 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.552 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.552 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.552 228769 DEBUG oslo_service.service [None 
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.552 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.552 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.552 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.553 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.553 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.553 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.553 228769 DEBUG oslo_service.service [None 
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.553 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.553 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.554 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.554 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.554 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.554 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.554 228769 DEBUG oslo_service.service [None 
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.554 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.554 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.555 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.555 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.555 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.555 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.555 228769 DEBUG oslo_service.service [None 
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.555 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.555 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.556 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.556 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.556 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.556 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.556 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.keyfile = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.556 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.556 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.556 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.557 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.557 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.557 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.557 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.557 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.557 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.557 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.558 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.558 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.558 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.558 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 
2026-02-23 09:27:42.558 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.558 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.558 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.558 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.559 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.559 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.559 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.barbican_region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 
2026-02-23 09:27:42.559 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.559 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.559 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.559 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.560 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.560 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.560 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.560 228769 DEBUG oslo_service.service [None 
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.560 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.560 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.560 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.560 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.561 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.561 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.561 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 
- - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.561 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.561 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.561 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.562 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.562 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.562 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.562 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - 
- - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.562 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.562 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.562 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.563 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.563 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.563 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.563 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vault.kv_mountpoint = secret log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.563 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.563 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.563 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.563 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.564 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.564 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.564 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost 
nova_compute[228765]: 2026-02-23 09:27:42.564 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.564 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.564 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.564 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.565 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.565 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.565 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.565 228769 
DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.565 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.565 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.565 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.565 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.566 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.566 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.566 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - 
- -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.566 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.566 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.566 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.567 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.567 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.567 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.567 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.cpu_mode = host-model 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.567 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.567 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.568 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.568 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.568 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.568 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.568 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.device_detach_attempts = 8 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.568 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.568 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.569 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.569 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.569 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.569 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.569 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.hw_disk_discard = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.569 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.569 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.570 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.570 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.570 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.570 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.570 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.images_type 
= rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.570 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.570 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.571 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.571 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.571 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.571 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.571 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.571 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.571 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.572 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.572 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.572 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.572 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.572 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] 
libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.572 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.572 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.572 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.573 228769 WARNING oslo_config.cfg [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Feb 23 04:27:42 localhost nova_compute[228765]: live_migration_uri is deprecated for removal in favor of two other options that Feb 23 04:27:42 localhost nova_compute[228765]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Feb 23 04:27:42 localhost nova_compute[228765]: and ``live_migration_inbound_addr`` respectively. Feb 23 04:27:42 localhost nova_compute[228765]: ). 
Its value may be silently ignored in the future.#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.573 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.573 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.573 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.573 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.574 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.574 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.574 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] 
libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.574 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.574 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.574 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.574 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.575 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.575 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.575 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.quobyte_client_cfg = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.575 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.575 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.575 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.576 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.576 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.rbd_secret_uuid = f1fea371-cb69-578d-a3d0-b5c472a84b46 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.576 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.576 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] 
libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.576 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.576 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.576 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.577 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.577 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.577 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.577 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.smbfs_mount_options = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.577 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.577 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.577 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.578 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.578 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.578 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.578 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.swtpm_group 
= tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.578 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.578 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.579 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.579 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.579 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.579 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.579 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.volume_clear = zero log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.579 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.580 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.580 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.580 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.580 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.580 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.580 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.vzstorage_mount_perms = 0770 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.580 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.581 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.581 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.581 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.581 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.581 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.581 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.581 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.582 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.582 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.582 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.582 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.582 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.583 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.http_retries = 3 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.583 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.583 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.583 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.583 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.583 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.584 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.584 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 
localhost nova_compute[228765]: 2026-02-23 09:27:42.584 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.584 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.584 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.585 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.585 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.585 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.585 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 
09:27:42.585 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.585 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.586 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.586 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.586 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.586 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.586 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.587 228769 
DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.587 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.587 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.587 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.587 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.587 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.587 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.588 228769 DEBUG 
oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.588 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.588 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.588 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.588 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.588 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.588 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.589 228769 DEBUG oslo_service.service [None 
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.589 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.589 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.589 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.589 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.589 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.589 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.589 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.password = 
**** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.590 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.590 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.590 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.590 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.590 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.590 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.591 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.service_type = placement log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.591 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.591 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.591 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.591 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.591 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.591 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.592 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.user_domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.592 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.592 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.592 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.592 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.592 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.592 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.592 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.593 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.593 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.593 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.593 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.593 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.593 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.593 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 
09:27:42.594 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.594 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.594 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.594 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.594 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.594 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.595 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.595 228769 DEBUG oslo_service.service [None 
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.595 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.595 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.595 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.595 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.595 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.596 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 
2026-02-23 09:27:42.596 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.596 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.596 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.596 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.596 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.596 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.597 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.597 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.597 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.597 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.597 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.597 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.597 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.598 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.598 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.598 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.598 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.598 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.598 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.598 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.599 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.599 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.599 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.599 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.599 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.599 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.599 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.600 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.600 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.600 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.600 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.600 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.601 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.601 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.601 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.601 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.601 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.602 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.602 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.602 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.602 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.602 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.602 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.602 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.602 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.603 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.603 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.603 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.603 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.603 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.603 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.604 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.604 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.604 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.604 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.604 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.604 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.604 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.604 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.605 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.605 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.605 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.605 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.605 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.605 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.605 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.606 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.606 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.606 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.606 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.606 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.606 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.606 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.607 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.607 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.607 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.607 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.607 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.607 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.607 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.607 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.608 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.608 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.608 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.608 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.608 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.608 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.608 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.609 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.609 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.609 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.609 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.609 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.609 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.609 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.610 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.610 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.610 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.610 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.610 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.610 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.610 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.611 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.611 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.611 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.611 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.611 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.611 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.612 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.612 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.612 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.612 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.612 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.612 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.612 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.613 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.613 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.613 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.613 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.613 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.613 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.613 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.614 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.614 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.614 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.614 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.614 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.614 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.614 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.615 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.615 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.615 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.615 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.615 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.615 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.615 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.616 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.616 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.616 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.616 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.616 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.616 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.617 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.617 228769 DEBUG oslo_service.service [None
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.617 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.617 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.617 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.617 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.618 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.618 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.618 228769 DEBUG 
oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.618 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.618 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.618 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.618 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.619 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.619 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost 
nova_compute[228765]: 2026-02-23 09:27:42.619 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.619 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.619 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.619 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.620 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.620 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.620 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.620 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.620 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.620 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.620 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.620 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.621 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.621 228769 DEBUG oslo_service.service [None 
req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.621 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.621 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.621 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.621 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.621 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.622 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost 
nova_compute[228765]: 2026-02-23 09:27:42.622 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.622 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.622 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.622 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.622 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.622 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.623 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.623 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.623 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.623 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.623 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.623 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.623 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.624 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_notifications.retry = -1 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.624 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.624 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.624 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.624 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.624 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.624 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.625 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] 
oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.625 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.625 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.625 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.625 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.625 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.625 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.626 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.domain_name = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.626 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.626 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.626 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.626 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.626 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.626 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.626 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.password = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.627 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.627 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.627 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.627 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.627 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.627 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.627 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.service_type = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.628 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.628 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.628 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.628 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.628 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.628 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.628 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.user_domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.628 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.629 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.629 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.629 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.629 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.629 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.629 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.629 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.630 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.630 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.630 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.630 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.630 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.630 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.630 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.630 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.631 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.631 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.631 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.631 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.631 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.631 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.631 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.632 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.632 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.632 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.632 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.632 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.632 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.632 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.633 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.633 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.633 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.633 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.633 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.633 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.633 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.633 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.634 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.634 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.634 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.634 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.634 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.634 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.634 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.635 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.635 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.635 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.635 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.635 228769 DEBUG oslo_service.service [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.636 228769 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260220085704.5cfeecb.el9)
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.783 228769 INFO nova.virt.node [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Determined node identity 9df77b74-d7d6-46a8-93cb-cadec85557a4 from /var/lib/nova/compute_id
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.785 228769 DEBUG nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.786 228769 DEBUG nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.786 228769 DEBUG nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.787 228769 DEBUG nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Connecting to libvirt:
qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 23 04:27:42 localhost systemd[1]: Started libvirt QEMU daemon.
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.863 228769 DEBUG nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.868 228769 DEBUG nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.869 228769 INFO nova.virt.libvirt.driver [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Connection event '1' reason 'None'
Feb 23 04:27:42 localhost nova_compute[228765]: 2026-02-23 09:27:42.888 228769 DEBUG nova.virt.libvirt.volume.mount [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 23 04:27:43 localhost python3.9[229060]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 23 04:27:43 localhost nova_compute[228765]: 2026-02-23 09:27:43.778 228769 INFO nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Libvirt host capabilities
Feb 23 04:27:43 localhost nova_compute[228765]: [host capabilities XML garbled in this capture (markup stripped); surviving values: host UUID 8bb105a9-4892-4676-ace9-e931084902e3; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory 16116612 (page counts 4029153 / 0 / 0); secmodel selinux, doi 0, base labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0; secmodel dac, doi 0, +107:+107; hvm guests for wordsize 32 and wordsize 64 via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (canonical pc), pc-q35-rhel9.8.0 (canonical q35), pc-q35-rhel9.6.0, pc-q35-rhel9.4.0, pc-q35-rhel9.2.0, pc-q35-rhel9.0.0, pc-q35-rhel8.6.0, pc-q35-rhel8.5.0, pc-q35-rhel8.4.0, pc-q35-rhel8.3.0, pc-q35-rhel8.2.0, pc-q35-rhel8.1.0, pc-q35-rhel8.0.0, pc-q35-rhel7.6.0]
Feb 23 04:27:43 localhost nova_compute[228765]: 2026-02-23 09:27:43.789 228769 DEBUG nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 23 04:27:43 localhost nova_compute[228765]: 2026-02-23 09:27:43.810 228769 DEBUG nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 23 04:27:43 localhost nova_compute[228765]: [domain capabilities XML garbled in this capture (markup stripped); surviving values: emulator /usr/libexec/qemu-kvm, domain kvm, machine pc-q35-rhel9.8.0, arch i686, max vcpus 32 and 64; firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd, types rom and pflash, readonly yes/no, secure no, assorted on/off feature toggles; host-model CPU EPYC-Rome, vendor AMD; custom CPU models including 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1 through Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1 through Cascadelake-Server-v5, ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake, ...]
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Cooperlake-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Cooperlake-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Denverton Feb 23 04:27:43 
localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Denverton-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Denverton-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Denverton-v3 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Dhyana Feb 23 04:27:43 localhost nova_compute[228765]: Dhyana-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Dhyana-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Genoa Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 
04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Genoa-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 
localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Genoa-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-IBPB Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Milan Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 
04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Milan-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Milan-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Milan-v3 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Rome Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Rome-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Rome-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Rome-v3 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Rome-v4 Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Rome-v5 Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Turin Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 
localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Turin-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-v1 Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-v2 Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-v3 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-v4 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-v5 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: GraniteRapids Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 
23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: GraniteRapids-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: GraniteRapids-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: GraniteRapids-v3 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Haswell Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: 
Feb 23 04:27:43 localhost nova_compute[228765]: Haswell-IBRS
Feb 23 04:27:43 localhost nova_compute[228765]: Haswell-noTSX
Feb 23 04:27:43 localhost nova_compute[228765]: Haswell-noTSX-IBRS
Feb 23 04:27:43 localhost nova_compute[228765]: Haswell-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Haswell-v2
Feb 23 04:27:43 localhost nova_compute[228765]: Haswell-v3
Feb 23 04:27:43 localhost nova_compute[228765]: Haswell-v4
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server-noTSX
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server-v2
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server-v3
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server-v4
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server-v5
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server-v6
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server-v7
Feb 23 04:27:43 localhost nova_compute[228765]: IvyBridge
Feb 23 04:27:43 localhost nova_compute[228765]: IvyBridge-IBRS
Feb 23 04:27:43 localhost nova_compute[228765]: IvyBridge-v1
Feb 23 04:27:43 localhost nova_compute[228765]: IvyBridge-v2
Feb 23 04:27:43 localhost nova_compute[228765]: KnightsMill
Feb 23 04:27:43 localhost nova_compute[228765]: KnightsMill-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Nehalem
Feb 23 04:27:43 localhost nova_compute[228765]: Nehalem-IBRS
Feb 23 04:27:43 localhost nova_compute[228765]: Nehalem-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Nehalem-v2
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G1
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G1-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G2
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G2-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G3
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G3-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G4
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G4-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G5
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G5-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Penryn
Feb 23 04:27:43 localhost nova_compute[228765]: Penryn-v1
Feb 23 04:27:43 localhost nova_compute[228765]: SandyBridge
Feb 23 04:27:43 localhost nova_compute[228765]: SandyBridge-IBRS
Feb 23 04:27:43 localhost nova_compute[228765]: SandyBridge-v1
Feb 23 04:27:43 localhost nova_compute[228765]: SandyBridge-v2
Feb 23 04:27:43 localhost nova_compute[228765]: SapphireRapids
Feb 23 04:27:43 localhost nova_compute[228765]: SapphireRapids-v1
Feb 23 04:27:43 localhost nova_compute[228765]: SapphireRapids-v2
Feb 23 04:27:43 localhost nova_compute[228765]: SapphireRapids-v3
Feb 23 04:27:43 localhost nova_compute[228765]: SapphireRapids-v4
Feb 23 04:27:43 localhost nova_compute[228765]: SierraForest
Feb 23 04:27:43 localhost nova_compute[228765]: SierraForest-v1
Feb 23 04:27:43 localhost nova_compute[228765]: SierraForest-v2
Feb 23 04:27:43 localhost nova_compute[228765]: SierraForest-v3
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Client
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Client-IBRS
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Client-noTSX-IBRS
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Client-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Client-v2
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Client-v3
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Client-v4
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Server
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Server-IBRS
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Server-noTSX-IBRS
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Server-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Server-v2
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Server-v3
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Server-v4
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Server-v5
Snowridge Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Snowridge-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Snowridge-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Snowridge-v3 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Snowridge-v4 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Westmere Feb 23 04:27:43 localhost nova_compute[228765]: Westmere-IBRS Feb 23 04:27:43 localhost nova_compute[228765]: Westmere-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Westmere-v2 Feb 23 04:27:43 localhost nova_compute[228765]: athlon Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: athlon-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: core2duo Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: core2duo-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: coreduo Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: coreduo-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: kvm32 Feb 23 04:27:43 localhost nova_compute[228765]: kvm32-v1 Feb 23 04:27:43 localhost nova_compute[228765]: kvm64 Feb 23 04:27:43 localhost nova_compute[228765]: kvm64-v1 Feb 23 04:27:43 localhost nova_compute[228765]: n270 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: n270-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: pentium Feb 23 04:27:43 localhost nova_compute[228765]: pentium-v1 Feb 23 04:27:43 localhost nova_compute[228765]: pentium2 Feb 23 04:27:43 localhost nova_compute[228765]: pentium2-v1 Feb 23 04:27:43 localhost nova_compute[228765]: pentium3 Feb 23 04:27:43 localhost nova_compute[228765]: pentium3-v1 Feb 23 04:27:43 localhost nova_compute[228765]: phenom Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: phenom-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: qemu32 Feb 23 04:27:43 localhost nova_compute[228765]: qemu32-v1 Feb 23 04:27:43 localhost nova_compute[228765]: qemu64 Feb 23 04:27:43 localhost nova_compute[228765]: qemu64-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
Feb 23 04:27:43 localhost nova_compute[228765]: [device capabilities from the same dump; group labels inferred from the libvirt domcapabilities schema] memory backing source types: file, anonymous, memfd; disk device types: disk, cdrom, floppy, lun; disk buses: fdc, scsi, virtio, usb, sata; disk models: virtio, virtio-transitional, virtio-non-transitional; graphics types: vnc, egl-headless, dbus; hostdev mode: subsystem; startup policies: default, mandatory, requisite, optional; hostdev subsystem types: usb, pci, scsi; virtio models: virtio, virtio-transitional, virtio-non-transitional; RNG backend models: random, egd, builtin; filesystem driver types: path, handle, virtiofs; TPM models: tpm-tis, tpm-crb; TPM backends: emulator, external; TPM backend version: 2.0; redirdev bus: usb; channel types: pty, unix; further values (grouping uncertain): qemu; builtin; interface backends: default, passt; panic models: isa, hyperv; console/serial types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus; hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; spinlocks retries: 4095; on, off, off; vendor_id: Linux KVM Hv
Feb 23 04:27:43 localhost nova_compute[228765]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 23 04:27:43 localhost nova_compute[228765]: 2026-02-23 09:27:43.820 228769 DEBUG nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 23 04:27:43 localhost nova_compute[228765]: [start of the i686/pc domain-capabilities dump] path: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-i440fx-rhel7.6.0; arch: i686
Feb 23 04:27:43 localhost nova_compute[228765]: [i686 domain-capabilities continued; XML tags and repeated empty log prefixes stripped] firmware loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: no; toggles (labels lost with the tags): on, off; on, off; host CPU model: EPYC-Rome (vendor AMD); supported CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton
localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Denverton-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Denverton-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Denverton-v3 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Dhyana Feb 23 04:27:43 localhost nova_compute[228765]: Dhyana-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Dhyana-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Genoa Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 
04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Genoa-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 
localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Genoa-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-IBPB Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Milan Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 
04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Milan-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Milan-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Milan-v3 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Rome Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Rome-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Rome-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Rome-v3 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Rome-v4 Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Rome-v5 Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Turin Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 
localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Turin-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-v1 Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-v2 Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-v3 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-v4 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-v5 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: GraniteRapids Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 
23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: GraniteRapids-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 
localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: 
Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: GraniteRapids-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 
04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: GraniteRapids-v3 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 
localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Haswell Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Haswell-IBRS Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Haswell-noTSX Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Haswell-noTSX-IBRS Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Haswell-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Haswell-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Haswell-v3 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
Feb 23 04:27:43 localhost nova_compute[228765]: Haswell-v4
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server-noTSX
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server-v2
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server-v3
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server-v4
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server-v5
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server-v6
Feb 23 04:27:43 localhost nova_compute[228765]: Icelake-Server-v7
Feb 23 04:27:43 localhost nova_compute[228765]: IvyBridge
Feb 23 04:27:43 localhost nova_compute[228765]: IvyBridge-IBRS
Feb 23 04:27:43 localhost nova_compute[228765]: IvyBridge-v1
Feb 23 04:27:43 localhost nova_compute[228765]: IvyBridge-v2
Feb 23 04:27:43 localhost nova_compute[228765]: KnightsMill
Feb 23 04:27:43 localhost nova_compute[228765]: KnightsMill-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Nehalem
Feb 23 04:27:43 localhost nova_compute[228765]: Nehalem-IBRS
Feb 23 04:27:43 localhost nova_compute[228765]: Nehalem-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Nehalem-v2
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G1
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G1-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G2
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G2-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G3
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G3-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G4
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G4-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G5
Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G5-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Penryn
Feb 23 04:27:43 localhost nova_compute[228765]: Penryn-v1
Feb 23 04:27:43 localhost nova_compute[228765]: SandyBridge
Feb 23 04:27:43 localhost nova_compute[228765]: SandyBridge-IBRS
Feb 23 04:27:43 localhost nova_compute[228765]: SandyBridge-v1
Feb 23 04:27:43 localhost nova_compute[228765]: SandyBridge-v2
Feb 23 04:27:43 localhost nova_compute[228765]: SapphireRapids
Feb 23 04:27:43 localhost nova_compute[228765]: SapphireRapids-v1
Feb 23 04:27:43 localhost nova_compute[228765]: SapphireRapids-v2
Feb 23 04:27:43 localhost nova_compute[228765]: SapphireRapids-v3
Feb 23 04:27:43 localhost nova_compute[228765]: SapphireRapids-v4
Feb 23 04:27:43 localhost nova_compute[228765]: SierraForest
Feb 23 04:27:43 localhost nova_compute[228765]: SierraForest-v1
Feb 23 04:27:43 localhost nova_compute[228765]: SierraForest-v2
Feb 23 04:27:43 localhost nova_compute[228765]: SierraForest-v3
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Client
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Client-IBRS
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Client-noTSX-IBRS
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Client-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Client-v2
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Client-v3
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Client-v4
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Server
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Server-IBRS
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Server-noTSX-IBRS
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Server-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Server-v2
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Server-v3
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Server-v4
Feb 23 04:27:43 localhost nova_compute[228765]: Skylake-Server-v5
Feb 23 04:27:43 localhost nova_compute[228765]: Snowridge
Feb 23 04:27:43 localhost nova_compute[228765]: Snowridge-v1
Feb 23 04:27:43 localhost nova_compute[228765]: Snowridge-v2
Feb 23 04:27:43 localhost nova_compute[228765]: Snowridge-v3
localhost nova_compute[228765]: Snowridge-v4 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Westmere Feb 23 04:27:43 localhost nova_compute[228765]: Westmere-IBRS Feb 23 04:27:43 localhost nova_compute[228765]: Westmere-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Westmere-v2 Feb 23 04:27:43 localhost nova_compute[228765]: athlon Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: athlon-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: core2duo Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: core2duo-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: coreduo Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: coreduo-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: kvm32 Feb 23 04:27:43 localhost nova_compute[228765]: kvm32-v1 Feb 23 04:27:43 localhost nova_compute[228765]: kvm64 Feb 23 04:27:43 localhost nova_compute[228765]: kvm64-v1 Feb 23 04:27:43 localhost nova_compute[228765]: n270 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: n270-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: pentium Feb 23 04:27:43 localhost nova_compute[228765]: pentium-v1 Feb 23 04:27:43 localhost nova_compute[228765]: pentium2 Feb 23 04:27:43 localhost nova_compute[228765]: pentium2-v1 Feb 23 04:27:43 localhost nova_compute[228765]: pentium3 Feb 23 04:27:43 localhost nova_compute[228765]: pentium3-v1 Feb 23 04:27:43 localhost nova_compute[228765]: phenom Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: phenom-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: qemu32 Feb 23 04:27:43 localhost nova_compute[228765]: qemu32-v1 Feb 23 04:27:43 localhost nova_compute[228765]: qemu64 Feb 23 04:27:43 localhost nova_compute[228765]: qemu64-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: file Feb 23 04:27:43 localhost nova_compute[228765]: anonymous Feb 23 04:27:43 localhost 
nova_compute[228765]: memfd Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: disk Feb 23 04:27:43 localhost nova_compute[228765]: cdrom Feb 23 04:27:43 localhost nova_compute[228765]: floppy Feb 23 04:27:43 localhost nova_compute[228765]: lun Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: ide Feb 23 04:27:43 localhost nova_compute[228765]: fdc Feb 23 04:27:43 localhost nova_compute[228765]: scsi Feb 23 04:27:43 localhost nova_compute[228765]: virtio Feb 23 04:27:43 localhost nova_compute[228765]: usb Feb 23 04:27:43 localhost nova_compute[228765]: sata Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: virtio Feb 23 04:27:43 localhost nova_compute[228765]: virtio-transitional Feb 23 04:27:43 localhost nova_compute[228765]: virtio-non-transitional Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: vnc Feb 23 04:27:43 localhost nova_compute[228765]: egl-headless Feb 23 04:27:43 localhost nova_compute[228765]: dbus Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: subsystem Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: default Feb 
23 04:27:43 localhost nova_compute[228765]: mandatory Feb 23 04:27:43 localhost nova_compute[228765]: requisite Feb 23 04:27:43 localhost nova_compute[228765]: optional Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: usb Feb 23 04:27:43 localhost nova_compute[228765]: pci Feb 23 04:27:43 localhost nova_compute[228765]: scsi Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: virtio Feb 23 04:27:43 localhost nova_compute[228765]: virtio-transitional Feb 23 04:27:43 localhost nova_compute[228765]: virtio-non-transitional Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: random Feb 23 04:27:43 localhost nova_compute[228765]: egd Feb 23 04:27:43 localhost nova_compute[228765]: builtin Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: path Feb 23 04:27:43 localhost nova_compute[228765]: handle Feb 23 04:27:43 localhost nova_compute[228765]: virtiofs Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: tpm-tis Feb 23 04:27:43 localhost nova_compute[228765]: tpm-crb Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: emulator Feb 23 04:27:43 
localhost nova_compute[228765]: external Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: 2.0 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: usb Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: pty Feb 23 04:27:43 localhost nova_compute[228765]: unix Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: qemu Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: builtin Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: default Feb 23 04:27:43 localhost nova_compute[228765]: passt Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: isa Feb 23 04:27:43 localhost nova_compute[228765]: hyperv Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: null Feb 23 04:27:43 localhost nova_compute[228765]: vc Feb 23 04:27:43 localhost nova_compute[228765]: pty Feb 23 04:27:43 localhost nova_compute[228765]: dev Feb 23 04:27:43 localhost nova_compute[228765]: file Feb 23 04:27:43 localhost nova_compute[228765]: pipe Feb 23 04:27:43 localhost nova_compute[228765]: stdio Feb 23 04:27:43 localhost nova_compute[228765]: udp Feb 23 04:27:43 localhost nova_compute[228765]: tcp Feb 23 04:27:43 localhost nova_compute[228765]: unix Feb 23 04:27:43 localhost nova_compute[228765]: qemu-vdagent Feb 23 04:27:43 localhost nova_compute[228765]: dbus Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: relaxed Feb 23 04:27:43 localhost nova_compute[228765]: vapic Feb 23 04:27:43 localhost nova_compute[228765]: spinlocks Feb 23 04:27:43 localhost nova_compute[228765]: vpindex Feb 23 04:27:43 localhost nova_compute[228765]: runtime Feb 23 04:27:43 localhost nova_compute[228765]: synic Feb 23 04:27:43 localhost nova_compute[228765]: stimer Feb 23 04:27:43 localhost nova_compute[228765]: reset Feb 23 04:27:43 localhost nova_compute[228765]: vendor_id Feb 23 04:27:43 localhost nova_compute[228765]: frequencies Feb 23 
04:27:43 localhost nova_compute[228765]: reenlightenment Feb 23 04:27:43 localhost nova_compute[228765]: tlbflush Feb 23 04:27:43 localhost nova_compute[228765]: ipi Feb 23 04:27:43 localhost nova_compute[228765]: avic Feb 23 04:27:43 localhost nova_compute[228765]: emsr_bitmap Feb 23 04:27:43 localhost nova_compute[228765]: xmm_input Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: 4095 Feb 23 04:27:43 localhost nova_compute[228765]: on Feb 23 04:27:43 localhost nova_compute[228765]: off Feb 23 04:27:43 localhost nova_compute[228765]: off Feb 23 04:27:43 localhost nova_compute[228765]: Linux KVM Hv Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 23 04:27:43 localhost nova_compute[228765]: 2026-02-23 09:27:43.887 228769 DEBUG nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Feb 23 04:27:43 localhost nova_compute[228765]: 2026-02-23 09:27:43.892 228769 DEBUG nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: /usr/libexec/qemu-kvm Feb 23 04:27:43 localhost nova_compute[228765]: kvm Feb 23 04:27:43 localhost nova_compute[228765]: pc-i440fx-rhel7.6.0 Feb 23 04:27:43 localhost nova_compute[228765]: x86_64 Feb 23 04:27:43 localhost nova_compute[228765]: 
Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: /usr/share/OVMF/OVMF_CODE.secboot.fd Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: rom Feb 23 04:27:43 localhost nova_compute[228765]: pflash Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: yes Feb 23 04:27:43 localhost nova_compute[228765]: no Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: no Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: on Feb 23 04:27:43 localhost nova_compute[228765]: off Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: on Feb 23 04:27:43 localhost nova_compute[228765]: off Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: EPYC-Rome Feb 23 04:27:43 localhost nova_compute[228765]: AMD Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: 486 Feb 23 04:27:43 localhost nova_compute[228765]: 486-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Broadwell Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Broadwell-IBRS Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Broadwell-noTSX Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 
23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Broadwell-noTSX-IBRS Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Broadwell-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Broadwell-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Broadwell-v3 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Broadwell-v4 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Cascadelake-Server Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Cascadelake-Server-noTSX Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Cascadelake-Server-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 
04:27:43 localhost nova_compute[228765]: Cascadelake-Server-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Cascadelake-Server-v3 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Cascadelake-Server-v4 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 
localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Cascadelake-Server-v5 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: ClearwaterForest Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 
Feb 23 04:27:43 localhost nova_compute[228765]: [condensed: libvirt-reported supported CPU models] ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-Genoa-v2, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Milan-v3, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-Rome-v5, EPYC-Turin, EPYC-Turin-v1, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, EPYC-v5, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, GraniteRapids-v3, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, …
04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: IvyBridge Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: IvyBridge-IBRS Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: IvyBridge-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: IvyBridge-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: KnightsMill Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: KnightsMill-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Nehalem Feb 23 04:27:43 localhost nova_compute[228765]: Nehalem-IBRS Feb 23 04:27:43 localhost nova_compute[228765]: Nehalem-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Nehalem-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G1 Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G1-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G2 Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G2-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G3 Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G3-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G4 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G4-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G5 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Opteron_G5-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Penryn Feb 23 04:27:43 localhost nova_compute[228765]: Penryn-v1 Feb 23 04:27:43 localhost nova_compute[228765]: SandyBridge Feb 23 04:27:43 localhost nova_compute[228765]: SandyBridge-IBRS Feb 23 04:27:43 localhost nova_compute[228765]: SandyBridge-v1 Feb 23 04:27:43 localhost nova_compute[228765]: SandyBridge-v2 Feb 23 04:27:43 localhost nova_compute[228765]: SapphireRapids Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 
localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: SapphireRapids-v1 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: SapphireRapids-v2 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: SapphireRapids-v3 Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost 
nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:43 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: SapphireRapids-v4 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost 
nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 
04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: SierraForest Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: SierraForest-v1 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 
04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: SierraForest-v2 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 
localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: SierraForest-v3 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost 
nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Client Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Client-IBRS Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Client-noTSX-IBRS Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Client-v1 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Client-v2 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 
04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Client-v3 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Client-v4 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Server Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Server-IBRS Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 
localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Server-noTSX-IBRS Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Server-v1 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Server-v2 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost 
Feb 23 04:27:44 localhost nova_compute[228765]: [libvirt domain capabilities XML continued; markup lost in capture, recoverable values preserved in order]
Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1
Feb 23 04:27:44 localhost nova_compute[228765]: file anonymous memfd
Feb 23 04:27:44 localhost nova_compute[228765]: disk cdrom floppy lun
Feb 23 04:27:44 localhost nova_compute[228765]: ide fdc scsi virtio usb sata
Feb 23 04:27:44 localhost nova_compute[228765]: virtio virtio-transitional virtio-non-transitional
Feb 23 04:27:44 localhost nova_compute[228765]: vnc egl-headless dbus
Feb 23 04:27:44 localhost nova_compute[228765]: subsystem
Feb 23 04:27:44 localhost nova_compute[228765]: default mandatory requisite optional
Feb 23 04:27:44 localhost nova_compute[228765]: usb pci scsi
Feb 23 04:27:44 localhost nova_compute[228765]: virtio virtio-transitional virtio-non-transitional
Feb 23 04:27:44 localhost nova_compute[228765]: random egd builtin
Feb 23 04:27:44 localhost nova_compute[228765]: path handle virtiofs
Feb 23 04:27:44 localhost nova_compute[228765]: tpm-tis tpm-crb
Feb 23 04:27:44 localhost nova_compute[228765]: emulator external
Feb 23 04:27:44 localhost nova_compute[228765]: 2.0
Feb 23 04:27:44 localhost nova_compute[228765]: usb
Feb 23 04:27:44 localhost nova_compute[228765]: pty unix
Feb 23 04:27:44 localhost nova_compute[228765]: qemu builtin
Feb 23 04:27:44 localhost nova_compute[228765]: default passt
Feb 23 04:27:44 localhost nova_compute[228765]: isa hyperv
Feb 23 04:27:44 localhost nova_compute[228765]: null vc pty dev file pipe stdio udp tcp unix qemu-vdagent dbus
Feb 23 04:27:44 localhost nova_compute[228765]: relaxed vapic spinlocks vpindex runtime synic stimer reset vendor_id frequencies reenlightenment tlbflush ipi avic emsr_bitmap xmm_input
Feb 23 04:27:44 localhost nova_compute[228765]: 4095 on off off Linux KVM Hv
Feb 23 04:27:44 localhost nova_compute[228765]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 23 04:27:44 localhost nova_compute[228765]: 2026-02-23 09:27:43.995 228769 DEBUG nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 23 04:27:44 localhost nova_compute[228765]: /usr/libexec/qemu-kvm
Feb 23 04:27:44 localhost nova_compute[228765]: kvm
Feb 23 04:27:44 localhost nova_compute[228765]: pc-q35-rhel9.8.0
Feb 23 04:27:44 localhost nova_compute[228765]: x86_64
Feb 23 04:27:44 localhost nova_compute[228765]: efi
Feb 23 04:27:44 localhost nova_compute[228765]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd /usr/share/edk2/ovmf/OVMF_CODE.fd /usr/share/edk2/ovmf/OVMF.amdsev.fd /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd
Feb 23 04:27:44 localhost nova_compute[228765]: rom pflash
Feb 23 04:27:44 localhost nova_compute[228765]: yes no yes no
Feb 23 04:27:44 localhost nova_compute[228765]: on off on off
Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-Rome AMD
Feb 23 04:27:44 localhost nova_compute[228765]: 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4 Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5 ClearwaterForest ClearwaterForest-v1 Conroe Conroe-v1 Cooperlake Cooperlake-v1 Cooperlake-v2 Denverton Denverton-v1 Denverton-v2 Denverton-v3 Dhyana Dhyana-v1 Dhyana-v2 EPYC EPYC-Genoa EPYC-Genoa-v1
localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-Genoa-v2 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost 
nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-IBPB Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-Milan Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-Milan-v1 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-Milan-v2 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-Milan-v3 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost 
nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-Rome Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-Rome-v1 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-Rome-v2 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-Rome-v3 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-Rome-v4 Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-Rome-v5 Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-Turin Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 
localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-Turin-v1 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost 
nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-v1 Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-v2 Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-v3 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-v4 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: EPYC-v5 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 
localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: GraniteRapids Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost 
nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: GraniteRapids-v1 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost 
nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: GraniteRapids-v2 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost 
nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: GraniteRapids-v3 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost 
nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 
04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Haswell Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Haswell-IBRS Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Haswell-noTSX Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Haswell-noTSX-IBRS Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Haswell-v1 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Haswell-v2 Feb 23 
04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Haswell-v3 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Haswell-v4 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Icelake-Server Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: 
Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Icelake-Server-noTSX Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Icelake-Server-v1 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost 
Feb 23 04:27:44 localhost nova_compute[228765]: Icelake-Server-v2
Feb 23 04:27:44 localhost nova_compute[228765]: Icelake-Server-v3
Feb 23 04:27:44 localhost nova_compute[228765]: Icelake-Server-v4
Feb 23 04:27:44 localhost nova_compute[228765]: Icelake-Server-v5
Feb 23 04:27:44 localhost nova_compute[228765]: Icelake-Server-v6
Feb 23 04:27:44 localhost nova_compute[228765]: Icelake-Server-v7
Feb 23 04:27:44 localhost nova_compute[228765]: IvyBridge
Feb 23 04:27:44 localhost nova_compute[228765]: IvyBridge-IBRS
Feb 23 04:27:44 localhost nova_compute[228765]: IvyBridge-v1
Feb 23 04:27:44 localhost nova_compute[228765]: IvyBridge-v2
Feb 23 04:27:44 localhost nova_compute[228765]: KnightsMill
Feb 23 04:27:44 localhost nova_compute[228765]: KnightsMill-v1
Feb 23 04:27:44 localhost nova_compute[228765]: Nehalem
Feb 23 04:27:44 localhost nova_compute[228765]: Nehalem-IBRS
Feb 23 04:27:44 localhost nova_compute[228765]: Nehalem-v1
Feb 23 04:27:44 localhost nova_compute[228765]: Nehalem-v2
Feb 23 04:27:44 localhost nova_compute[228765]: Opteron_G1
Feb 23 04:27:44 localhost nova_compute[228765]: Opteron_G1-v1
Feb 23 04:27:44 localhost nova_compute[228765]: Opteron_G2
Feb 23 04:27:44 localhost nova_compute[228765]: Opteron_G2-v1
Feb 23 04:27:44 localhost nova_compute[228765]: Opteron_G3
Feb 23 04:27:44 localhost nova_compute[228765]: Opteron_G3-v1
Feb 23 04:27:44 localhost nova_compute[228765]: Opteron_G4
Feb 23 04:27:44 localhost nova_compute[228765]: Opteron_G4-v1
Feb 23 04:27:44 localhost nova_compute[228765]: Opteron_G5
Feb 23 04:27:44 localhost nova_compute[228765]: Opteron_G5-v1
Feb 23 04:27:44 localhost nova_compute[228765]: Penryn
Feb 23 04:27:44 localhost nova_compute[228765]: Penryn-v1
Feb 23 04:27:44 localhost nova_compute[228765]: SandyBridge
Feb 23 04:27:44 localhost nova_compute[228765]: SandyBridge-IBRS
Feb 23 04:27:44 localhost nova_compute[228765]: SandyBridge-v1
Feb 23 04:27:44 localhost nova_compute[228765]: SandyBridge-v2
Feb 23 04:27:44 localhost nova_compute[228765]: SapphireRapids
Feb 23 04:27:44 localhost nova_compute[228765]: SapphireRapids-v1
Feb 23 04:27:44 localhost nova_compute[228765]: SapphireRapids-v2
Feb 23 04:27:44 localhost nova_compute[228765]: SapphireRapids-v3
Feb 23 04:27:44 localhost nova_compute[228765]: SapphireRapids-v4
Feb 23 04:27:44 localhost nova_compute[228765]: SierraForest
Feb 23 04:27:44 localhost nova_compute[228765]: SierraForest-v1
Feb 23 04:27:44 localhost nova_compute[228765]: SierraForest-v2
Feb 23 04:27:44 localhost nova_compute[228765]: SierraForest-v3
Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Client
Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Client-IBRS
Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Client-noTSX-IBRS
Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Client-v1
Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Client-v2
Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Client-v3
Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Client-v4
Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Server
Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Server-IBRS
Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Server-noTSX-IBRS
Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Server-v1
Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Server-v2
Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Server-v3
Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Server-v4
Feb 23 04:27:44 localhost nova_compute[228765]: Skylake-Server-v5
Feb 23 04:27:44 localhost nova_compute[228765]: Snowridge
Feb 23 04:27:44 localhost nova_compute[228765]: Snowridge-v1
Feb 23 04:27:44 localhost nova_compute[228765]: Snowridge-v2
Feb 23 04:27:44 localhost nova_compute[228765]: Snowridge-v3
Feb 23 04:27:44 localhost nova_compute[228765]: Snowridge-v4
Feb 23 04:27:44 localhost nova_compute[228765]: Westmere
Feb 23 04:27:44 localhost nova_compute[228765]: Westmere-IBRS
Feb 23 04:27:44 localhost nova_compute[228765]: Westmere-v1
Feb 23 04:27:44 localhost nova_compute[228765]: Westmere-v2
Feb 23 04:27:44 localhost nova_compute[228765]: athlon
Feb 23 04:27:44 localhost nova_compute[228765]: athlon-v1
Feb 23 04:27:44 localhost nova_compute[228765]: core2duo
Feb 23 04:27:44 localhost nova_compute[228765]: core2duo-v1
04:27:44 localhost nova_compute[228765]: coreduo Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: coreduo-v1 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: kvm32 Feb 23 04:27:44 localhost nova_compute[228765]: kvm32-v1 Feb 23 04:27:44 localhost nova_compute[228765]: kvm64 Feb 23 04:27:44 localhost nova_compute[228765]: kvm64-v1 Feb 23 04:27:44 localhost nova_compute[228765]: n270 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: n270-v1 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: pentium Feb 23 04:27:44 localhost nova_compute[228765]: pentium-v1 Feb 23 04:27:44 localhost nova_compute[228765]: pentium2 Feb 23 04:27:44 localhost nova_compute[228765]: pentium2-v1 Feb 23 04:27:44 localhost nova_compute[228765]: pentium3 Feb 23 04:27:44 localhost nova_compute[228765]: pentium3-v1 Feb 23 04:27:44 localhost nova_compute[228765]: phenom Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: phenom-v1 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: qemu32 Feb 23 04:27:44 localhost nova_compute[228765]: qemu32-v1 Feb 23 04:27:44 localhost 
nova_compute[228765]: qemu64 Feb 23 04:27:44 localhost nova_compute[228765]: qemu64-v1 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: file Feb 23 04:27:44 localhost nova_compute[228765]: anonymous Feb 23 04:27:44 localhost nova_compute[228765]: memfd Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: disk Feb 23 04:27:44 localhost nova_compute[228765]: cdrom Feb 23 04:27:44 localhost nova_compute[228765]: floppy Feb 23 04:27:44 localhost nova_compute[228765]: lun Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: fdc Feb 23 04:27:44 localhost nova_compute[228765]: scsi Feb 23 04:27:44 localhost nova_compute[228765]: virtio Feb 23 04:27:44 localhost nova_compute[228765]: usb Feb 23 04:27:44 localhost nova_compute[228765]: sata Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: virtio Feb 23 04:27:44 localhost nova_compute[228765]: virtio-transitional Feb 23 04:27:44 localhost nova_compute[228765]: virtio-non-transitional Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: vnc Feb 23 04:27:44 localhost nova_compute[228765]: egl-headless Feb 23 04:27:44 localhost nova_compute[228765]: dbus Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost 
nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: subsystem Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: default Feb 23 04:27:44 localhost nova_compute[228765]: mandatory Feb 23 04:27:44 localhost nova_compute[228765]: requisite Feb 23 04:27:44 localhost nova_compute[228765]: optional Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: usb Feb 23 04:27:44 localhost nova_compute[228765]: pci Feb 23 04:27:44 localhost nova_compute[228765]: scsi Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: virtio Feb 23 04:27:44 localhost nova_compute[228765]: virtio-transitional Feb 23 04:27:44 localhost nova_compute[228765]: virtio-non-transitional Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: random Feb 23 04:27:44 localhost nova_compute[228765]: egd Feb 23 04:27:44 localhost nova_compute[228765]: builtin Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: path Feb 23 04:27:44 localhost nova_compute[228765]: handle Feb 23 04:27:44 localhost nova_compute[228765]: virtiofs Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: 
Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: tpm-tis Feb 23 04:27:44 localhost nova_compute[228765]: tpm-crb Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: emulator Feb 23 04:27:44 localhost nova_compute[228765]: external Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: 2.0 Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: usb Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: pty Feb 23 04:27:44 localhost nova_compute[228765]: unix Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: qemu Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: builtin Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: default Feb 23 04:27:44 localhost nova_compute[228765]: passt Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: 
Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: isa Feb 23 04:27:44 localhost nova_compute[228765]: hyperv Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: null Feb 23 04:27:44 localhost nova_compute[228765]: vc Feb 23 04:27:44 localhost nova_compute[228765]: pty Feb 23 04:27:44 localhost nova_compute[228765]: dev Feb 23 04:27:44 localhost nova_compute[228765]: file Feb 23 04:27:44 localhost nova_compute[228765]: pipe Feb 23 04:27:44 localhost nova_compute[228765]: stdio Feb 23 04:27:44 localhost nova_compute[228765]: udp Feb 23 04:27:44 localhost nova_compute[228765]: tcp Feb 23 04:27:44 localhost nova_compute[228765]: unix Feb 23 04:27:44 localhost nova_compute[228765]: qemu-vdagent Feb 23 04:27:44 localhost nova_compute[228765]: dbus Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: relaxed Feb 23 04:27:44 localhost nova_compute[228765]: vapic Feb 23 04:27:44 localhost nova_compute[228765]: spinlocks Feb 23 04:27:44 localhost 
nova_compute[228765]: vpindex Feb 23 04:27:44 localhost nova_compute[228765]: runtime Feb 23 04:27:44 localhost nova_compute[228765]: synic Feb 23 04:27:44 localhost nova_compute[228765]: stimer Feb 23 04:27:44 localhost nova_compute[228765]: reset Feb 23 04:27:44 localhost nova_compute[228765]: vendor_id Feb 23 04:27:44 localhost nova_compute[228765]: frequencies Feb 23 04:27:44 localhost nova_compute[228765]: reenlightenment Feb 23 04:27:44 localhost nova_compute[228765]: tlbflush Feb 23 04:27:44 localhost nova_compute[228765]: ipi Feb 23 04:27:44 localhost nova_compute[228765]: avic Feb 23 04:27:44 localhost nova_compute[228765]: emsr_bitmap Feb 23 04:27:44 localhost nova_compute[228765]: xmm_input Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: 4095 Feb 23 04:27:44 localhost nova_compute[228765]: on Feb 23 04:27:44 localhost nova_compute[228765]: off Feb 23 04:27:44 localhost nova_compute[228765]: off Feb 23 04:27:44 localhost nova_compute[228765]: Linux KVM Hv Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: Feb 23 04:27:44 localhost nova_compute[228765]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 23 04:27:44 localhost nova_compute[228765]: 2026-02-23 09:27:44.011 228769 DEBUG nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Feb 23 04:27:44 localhost nova_compute[228765]: 2026-02-23 09:27:44.011 228769 DEBUG nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Checking secure boot support for host arch (x86_64) 
supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Feb 23 04:27:44 localhost nova_compute[228765]: 2026-02-23 09:27:44.015 228769 DEBUG nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Feb 23 04:27:44 localhost nova_compute[228765]: 2026-02-23 09:27:44.015 228769 INFO nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Secure Boot support detected#033[00m Feb 23 04:27:44 localhost nova_compute[228765]: 2026-02-23 09:27:44.017 228769 INFO nova.virt.libvirt.driver [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Feb 23 04:27:44 localhost nova_compute[228765]: 2026-02-23 09:27:44.028 228769 DEBUG nova.virt.libvirt.driver [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Feb 23 04:27:44 localhost nova_compute[228765]: 2026-02-23 09:27:44.053 228769 INFO nova.virt.node [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Determined node identity 9df77b74-d7d6-46a8-93cb-cadec85557a4 from /var/lib/nova/compute_id#033[00m Feb 23 04:27:44 localhost nova_compute[228765]: 2026-02-23 09:27:44.080 228769 DEBUG nova.compute.manager [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Verified node 9df77b74-d7d6-46a8-93cb-cadec85557a4
matches my host np0005626465.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Feb 23 04:27:44 localhost nova_compute[228765]: 2026-02-23 09:27:44.124 228769 INFO nova.compute.manager [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Feb 23 04:27:44 localhost python3.9[229154]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771838863.1660175-3730-44592370396278/.source.yaml _original_basename=.vqpfev43 follow=False checksum=c0274b4e8da702f77c15fa25b71400f0f9b8a680 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:27:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56868 DF PROTO=TCP SPT=48034 DPT=9882 SEQ=1703582390 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0122A3830000000001030307) Feb 23 04:27:44 localhost nova_compute[228765]: 2026-02-23 09:27:44.612 228769 INFO nova.service [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Updating service version for nova-compute on np0005626465.localdomain from 57 to 66#033[00m Feb 23 04:27:44 localhost nova_compute[228765]: 2026-02-23 09:27:44.663 228769 DEBUG oslo_concurrency.lockutils [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:27:44 localhost nova_compute[228765]: 2026-02-23 09:27:44.663 228769 DEBUG 
oslo_concurrency.lockutils [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:27:44 localhost nova_compute[228765]: 2026-02-23 09:27:44.664 228769 DEBUG oslo_concurrency.lockutils [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:27:44 localhost nova_compute[228765]: 2026-02-23 09:27:44.664 228769 DEBUG nova.compute.resource_tracker [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:27:44 localhost nova_compute[228765]: 2026-02-23 09:27:44.665 228769 DEBUG oslo_concurrency.processutils [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:27:45 localhost nova_compute[228765]: 2026-02-23 09:27:45.122 228769 DEBUG oslo_concurrency.processutils [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:27:45 localhost systemd[1]: Started libvirt nodedev daemon. 
Feb 23 04:27:45 localhost python3.9[229301]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:27:45 localhost nova_compute[228765]: 2026-02-23 09:27:45.460 228769 WARNING nova.virt.libvirt.driver [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:27:45 localhost nova_compute[228765]: 2026-02-23 09:27:45.462 228769 DEBUG nova.compute.resource_tracker [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=13613MB free_disk=41.83688735961914GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", 
"vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:27:45 localhost nova_compute[228765]: 2026-02-23 09:27:45.462 228769 DEBUG oslo_concurrency.lockutils [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:27:45 localhost nova_compute[228765]: 2026-02-23 09:27:45.463 228769 DEBUG oslo_concurrency.lockutils [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:27:45 localhost nova_compute[228765]: 2026-02-23 09:27:45.619 228769 DEBUG nova.compute.resource_tracker [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:27:45 
localhost nova_compute[228765]: 2026-02-23 09:27:45.619 228769 DEBUG nova.compute.resource_tracker [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:27:45 localhost nova_compute[228765]: 2026-02-23 09:27:45.641 228769 DEBUG nova.scheduler.client.report [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Refreshing inventories for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 23 04:27:45 localhost nova_compute[228765]: 2026-02-23 09:27:45.663 228769 DEBUG nova.scheduler.client.report [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Updating ProviderTree inventory for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 23 04:27:45 localhost nova_compute[228765]: 2026-02-23 09:27:45.663 228769 DEBUG nova.compute.provider_tree [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Updating inventory in ProviderTree for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 
1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 23 04:27:45 localhost nova_compute[228765]: 2026-02-23 09:27:45.688 228769 DEBUG nova.scheduler.client.report [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Refreshing aggregate associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 23 04:27:45 localhost nova_compute[228765]: 2026-02-23 09:27:45.728 228769 DEBUG nova.scheduler.client.report [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Refreshing trait associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_LAN9118,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_AVX2,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,HW_CPU
_X86_CLMUL,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI2,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 23 04:27:45 localhost nova_compute[228765]: 2026-02-23 09:27:45.747 228769 DEBUG oslo_concurrency.processutils [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:27:46 localhost nova_compute[228765]: 2026-02-23 09:27:46.203 228769 DEBUG oslo_concurrency.processutils [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:27:46 localhost nova_compute[228765]: 2026-02-23 09:27:46.209 228769 DEBUG nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Feb 23 04:27:46 localhost nova_compute[228765]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Feb 23 04:27:46 localhost nova_compute[228765]: 2026-02-23 09:27:46.210 228769 INFO nova.virt.libvirt.host [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] kernel doesn't support AMD SEV#033[00m Feb 23 04:27:46 localhost nova_compute[228765]: 2026-02-23 09:27:46.211 228769 DEBUG nova.compute.provider_tree [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:27:46 localhost nova_compute[228765]: 2026-02-23 09:27:46.212 228769 DEBUG 
nova.virt.libvirt.driver [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Feb 23 04:27:46 localhost nova_compute[228765]: 2026-02-23 09:27:46.238 228769 DEBUG nova.scheduler.client.report [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:27:46 localhost python3.9[229435]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:27:46 localhost nova_compute[228765]: 2026-02-23 09:27:46.368 228769 DEBUG nova.compute.provider_tree [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Updating resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 generation from 2 to 3 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Feb 23 04:27:46 localhost nova_compute[228765]: 2026-02-23 09:27:46.393 228769 DEBUG nova.compute.resource_tracker [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 
04:27:46 localhost nova_compute[228765]: 2026-02-23 09:27:46.393 228769 DEBUG oslo_concurrency.lockutils [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:27:46 localhost nova_compute[228765]: 2026-02-23 09:27:46.394 228769 DEBUG nova.service [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Feb 23 04:27:46 localhost nova_compute[228765]: 2026-02-23 09:27:46.472 228769 DEBUG nova.service [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Feb 23 04:27:46 localhost nova_compute[228765]: 2026-02-23 09:27:46.472 228769 DEBUG nova.servicegroup.drivers.db [None req-49fc4a0e-c987-48d4-92b5-2a0c567d3f66 - - - - - -] DB_Driver: join new ServiceGroup member np0005626465.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Feb 23 04:27:47 localhost python3.9[229545]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:27:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31615 DF PROTO=TCP SPT=50898 DPT=9105 SEQ=4073563396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0122AF830000000001030307) Feb 23 04:27:48 localhost python3.9[229655]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner 
state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None 
quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Feb 23 04:27:48 localhost systemd-journald[48305]: Field hash table of /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal has a fill level at 122.5 (408 of 333 items), suggesting rotation. Feb 23 04:27:48 localhost systemd-journald[48305]: /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 23 04:27:48 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 23 04:27:48 localhost rsyslogd[758]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 23 04:27:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:27:48.281 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:27:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:27:48.282 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:27:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:27:48.282 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:27:49 localhost python3.9[229790]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 04:27:49 localhost systemd[1]: Stopping nova_compute container... Feb 23 04:27:50 localhost nova_compute[228765]: 2026-02-23 09:27:50.590 228769 WARNING amqp [-] Received method (60, 30) during closing channel 1. 
This method will be ignored#033[00m Feb 23 04:27:50 localhost nova_compute[228765]: 2026-02-23 09:27:50.591 228769 DEBUG oslo_concurrency.lockutils [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:27:50 localhost nova_compute[228765]: 2026-02-23 09:27:50.592 228769 DEBUG oslo_concurrency.lockutils [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:27:50 localhost nova_compute[228765]: 2026-02-23 09:27:50.592 228769 DEBUG oslo_concurrency.lockutils [None req-f612d458-d8ec-49b9-8c3d-32caf7fa58d3 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:27:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:27:50 localhost systemd[1]: libpod-6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3.scope: Deactivated successfully. Feb 23 04:27:50 localhost journal[228928]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, ) Feb 23 04:27:50 localhost journal[228928]: hostname: np0005626465.localdomain Feb 23 04:27:50 localhost journal[228928]: End of file while reading data: Input/output error Feb 23 04:27:50 localhost systemd[1]: libpod-6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3.scope: Consumed 3.739s CPU time. 
Feb 23 04:27:50 localhost podman[229794]: 2026-02-23 09:27:50.964080557 +0000 UTC m=+1.859278841 container died 6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, config_id=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:27:50 localhost systemd[1]: tmp-crun.dfBEd7.mount: 
Deactivated successfully. Feb 23 04:27:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3-userdata-shm.mount: Deactivated successfully. Feb 23 04:27:51 localhost podman[229794]: 2026-02-23 09:27:51.017004754 +0000 UTC m=+1.912202988 container cleanup 6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_managed=true, config_id=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', 
'/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}) Feb 23 04:27:51 localhost podman[229794]: nova_compute Feb 23 04:27:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51014 DF PROTO=TCP SPT=52638 DPT=9100 SEQ=1105944618 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0122BDAF0000000001030307) Feb 23 04:27:51 localhost podman[229850]: error opening file `/run/crun/6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3/status`: No such file or directory Feb 23 04:27:51 localhost podman[229831]: 2026-02-23 09:27:51.125706857 +0000 UTC m=+0.073334226 container cleanup 6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:27:51 localhost podman[229831]: nova_compute Feb 23 04:27:51 localhost podman[229806]: 2026-02-23 09:27:51.106282069 +0000 UTC m=+0.180810690 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_controller) Feb 23 04:27:51 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Feb 23 04:27:51 localhost systemd[1]: Stopped nova_compute container. Feb 23 04:27:51 localhost systemd[1]: Starting nova_compute container... Feb 23 04:27:51 localhost podman[229806]: 2026-02-23 09:27:51.206996626 +0000 UTC m=+0.281525187 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:27:51 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:27:51 localhost systemd[1]: Started libcrun container. Feb 23 04:27:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ca6780693012a594e87959f4dd1a217af451f5d8176199e7296056d94fc104/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Feb 23 04:27:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ca6780693012a594e87959f4dd1a217af451f5d8176199e7296056d94fc104/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Feb 23 04:27:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ca6780693012a594e87959f4dd1a217af451f5d8176199e7296056d94fc104/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Feb 23 04:27:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ca6780693012a594e87959f4dd1a217af451f5d8176199e7296056d94fc104/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 23 04:27:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ca6780693012a594e87959f4dd1a217af451f5d8176199e7296056d94fc104/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 04:27:51 localhost podman[229858]: 2026-02-23 09:27:51.304740262 +0000 UTC m=+0.111110278 container init 6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Feb 23 04:27:51 localhost podman[229858]: 2026-02-23 09:27:51.311785019 +0000 UTC m=+0.118155035 container start 6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute) Feb 23 04:27:51 localhost podman[229858]: nova_compute Feb 23 04:27:51 localhost nova_compute[229873]: + sudo -E kolla_set_configs Feb 23 04:27:51 localhost systemd[1]: Started nova_compute container. 
Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Validating config file Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Copying service configuration files Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Deleting /etc/nova/nova.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Setting permission for /etc/nova/nova.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Copying /var/lib/kolla/config_files/src/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Setting permission for 
/etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Deleting /etc/ceph Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Creating directory /etc/ceph Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Setting permission for /etc/ceph Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Feb 23 04:27:51 localhost 
nova_compute[229873]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Deleting /usr/sbin/iscsiadm Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Writing out command to execute Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 23 04:27:51 localhost nova_compute[229873]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 23 04:27:51 localhost nova_compute[229873]: ++ cat /run_command Feb 23 04:27:51 localhost nova_compute[229873]: + CMD=nova-compute Feb 23 04:27:51 localhost nova_compute[229873]: + ARGS= Feb 23 04:27:51 localhost nova_compute[229873]: + sudo kolla_copy_cacerts Feb 23 04:27:51 localhost nova_compute[229873]: + [[ ! -n '' ]] Feb 23 04:27:51 localhost nova_compute[229873]: + . 
kolla_extend_start Feb 23 04:27:51 localhost nova_compute[229873]: Running command: 'nova-compute' Feb 23 04:27:51 localhost nova_compute[229873]: + echo 'Running command: '\''nova-compute'\''' Feb 23 04:27:51 localhost nova_compute[229873]: + umask 0022 Feb 23 04:27:51 localhost nova_compute[229873]: + exec nova-compute Feb 23 04:27:52 localhost python3.9[229994]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None 
memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Feb 23 04:27:52 localhost systemd[1]: Started libpod-conmon-3be7d315f599dc812d2d03af87c7fb706be7c60341f29f7efc5a148f2848a784.scope. Feb 23 04:27:52 localhost systemd[1]: Started libcrun container. 
Feb 23 04:27:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f76150d43c8060977d68d32e88235268da3a94d92618ab185a37245ce82cad/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Feb 23 04:27:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f76150d43c8060977d68d32e88235268da3a94d92618ab185a37245ce82cad/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Feb 23 04:27:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f76150d43c8060977d68d32e88235268da3a94d92618ab185a37245ce82cad/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 23 04:27:52 localhost podman[230019]: 2026-02-23 09:27:52.402886048 +0000 UTC m=+0.146489085 container init 3be7d315f599dc812d2d03af87c7fb706be7c60341f29f7efc5a148f2848a784 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=nova_compute_init, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:27:52 localhost podman[230019]: 2026-02-23 09:27:52.413371441 +0000 UTC m=+0.156974468 container start 3be7d315f599dc812d2d03af87c7fb706be7c60341f29f7efc5a148f2848a784 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init, managed_by=edpm_ansible) Feb 23 04:27:52 localhost python3.9[229994]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Applying nova statedir ownership Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Feb 23 
04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/ Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436 Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Feb 23 04:27:52 localhost nova_compute_init[230038]: 
INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/fc52238ffcbdcb325c6bf3fe6412477fc4bdb6cd9151f39289b74f25e08e0db9 Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/f23138a46bc477ec40b895db4322b27384fbb01ccd8da7395c9877132dfb82af Feb 23 04:27:52 localhost nova_compute_init[230038]: INFO:nova_statedir:Nova statedir ownership complete Feb 23 04:27:52 localhost systemd[1]: libpod-3be7d315f599dc812d2d03af87c7fb706be7c60341f29f7efc5a148f2848a784.scope: Deactivated successfully. 
Feb 23 04:27:52 localhost podman[230039]: 2026-02-23 09:27:52.48815361 +0000 UTC m=+0.052896897 container died 3be7d315f599dc812d2d03af87c7fb706be7c60341f29f7efc5a148f2848a784 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.vendor=CentOS, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_id=nova_compute_init, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:27:52 localhost podman[230050]: 2026-02-23 09:27:52.557366418 +0000 UTC m=+0.064017810 container cleanup 3be7d315f599dc812d2d03af87c7fb706be7c60341f29f7efc5a148f2848a784 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=nova_compute_init, container_name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 
'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:27:52 localhost systemd[1]: libpod-conmon-3be7d315f599dc812d2d03af87c7fb706be7c60341f29f7efc5a148f2848a784.scope: Deactivated successfully. Feb 23 04:27:52 localhost systemd[1]: var-lib-containers-storage-overlay-00f76150d43c8060977d68d32e88235268da3a94d92618ab185a37245ce82cad-merged.mount: Deactivated successfully. Feb 23 04:27:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3be7d315f599dc812d2d03af87c7fb706be7c60341f29f7efc5a148f2848a784-userdata-shm.mount: Deactivated successfully. 
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.038 229877 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.038 229877 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.039 229877 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.039 229877 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Feb 23 04:27:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43154 DF PROTO=TCP SPT=47834 DPT=9100 SEQ=1499084772 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0122C5830000000001030307) Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.151 229877 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.173 229877 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.173 229877 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. 
execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.586 229877 INFO nova.virt.driver [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Feb 23 04:27:53 localhost systemd[1]: session-53.scope: Deactivated successfully. Feb 23 04:27:53 localhost systemd[1]: session-53.scope: Consumed 1min 36.009s CPU time. Feb 23 04:27:53 localhost systemd-logind[759]: Session 53 logged out. Waiting for processes to exit. Feb 23 04:27:53 localhost systemd-logind[759]: Removed session 53. Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.704 229877 INFO nova.compute.provider_config [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.726 229877 WARNING nova.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.726 229877 DEBUG oslo_concurrency.lockutils [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.727 229877 DEBUG oslo_concurrency.lockutils [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] Acquired lock "singleton_lock" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.727 229877 DEBUG oslo_concurrency.lockutils [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.727 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.727 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.727 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.728 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.728 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.728 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] 
================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.728 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.728 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.728 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.728 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.729 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.729 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.729 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cert = 
self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.729 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.729 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.729 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.730 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.730 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.730 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.730 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] console_host = np0005626465.localdomain 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.730 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.730 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.730 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.730 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.731 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.731 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.731 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 
04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.731 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.731 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.731 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.731 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.732 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.732 229877 
DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.732 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.732 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.732 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.732 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.732 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.733 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] host = np0005626465.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.733 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - 
- -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.733 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.733 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.733 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.733 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.734 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.734 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.734 229877 DEBUG oslo_service.service [None 
req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.734 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.734 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.734 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.734 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.735 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.735 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.735 229877 DEBUG oslo_service.service [None 
req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.735 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.735 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.735 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.735 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.735 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.736 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.736 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] log_rotate_interval_type = days log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.736 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.736 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.736 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.736 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.736 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.737 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s 
%(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.737 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.737 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.737 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.737 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.737 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.737 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.738 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] max_logfile_size_mb = 20 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.738 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.738 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.738 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.738 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.738 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.738 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.739 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] my_block_storage_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 
23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.739 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] my_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.739 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.739 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.739 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.739 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.739 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.739 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost 
nova_compute[229873]: 2026-02-23 09:27:53.740 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.740 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.740 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.740 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.740 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.741 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.741 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.741 229877 DEBUG oslo_service.service [None 
req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.741 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.741 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.741 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.741 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.742 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.742 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.742 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.742 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.742 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.742 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.742 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.742 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.743 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.743 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost 
nova_compute[229873]: 2026-02-23 09:27:53.743 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.743 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.743 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.743 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.744 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.744 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.744 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost 
nova_compute[229873]: 2026-02-23 09:27:53.744 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.744 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.744 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.745 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.745 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.745 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.745 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.745 229877 DEBUG 
oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.745 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.745 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.745 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.746 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.746 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.746 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.746 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] timeout_nbd = 10 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.746 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.746 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.746 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.747 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.747 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.747 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.747 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost 
nova_compute[229873]: 2026-02-23 09:27:53.747 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.747 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.747 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.748 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.748 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.748 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.748 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.748 229877 DEBUG oslo_service.service [None 
req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.748 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.748 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.748 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.749 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.749 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.749 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.749 229877 DEBUG 
oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.749 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.749 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.749 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.750 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.750 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.750 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.750 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.750 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.750 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.750 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.751 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.751 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.751 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.751 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.751 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.751 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.751 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.752 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.752 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.752 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 
04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.752 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.752 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.752 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.752 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.753 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.753 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.753 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost 
nova_compute[229873]: 2026-02-23 09:27:53.753 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.753 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.753 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.753 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.754 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.754 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.754 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.754 229877 DEBUG 
oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.754 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.754 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.754 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.754 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.755 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.755 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.755 229877 DEBUG 
oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.755 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.755 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.755 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.755 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.756 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.756 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.756 229877 DEBUG oslo_service.service [None 
req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.756 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.756 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.756 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.756 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.757 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.757 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.757 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] 
cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.757 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.757 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.757 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.757 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.757 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.758 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.758 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cinder.cross_az_attach = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.758 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.758 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.758 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.758 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.758 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.759 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cinder.os_region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.759 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost 
nova_compute[229873]: 2026-02-23 09:27:53.759 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.759 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.759 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.759 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.759 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.759 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.760 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost 
nova_compute[229873]: 2026-02-23 09:27:53.760 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.760 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.760 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.760 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.760 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.760 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.761 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] conductor.workers = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.761 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.761 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.761 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.761 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.761 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.761 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.762 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 
localhost nova_compute[229873]: 2026-02-23 09:27:53.762 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.762 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.762 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.762 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.762 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.762 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.762 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.763 229877 DEBUG 
oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.763 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.763 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.763 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.763 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.763 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.763 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.763 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 
- - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.764 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.764 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.764 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.764 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.764 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.764 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.764 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.connection_trace = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.765 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.765 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.765 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.765 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.765 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.765 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.765 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.max_retries = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.765 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.766 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.766 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.766 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.766 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.766 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.766 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] database.sqlite_synchronous = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.766 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.767 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.767 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.767 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.767 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.767 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.767 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.767 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.768 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.768 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.768 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.768 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.768 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.768 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.768 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.769 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.769 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.769 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.769 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.769 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.769 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] devices.enabled_mdev_types = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.769 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.770 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.770 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.770 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.770 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.770 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.770 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.collect_timing = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.771 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.771 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.771 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.771 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.771 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.771 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.771 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.endpoint_override = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.772 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.772 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.772 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.772 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.772 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.772 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.772 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost 
nova_compute[229873]: 2026-02-23 09:27:53.772 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.773 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.773 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.773 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.773 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.773 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.773 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.773 229877 DEBUG oslo_service.service [None 
req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.774 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.774 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.774 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.774 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.774 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.774 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.774 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] 
hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.775 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.775 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.775 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.775 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.775 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.775 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.776 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] 
hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.776 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.776 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.776 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.776 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.776 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.776 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.777 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - 
-] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.777 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.777 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.777 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.777 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.778 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.778 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.778 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] 
image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.778 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.778 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.778 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.778 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.779 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.779 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.779 229877 DEBUG oslo_service.service [None 
req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.779 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.779 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.779 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.779 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.780 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.780 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.780 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.insecure = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.780 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.780 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.780 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.780 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.781 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.781 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.781 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 
04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.781 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.781 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.781 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.782 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.782 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.782 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.782 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 
2026-02-23 09:27:53.782 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.782 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.783 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.783 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.783 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.783 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.783 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.783 
229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.barbican_region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.784 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.784 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.784 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.784 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.784 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.784 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.785 229877 DEBUG oslo_service.service [None 
req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.785 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.785 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.785 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.785 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.785 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.785 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.785 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] 
barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.786 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.786 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.786 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.786 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.786 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.786 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.786 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] 
barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.787 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.787 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.787 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.787 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.787 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.787 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.788 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vault.keyfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.788 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.788 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.788 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.788 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.788 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.788 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.789 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost 
nova_compute[229873]: 2026-02-23 09:27:53.789 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.789 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.789 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.789 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.789 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.789 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.789 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.790 229877 DEBUG 
oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.790 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.790 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.790 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.790 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.790 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.790 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.791 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] 
keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.791 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.791 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.791 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.791 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.791 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.791 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.792 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.connection_uri = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.792 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.792 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.792 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.792 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.792 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.793 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.793 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.793 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.793 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.793 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.793 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.793 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.793 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.794 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.gid_maps = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.794 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.794 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.794 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.794 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.794 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.794 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.795 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] 
libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.795 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.795 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.795 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.795 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.795 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.795 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.796 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.iser_use_multipath = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.796 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.796 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.796 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.796 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.796 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.796 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.796 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] 
libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.797 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.797 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.797 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.797 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.797 229877 WARNING oslo_config.cfg [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Feb 23 04:27:53 localhost nova_compute[229873]: live_migration_uri is deprecated for removal in favor of two other options that Feb 23 04:27:53 localhost nova_compute[229873]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Feb 23 04:27:53 localhost nova_compute[229873]: and ``live_migration_inbound_addr`` respectively. Feb 23 04:27:53 localhost nova_compute[229873]: ). 
Its value may be silently ignored in the future.#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.797 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.798 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.798 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.798 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.798 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.798 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.798 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] 
libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.798 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.799 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.799 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.799 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.799 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.799 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.799 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.quobyte_client_cfg = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.799 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.800 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.800 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.800 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.800 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.rbd_secret_uuid = f1fea371-cb69-578d-a3d0-b5c472a84b46 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.800 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.800 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] 
libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.801 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.801 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.801 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.801 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.801 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.801 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.801 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.smbfs_mount_options = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.801 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.802 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.802 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.802 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.802 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.802 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.802 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.swtpm_group 
= tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.803 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.803 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.803 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.803 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.803 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.803 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.803 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.volume_clear = zero log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.803 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.804 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.804 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.804 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.804 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.804 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.804 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.vzstorage_mount_perms = 0770 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.804 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.805 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.805 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.805 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.805 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.805 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.805 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.806 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.806 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.806 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.806 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.806 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.806 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.806 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.http_retries = 3 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.807 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.807 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.807 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.807 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.807 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.807 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.808 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 
localhost nova_compute[229873]: 2026-02-23 09:27:53.808 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.808 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.808 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.808 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.808 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.808 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.809 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 
09:27:53.809 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.809 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.809 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.809 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.809 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.809 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.810 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.810 229877 
DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.810 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.810 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.810 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.810 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.810 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.811 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.811 229877 DEBUG 
oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.811 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.811 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.811 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.811 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.811 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.811 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.812 229877 DEBUG oslo_service.service [None 
req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.812 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.812 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.812 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.812 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.812 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.812 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.813 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.password = 
**** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.813 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.813 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.813 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.813 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.813 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.813 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.814 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.service_type = placement log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.814 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.814 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.814 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.814 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.814 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.815 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.815 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.user_domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.815 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.815 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.815 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.815 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.816 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.816 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.816 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.816 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.816 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.816 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.816 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.817 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.817 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.817 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 
09:27:53.817 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.817 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.817 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.817 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.818 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.818 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.818 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.818 229877 DEBUG oslo_service.service [None 
req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.818 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.818 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.819 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.819 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.819 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.819 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 
2026-02-23 09:27:53.819 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.819 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.819 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.820 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.820 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.820 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.820 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.820 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.820 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.820 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.821 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.821 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.821 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.821 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.821 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.821 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.821 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.822 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] 
filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.822 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.822 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.822 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.822 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.822 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.822 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.823 229877 DEBUG 
oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.823 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.823 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.823 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.823 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.823 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.824 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 
2026-02-23 09:27:53.824 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.824 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.824 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.824 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.824 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.825 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.825 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost 
nova_compute[229873]: 2026-02-23 09:27:53.825 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.825 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.825 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.825 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.825 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.826 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.826 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.826 
229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.826 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.826 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.826 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.826 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.827 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.827 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.827 229877 DEBUG oslo_service.service 
[None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.827 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.827 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.827 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.828 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.828 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.828 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.828 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 
- - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.828 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.828 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.829 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.829 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.829 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.829 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.829 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] 
vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.829 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.830 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.830 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.830 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.830 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.830 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.830 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 
- - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.831 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.831 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.831 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.831 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.831 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.832 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.832 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.host_ip = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.832 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.832 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.832 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.832 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.833 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.833 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.833 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 
04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.833 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.833 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.833 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.834 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.834 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.834 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.834 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 
2026-02-23 09:27:53.834 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.835 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.835 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.835 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.835 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.836 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.836 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 
09:27:53.836 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.836 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.836 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.837 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.837 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.837 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.837 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.837 229877 DEBUG 
oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.837 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.838 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.838 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.838 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.838 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.838 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 
localhost nova_compute[229873]: 2026-02-23 09:27:53.838 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.839 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.839 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.839 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.839 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.839 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.839 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.839 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.840 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.840 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.840 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.840 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.840 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.840 229877 DEBUG oslo_service.service [None 
req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.841 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.841 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.841 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.841 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.841 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.841 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.842 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] 
wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.842 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.842 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.842 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.842 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.842 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.843 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.843 229877 DEBUG oslo_service.service [None 
req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.843 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.843 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.843 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.843 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.844 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.844 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.844 229877 DEBUG 
oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.844 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.844 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.844 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.844 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.845 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.845 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost 
nova_compute[229873]: 2026-02-23 09:27:53.845 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.845 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.845 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.845 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.846 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.846 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.846 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.846 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.846 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.846 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.846 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.847 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.847 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.847 229877 DEBUG oslo_service.service [None 
req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.847 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.847 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.847 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.847 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.848 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.848 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost 
nova_compute[229873]: 2026-02-23 09:27:53.848 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.848 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.848 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.848 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.848 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.849 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.849 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.849 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.849 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.849 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.849 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.850 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.850 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.850 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_notifications.retry = -1 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.850 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.850 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.850 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.850 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.851 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.851 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.851 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] 
oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.851 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.851 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.851 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.851 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.851 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.852 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.852 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.domain_name = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.852 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.852 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.852 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.852 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.852 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.853 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.853 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.password = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.853 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.853 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.853 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.853 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.853 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.854 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.854 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.service_type = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.854 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.854 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.854 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.854 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.854 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.854 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.855 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.user_domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.855 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.855 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.855 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.855 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.855 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.855 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.856 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.856 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.856 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.856 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.856 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.856 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.856 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.857 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - 
- - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.857 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.857 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.857 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.857 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.857 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.858 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.858 229877 DEBUG oslo_service.service [None 
req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.858 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.858 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.859 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.859 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.859 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.859 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.859 
229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.860 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.860 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.860 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.860 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.860 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.860 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.861 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.861 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.861 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.861 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.861 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.861 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.861 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.862 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.862 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.862 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.862 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.862 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.862 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.862 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.863 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.863 229877 DEBUG oslo_service.service [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.864 229877 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260220085704.5cfeecb.el9)
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.879 229877 INFO nova.virt.node [None req-8f1fc543-9118-46d6-a529-99c91c44e585 - - - - - -] Determined node identity 9df77b74-d7d6-46a8-93cb-cadec85557a4 from /var/lib/nova/compute_id
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.879 229877 DEBUG nova.virt.libvirt.host [None req-8f1fc543-9118-46d6-a529-99c91c44e585 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.880 229877 DEBUG nova.virt.libvirt.host [None req-8f1fc543-9118-46d6-a529-99c91c44e585 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.880 229877 DEBUG nova.virt.libvirt.host [None req-8f1fc543-9118-46d6-a529-99c91c44e585 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.880 229877 DEBUG nova.virt.libvirt.host [None req-8f1fc543-9118-46d6-a529-99c91c44e585 - - - - - -] Connecting to libvirt:
qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.891 229877 DEBUG nova.virt.libvirt.host [None req-8f1fc543-9118-46d6-a529-99c91c44e585 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.894 229877 DEBUG nova.virt.libvirt.host [None req-8f1fc543-9118-46d6-a529-99c91c44e585 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.895 229877 INFO nova.virt.libvirt.driver [None req-8f1fc543-9118-46d6-a529-99c91c44e585 - - - - - -] Connection event '1' reason 'None'
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.901 229877 INFO nova.virt.libvirt.host [None req-8f1fc543-9118-46d6-a529-99c91c44e585 - - - - - -] Libvirt host capabilities
[... multi-line libvirt <capabilities> XML dump; the element markup was lost in log extraction, leaving only text nodes. Recoverable values: host UUID 8bb105a9-4892-4676-ace9-e931084902e3; arch x86_64; host CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory figures 16116612 and 4029153; security models selinux (base labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (+107:+107); hvm guests at wordsize 32 and 64 via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (pc), pc-q35-rhel9.8.0 (q35), pc-q35-rhel9.6.0, pc-q35-rhel8.6.0, pc-q35-rhel9.4.0, pc-q35-rhel8.5.0, pc-q35-rhel8.3.0, pc-q35-rhel7.6.0, pc-q35-rhel8.4.0, pc-q35-rhel9.2.0, pc-q35-rhel8.2.0, pc-q35-rhel9.0.0, pc-q35-rhel8.0.0, pc-q35-rhel8.1.0 ...]
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.906 229877 DEBUG nova.virt.libvirt.host [None req-8f1fc543-9118-46d6-a529-99c91c44e585 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.907 229877 DEBUG nova.virt.libvirt.volume.mount [None req-8f1fc543-9118-46d6-a529-99c91c44e585 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.912 229877 DEBUG nova.virt.libvirt.host [None req-8f1fc543-9118-46d6-a529-99c91c44e585 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
[... multi-line libvirt <domainCapabilities> XML dump; markup lost in log extraction and truncated mid-stream at the chunk boundary. Recoverable values: emulator /usr/libexec/qemu-kvm, domain kvm, machine pc-i440fx-rhel7.6.0, arch i686; loader firmware /usr/share/OVMF/OVMF_CODE.secboot.fd with types rom and pflash; host CPU model EPYC-Rome, vendor AMD; supported CPU models include 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1 through Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1 through Cascadelake-Server-v5, ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1 through Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa ...]
localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Genoa-v1 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Genoa-v2 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 
localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-IBPB Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Milan Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Milan-v1 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 
04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Milan-v2 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Milan-v3 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Rome Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: 
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Rome-v1 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Rome-v2 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Rome-v3 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Rome-v4 Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Rome-v5 Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Turin Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 
23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Turin-v1 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 
localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-v1 Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-v2 Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-v3 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-v4 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-v5 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: GraniteRapids Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost 
nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: GraniteRapids-v1 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost 
nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 
04:27:53 localhost nova_compute[229873]: GraniteRapids-v2 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 
localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: GraniteRapids-v3 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost 
nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Haswell Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Haswell-IBRS Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost 
nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Haswell-noTSX Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Haswell-noTSX-IBRS Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Haswell-v1 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Haswell-v2 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Haswell-v3 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Haswell-v4 Feb 23 04:27:53 localhost 
nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Icelake-Server Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Icelake-Server-noTSX Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 
Feb 23 04:27:53 localhost nova_compute[229873]: Icelake-Server-v1
Feb 23 04:27:53 localhost nova_compute[229873]: Icelake-Server-v2
Feb 23 04:27:53 localhost nova_compute[229873]: Icelake-Server-v3
Feb 23 04:27:53 localhost nova_compute[229873]: Icelake-Server-v4
Feb 23 04:27:53 localhost nova_compute[229873]: Icelake-Server-v5
Feb 23 04:27:53 localhost nova_compute[229873]: Icelake-Server-v6
Feb 23 04:27:53 localhost nova_compute[229873]: Icelake-Server-v7
Feb 23 04:27:53 localhost nova_compute[229873]: IvyBridge
Feb 23 04:27:53 localhost nova_compute[229873]: IvyBridge-IBRS
Feb 23 04:27:53 localhost nova_compute[229873]: IvyBridge-v1
Feb 23 04:27:53 localhost nova_compute[229873]: IvyBridge-v2
Feb 23 04:27:53 localhost nova_compute[229873]: KnightsMill
Feb 23 04:27:53 localhost nova_compute[229873]: KnightsMill-v1
Feb 23 04:27:53 localhost nova_compute[229873]: Nehalem
Feb 23 04:27:53 localhost nova_compute[229873]: Nehalem-IBRS
Feb 23 04:27:53 localhost nova_compute[229873]: Nehalem-v1
Feb 23 04:27:53 localhost nova_compute[229873]: Nehalem-v2
Feb 23 04:27:53 localhost nova_compute[229873]: Opteron_G1
Feb 23 04:27:53 localhost nova_compute[229873]: Opteron_G1-v1
Feb 23 04:27:53 localhost nova_compute[229873]: Opteron_G2
Feb 23 04:27:53 localhost nova_compute[229873]: Opteron_G2-v1
Feb 23 04:27:53 localhost nova_compute[229873]: Opteron_G3
Feb 23 04:27:53 localhost nova_compute[229873]: Opteron_G3-v1
Feb 23 04:27:53 localhost nova_compute[229873]: Opteron_G4
Feb 23 04:27:53 localhost nova_compute[229873]: Opteron_G4-v1
Feb 23 04:27:53 localhost nova_compute[229873]: Opteron_G5
Feb 23 04:27:53 localhost nova_compute[229873]: Opteron_G5-v1
Feb 23 04:27:53 localhost nova_compute[229873]: Penryn
Feb 23 04:27:53 localhost nova_compute[229873]: Penryn-v1
Feb 23 04:27:53 localhost nova_compute[229873]: SandyBridge
Feb 23 04:27:53 localhost nova_compute[229873]: SandyBridge-IBRS
Feb 23 04:27:53 localhost nova_compute[229873]: SandyBridge-v1
Feb 23 04:27:53 localhost nova_compute[229873]: SandyBridge-v2
Feb 23 04:27:53 localhost nova_compute[229873]: SapphireRapids
Feb 23 04:27:53 localhost nova_compute[229873]: SapphireRapids-v1
Feb 23 04:27:53 localhost nova_compute[229873]: SapphireRapids-v2
Feb 23 04:27:53 localhost nova_compute[229873]: SapphireRapids-v3
Feb 23 04:27:53 localhost nova_compute[229873]: SapphireRapids-v4
Feb 23 04:27:53 localhost nova_compute[229873]: SierraForest
Feb 23 04:27:53 localhost nova_compute[229873]: SierraForest-v1
Feb 23 04:27:53 localhost nova_compute[229873]: SierraForest-v2
Feb 23 04:27:53 localhost nova_compute[229873]: SierraForest-v3
Feb 23 04:27:53 localhost nova_compute[229873]: Skylake-Client
Feb 23 04:27:53 localhost nova_compute[229873]: Skylake-Client-IBRS
Feb 23 04:27:53 localhost nova_compute[229873]: Skylake-Client-noTSX-IBRS
Feb 23 04:27:53 localhost nova_compute[229873]: Skylake-Client-v1
Feb 23 04:27:53 localhost nova_compute[229873]: Skylake-Client-v2
Feb 23 04:27:53 localhost nova_compute[229873]: Skylake-Client-v3
Feb 23 04:27:53 localhost nova_compute[229873]: Skylake-Client-v4
Feb 23 04:27:53 localhost nova_compute[229873]: Skylake-Server
Feb 23 04:27:53 localhost nova_compute[229873]: Skylake-Server-IBRS
Feb 23 04:27:53 localhost nova_compute[229873]: Skylake-Server-noTSX-IBRS
Feb 23 04:27:53 localhost nova_compute[229873]: Skylake-Server-v1
Feb 23 04:27:53 localhost nova_compute[229873]: Skylake-Server-v2
Feb 23 04:27:53 localhost nova_compute[229873]: Skylake-Server-v3
Feb 23 04:27:53 localhost nova_compute[229873]: Skylake-Server-v4
Feb 23 04:27:53 localhost nova_compute[229873]: Skylake-Server-v5
Feb 23 04:27:53 localhost nova_compute[229873]: Snowridge
Feb 23 04:27:53 localhost nova_compute[229873]: Snowridge-v1
Feb 23 04:27:53 localhost nova_compute[229873]: Snowridge-v2
Feb 23 04:27:53 localhost nova_compute[229873]: Snowridge-v3
Feb 23 04:27:53 localhost nova_compute[229873]: Snowridge-v4
Feb 23 04:27:53 localhost nova_compute[229873]: Westmere
Feb 23 04:27:53 localhost nova_compute[229873]: Westmere-IBRS
Feb 23 04:27:53 localhost nova_compute[229873]: Westmere-v1
Feb 23 04:27:53 localhost nova_compute[229873]: Westmere-v2
Feb 23 04:27:53 localhost nova_compute[229873]: athlon
Feb 23 04:27:53 localhost nova_compute[229873]: athlon-v1
Feb 23 04:27:53 localhost nova_compute[229873]: core2duo
Feb 23 04:27:53 localhost nova_compute[229873]: core2duo-v1
Feb 23 04:27:53 localhost nova_compute[229873]: coreduo
Feb 23 04:27:53 localhost nova_compute[229873]: coreduo-v1
Feb 23 04:27:53 localhost nova_compute[229873]: kvm32
Feb 23 04:27:53 localhost nova_compute[229873]: kvm32-v1
Feb 23 04:27:53 localhost nova_compute[229873]: kvm64
Feb 23 04:27:53 localhost nova_compute[229873]: kvm64-v1
Feb 23 04:27:53 localhost nova_compute[229873]: n270 Feb 23 
04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: n270-v1 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: pentium Feb 23 04:27:53 localhost nova_compute[229873]: pentium-v1 Feb 23 04:27:53 localhost nova_compute[229873]: pentium2 Feb 23 04:27:53 localhost nova_compute[229873]: pentium2-v1 Feb 23 04:27:53 localhost nova_compute[229873]: pentium3 Feb 23 04:27:53 localhost nova_compute[229873]: pentium3-v1 Feb 23 04:27:53 localhost nova_compute[229873]: phenom Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: phenom-v1 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: qemu32 Feb 23 04:27:53 localhost nova_compute[229873]: qemu32-v1 Feb 23 04:27:53 localhost nova_compute[229873]: qemu64 Feb 23 04:27:53 localhost nova_compute[229873]: qemu64-v1 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: file Feb 23 04:27:53 localhost nova_compute[229873]: anonymous Feb 23 04:27:53 localhost nova_compute[229873]: memfd Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost 
nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: disk Feb 23 04:27:53 localhost nova_compute[229873]: cdrom Feb 23 04:27:53 localhost nova_compute[229873]: floppy Feb 23 04:27:53 localhost nova_compute[229873]: lun Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: ide Feb 23 04:27:53 localhost nova_compute[229873]: fdc Feb 23 04:27:53 localhost nova_compute[229873]: scsi Feb 23 04:27:53 localhost nova_compute[229873]: virtio Feb 23 04:27:53 localhost nova_compute[229873]: usb Feb 23 04:27:53 localhost nova_compute[229873]: sata Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: virtio Feb 23 04:27:53 localhost nova_compute[229873]: virtio-transitional Feb 23 04:27:53 localhost nova_compute[229873]: virtio-non-transitional Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: vnc Feb 23 04:27:53 localhost nova_compute[229873]: egl-headless Feb 23 04:27:53 localhost nova_compute[229873]: dbus Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: subsystem Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: default Feb 23 04:27:53 localhost nova_compute[229873]: mandatory Feb 23 04:27:53 localhost nova_compute[229873]: requisite Feb 23 04:27:53 localhost nova_compute[229873]: optional Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost 
nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: usb Feb 23 04:27:53 localhost nova_compute[229873]: pci Feb 23 04:27:53 localhost nova_compute[229873]: scsi Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: virtio Feb 23 04:27:53 localhost nova_compute[229873]: virtio-transitional Feb 23 04:27:53 localhost nova_compute[229873]: virtio-non-transitional Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: random Feb 23 04:27:53 localhost nova_compute[229873]: egd Feb 23 04:27:53 localhost nova_compute[229873]: builtin Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: path Feb 23 04:27:53 localhost nova_compute[229873]: handle Feb 23 04:27:53 localhost nova_compute[229873]: virtiofs Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: tpm-tis Feb 23 04:27:53 localhost nova_compute[229873]: tpm-crb Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: emulator Feb 23 04:27:53 localhost nova_compute[229873]: external Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: 2.0 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 
04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: usb Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: pty Feb 23 04:27:53 localhost nova_compute[229873]: unix Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: qemu Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: builtin Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: default Feb 23 04:27:53 localhost nova_compute[229873]: passt Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: isa Feb 23 04:27:53 localhost nova_compute[229873]: hyperv Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: null Feb 23 04:27:53 localhost nova_compute[229873]: vc Feb 23 04:27:53 localhost nova_compute[229873]: pty Feb 23 04:27:53 localhost nova_compute[229873]: dev Feb 23 04:27:53 localhost nova_compute[229873]: file Feb 23 
04:27:53 localhost nova_compute[229873]: pipe Feb 23 04:27:53 localhost nova_compute[229873]: stdio Feb 23 04:27:53 localhost nova_compute[229873]: udp Feb 23 04:27:53 localhost nova_compute[229873]: tcp Feb 23 04:27:53 localhost nova_compute[229873]: unix Feb 23 04:27:53 localhost nova_compute[229873]: qemu-vdagent Feb 23 04:27:53 localhost nova_compute[229873]: dbus Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: relaxed Feb 23 04:27:53 localhost nova_compute[229873]: vapic Feb 23 04:27:53 localhost nova_compute[229873]: spinlocks Feb 23 04:27:53 localhost nova_compute[229873]: vpindex Feb 23 04:27:53 localhost nova_compute[229873]: runtime Feb 23 04:27:53 localhost nova_compute[229873]: synic Feb 23 04:27:53 localhost nova_compute[229873]: stimer Feb 23 04:27:53 localhost nova_compute[229873]: reset Feb 23 04:27:53 localhost nova_compute[229873]: vendor_id Feb 23 04:27:53 localhost nova_compute[229873]: frequencies Feb 23 04:27:53 localhost nova_compute[229873]: reenlightenment Feb 23 04:27:53 localhost nova_compute[229873]: tlbflush Feb 23 04:27:53 localhost nova_compute[229873]: ipi Feb 23 04:27:53 localhost nova_compute[229873]: avic Feb 23 04:27:53 localhost 
nova_compute[229873]: emsr_bitmap Feb 23 04:27:53 localhost nova_compute[229873]: xmm_input Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: 4095 Feb 23 04:27:53 localhost nova_compute[229873]: on Feb 23 04:27:53 localhost nova_compute[229873]: off Feb 23 04:27:53 localhost nova_compute[229873]: off Feb 23 04:27:53 localhost nova_compute[229873]: Linux KVM Hv Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 23 04:27:53 localhost nova_compute[229873]: 2026-02-23 09:27:53.920 229877 DEBUG nova.virt.libvirt.host [None req-8f1fc543-9118-46d6-a529-99c91c44e585 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: /usr/libexec/qemu-kvm Feb 23 04:27:53 localhost nova_compute[229873]: kvm Feb 23 04:27:53 localhost nova_compute[229873]: pc-q35-rhel9.8.0 Feb 23 04:27:53 localhost nova_compute[229873]: i686 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: /usr/share/OVMF/OVMF_CODE.secboot.fd Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: rom Feb 23 04:27:53 localhost nova_compute[229873]: pflash Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: yes 
Feb 23 04:27:53 localhost nova_compute[229873]: no Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: no Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: on Feb 23 04:27:53 localhost nova_compute[229873]: off Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: on Feb 23 04:27:53 localhost nova_compute[229873]: off Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Rome Feb 23 04:27:53 localhost nova_compute[229873]: AMD Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost 
nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: 486 Feb 23 04:27:53 localhost nova_compute[229873]: 486-v1 Feb 23 04:27:53 localhost nova_compute[229873]: Broadwell Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Broadwell-IBRS Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Broadwell-noTSX Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Broadwell-noTSX-IBRS Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Broadwell-v1 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 
localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Broadwell-v2 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Broadwell-v3 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Broadwell-v4 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Cascadelake-Server Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost 
nova_compute[229873]: Cascadelake-Server-noTSX Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Cascadelake-Server-v1 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Cascadelake-Server-v2 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost 
nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Cascadelake-Server-v3 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Cascadelake-Server-v4 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Cascadelake-Server-v5 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost 
nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: ClearwaterForest Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost 
nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: ClearwaterForest-v1 Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost nova_compute[229873]: Feb 23 04:27:53 localhost 
Feb 23 04:27:53 localhost nova_compute[229873]: Conroe
Feb 23 04:27:53 localhost nova_compute[229873]: Conroe-v1
Feb 23 04:27:53 localhost nova_compute[229873]: Cooperlake
Feb 23 04:27:53 localhost nova_compute[229873]: Cooperlake-v1
Feb 23 04:27:53 localhost nova_compute[229873]: Cooperlake-v2
Feb 23 04:27:53 localhost nova_compute[229873]: Denverton
Feb 23 04:27:53 localhost nova_compute[229873]: Denverton-v1
Feb 23 04:27:53 localhost nova_compute[229873]: Denverton-v2
Feb 23 04:27:53 localhost nova_compute[229873]: Denverton-v3
Feb 23 04:27:53 localhost nova_compute[229873]: Dhyana
Feb 23 04:27:53 localhost nova_compute[229873]: Dhyana-v1
Feb 23 04:27:53 localhost nova_compute[229873]: Dhyana-v2
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Genoa
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Genoa-v1
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Genoa-v2
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-IBPB
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Milan
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Milan-v1
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Milan-v2
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Milan-v3
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Rome
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Rome-v1
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Rome-v2
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Rome-v3
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Rome-v4
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Rome-v5
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Turin
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-Turin-v1
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-v1
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-v2
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-v3
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-v4
Feb 23 04:27:53 localhost nova_compute[229873]: EPYC-v5
Feb 23 04:27:53 localhost nova_compute[229873]: GraniteRapids
Feb 23 04:27:54 localhost nova_compute[229873]: GraniteRapids-v1
Feb 23 04:27:54 localhost nova_compute[229873]: GraniteRapids-v2
Feb 23 04:27:54 localhost nova_compute[229873]: GraniteRapids-v3
Feb 23 04:27:54 localhost nova_compute[229873]: Haswell
Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-IBRS
Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-noTSX
Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-noTSX-IBRS
Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-v1
Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-v2
Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-v3
Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-v4
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-noTSX
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v1
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v2
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v3
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v4
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v5
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v6
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v7
Feb 23 04:27:54 localhost nova_compute[229873]: IvyBridge
Feb 23 04:27:54 localhost nova_compute[229873]: IvyBridge-IBRS
Feb 23 04:27:54 localhost nova_compute[229873]: IvyBridge-v1
Feb 23 04:27:54 localhost nova_compute[229873]: IvyBridge-v2
Feb 23 04:27:54 localhost nova_compute[229873]: KnightsMill
localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: KnightsMill-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Nehalem Feb 23 04:27:54 localhost nova_compute[229873]: Nehalem-IBRS Feb 23 04:27:54 localhost nova_compute[229873]: Nehalem-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Nehalem-v2 Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G1 Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G1-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G2 Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G2-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G3 Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G3-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G4 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G4-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G5 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 
localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G5-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Penryn Feb 23 04:27:54 localhost nova_compute[229873]: Penryn-v1 Feb 23 04:27:54 localhost nova_compute[229873]: SandyBridge Feb 23 04:27:54 localhost nova_compute[229873]: SandyBridge-IBRS Feb 23 04:27:54 localhost nova_compute[229873]: SandyBridge-v1 Feb 23 04:27:54 localhost nova_compute[229873]: SandyBridge-v2 Feb 23 04:27:54 localhost nova_compute[229873]: SapphireRapids Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: SapphireRapids-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: SapphireRapids-v2 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: SapphireRapids-v3 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: SapphireRapids-v4 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: SierraForest Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: SierraForest-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: SierraForest-v2 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: SierraForest-v3 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Skylake-Client Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Skylake-Client-IBRS Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Skylake-Client-noTSX-IBRS Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Skylake-Client-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Skylake-Client-v2 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Skylake-Client-v3 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Skylake-Client-v4 Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Skylake-Server Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Skylake-Server-IBRS Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Skylake-Server-noTSX-IBRS Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Skylake-Server-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Skylake-Server-v2 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Skylake-Server-v3 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 
Feb 23 04:27:54 localhost nova_compute[229873]: [libvirt domainCapabilities XML dump; element tags were stripped by the log pipeline, leaving only text payloads. Recoverable values follow.]
Feb 23 04:27:54 localhost nova_compute[229873]: CPU models (continued): Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Feb 23 04:27:54 localhost nova_compute[229873]: device capabilities: memory backing sourceType file/anonymous/memfd; disk diskDevice disk/cdrom/floppy/lun, bus fdc/scsi/virtio/usb/sata, model virtio/virtio-transitional/virtio-non-transitional; graphics type vnc/egl-headless/dbus; hostdev mode subsystem, startupPolicy default/mandatory/requisite/optional, subsysType usb/pci/scsi, model virtio/virtio-transitional/virtio-non-transitional; rng backendModel random/egd/builtin; filesystem driverType path/handle/virtiofs; tpm model tpm-tis/tpm-crb, backendModel emulator/external, backendVersion 2.0; redirdev bus usb; channel type pty/unix; crypto qemu/builtin; interface backendType default/passt; panic model isa/hyperv; console/serial type null/vc/pty/dev/file/pipe/stdio/udp/tcp/unix/qemu-vdagent/dbus
Feb 23 04:27:54 localhost nova_compute[229873]: features: hyperv enlightenments relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; other stripped values: 4095, on, off, off, "Linux KVM Hv"
Feb 23 04:27:54 localhost nova_compute[229873]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 23 04:27:54 localhost nova_compute[229873]: 2026-02-23 09:27:53.967 229877 DEBUG nova.virt.libvirt.host [None req-8f1fc543-9118-46d6-a529-99c91c44e585 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 23 04:27:54 localhost nova_compute[229873]: 2026-02-23 09:27:53.974 229877 DEBUG nova.virt.libvirt.host [None req-8f1fc543-9118-46d6-a529-99c91c44e585 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 23 04:27:54 localhost nova_compute[229873]: [domainCapabilities XML dump, tags stripped; recoverable values follow.]
Feb 23 04:27:54 localhost nova_compute[229873]: path /usr/libexec/qemu-kvm; domain kvm; machine pc-i440fx-rhel7.6.0; arch x86_64; os loader /usr/share/OVMF/OVMF_CODE.secboot.fd, loader type rom/pflash, readonly yes/no, secure no; cpu host-model: EPYC-Rome (vendor AMD)
Feb 23 04:27:54 localhost nova_compute[229873]: custom CPU models (truncated at end of this record): 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-Genoa-v2, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, ...
04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Milan-v2 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Milan-v3 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Rome Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: 
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Rome-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Rome-v2 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Rome-v3 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Rome-v4 Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Rome-v5 Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Turin Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 
23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Turin-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 
localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-v1 Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-v2 Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-v3 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-v4 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-v5 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: GraniteRapids Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: GraniteRapids-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 
04:27:54 localhost nova_compute[229873]: GraniteRapids-v2 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 
localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: GraniteRapids-v3 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Haswell Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-IBRS Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-noTSX Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-noTSX-IBRS Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-v2 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-v3 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-v4 Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-noTSX Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 
localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v2 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 
04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v3 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v4 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: 
Feb 23 04:27:54 localhost nova_compute[229873]: [libvirt domain capabilities XML logged here; markup lost in extraction — recoverable enum values follow, grouping labels inferred from the standard domcapabilities schema]
Feb 23 04:27:54 localhost nova_compute[229873]: cpu models (cont.): Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SapphireRapids-v4 SierraForest SierraForest-v1 SierraForest-v2 SierraForest-v3 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4 Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1
Feb 23 04:27:54 localhost nova_compute[229873]: memoryBacking sourceType: file anonymous memfd
Feb 23 04:27:54 localhost nova_compute[229873]: disk diskDevice: disk cdrom floppy lun
Feb 23 04:27:54 localhost nova_compute[229873]: disk bus: ide fdc scsi virtio usb sata
Feb 23 04:27:54 localhost nova_compute[229873]: disk model: virtio virtio-transitional virtio-non-transitional
Feb 23 04:27:54 localhost nova_compute[229873]: graphics type: vnc egl-headless dbus
Feb 23 04:27:54 localhost nova_compute[229873]: hostdev mode: subsystem
Feb 23 04:27:54 localhost nova_compute[229873]: hostdev startupPolicy: default mandatory requisite optional
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: usb Feb 23 04:27:54 localhost nova_compute[229873]: pci Feb 23 04:27:54 localhost nova_compute[229873]: scsi Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: virtio Feb 23 04:27:54 localhost nova_compute[229873]: virtio-transitional Feb 23 04:27:54 localhost nova_compute[229873]: virtio-non-transitional Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: random Feb 23 04:27:54 localhost nova_compute[229873]: egd Feb 23 04:27:54 localhost nova_compute[229873]: builtin Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: path Feb 23 04:27:54 localhost nova_compute[229873]: handle Feb 23 04:27:54 localhost nova_compute[229873]: virtiofs Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: tpm-tis Feb 23 04:27:54 localhost nova_compute[229873]: tpm-crb Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: emulator Feb 23 04:27:54 localhost nova_compute[229873]: external Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: 2.0 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 
04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: usb Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: pty Feb 23 04:27:54 localhost nova_compute[229873]: unix Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: qemu Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: builtin Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: default Feb 23 04:27:54 localhost nova_compute[229873]: passt Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: isa Feb 23 04:27:54 localhost nova_compute[229873]: hyperv Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: null Feb 23 04:27:54 localhost nova_compute[229873]: vc Feb 23 04:27:54 localhost nova_compute[229873]: pty Feb 23 04:27:54 localhost nova_compute[229873]: dev Feb 23 04:27:54 localhost nova_compute[229873]: file Feb 23 
04:27:54 localhost nova_compute[229873]: pipe Feb 23 04:27:54 localhost nova_compute[229873]: stdio Feb 23 04:27:54 localhost nova_compute[229873]: udp Feb 23 04:27:54 localhost nova_compute[229873]: tcp Feb 23 04:27:54 localhost nova_compute[229873]: unix Feb 23 04:27:54 localhost nova_compute[229873]: qemu-vdagent Feb 23 04:27:54 localhost nova_compute[229873]: dbus Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: relaxed Feb 23 04:27:54 localhost nova_compute[229873]: vapic Feb 23 04:27:54 localhost nova_compute[229873]: spinlocks Feb 23 04:27:54 localhost nova_compute[229873]: vpindex Feb 23 04:27:54 localhost nova_compute[229873]: runtime Feb 23 04:27:54 localhost nova_compute[229873]: synic Feb 23 04:27:54 localhost nova_compute[229873]: stimer Feb 23 04:27:54 localhost nova_compute[229873]: reset Feb 23 04:27:54 localhost nova_compute[229873]: vendor_id Feb 23 04:27:54 localhost nova_compute[229873]: frequencies Feb 23 04:27:54 localhost nova_compute[229873]: reenlightenment Feb 23 04:27:54 localhost nova_compute[229873]: tlbflush Feb 23 04:27:54 localhost nova_compute[229873]: ipi Feb 23 04:27:54 localhost nova_compute[229873]: avic Feb 23 04:27:54 localhost 
nova_compute[229873]: emsr_bitmap Feb 23 04:27:54 localhost nova_compute[229873]: xmm_input Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: 4095 Feb 23 04:27:54 localhost nova_compute[229873]: on Feb 23 04:27:54 localhost nova_compute[229873]: off Feb 23 04:27:54 localhost nova_compute[229873]: off Feb 23 04:27:54 localhost nova_compute[229873]: Linux KVM Hv Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 23 04:27:54 localhost nova_compute[229873]: 2026-02-23 09:27:54.034 229877 DEBUG nova.virt.libvirt.host [None req-8f1fc543-9118-46d6-a529-99c91c44e585 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: /usr/libexec/qemu-kvm Feb 23 04:27:54 localhost nova_compute[229873]: kvm Feb 23 04:27:54 localhost nova_compute[229873]: pc-q35-rhel9.8.0 Feb 23 04:27:54 localhost nova_compute[229873]: x86_64 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: efi Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Feb 23 04:27:54 localhost nova_compute[229873]: /usr/share/edk2/ovmf/OVMF_CODE.fd Feb 23 04:27:54 localhost nova_compute[229873]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Feb 23 04:27:54 localhost 
nova_compute[229873]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: rom Feb 23 04:27:54 localhost nova_compute[229873]: pflash Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: yes Feb 23 04:27:54 localhost nova_compute[229873]: no Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: yes Feb 23 04:27:54 localhost nova_compute[229873]: no Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: on Feb 23 04:27:54 localhost nova_compute[229873]: off Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: on Feb 23 04:27:54 localhost nova_compute[229873]: off Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Rome Feb 23 04:27:54 localhost nova_compute[229873]: AMD Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 
04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: 486 Feb 23 04:27:54 localhost nova_compute[229873]: 486-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Broadwell Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Broadwell-IBRS Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Broadwell-noTSX Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Broadwell-noTSX-IBRS Feb 23 
04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Broadwell-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Broadwell-v2 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Broadwell-v3 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Broadwell-v4 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Cascadelake-Server Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 
localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Cascadelake-Server-noTSX Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Cascadelake-Server-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Cascadelake-Server-v2 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Cascadelake-Server-v3 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Cascadelake-Server-v4 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 
04:27:54 localhost nova_compute[229873]: Cascadelake-Server-v5 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: ClearwaterForest Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: 
Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: ClearwaterForest-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 
04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Conroe Feb 23 04:27:54 localhost nova_compute[229873]: Conroe-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Cooperlake Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 
04:27:54 localhost nova_compute[229873]: Cooperlake-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Cooperlake-v2 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Denverton Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: 
Feb 23 04:27:54 localhost nova_compute[229873]: Denverton-v1
Feb 23 04:27:54 localhost nova_compute[229873]: Denverton-v2
Feb 23 04:27:54 localhost nova_compute[229873]: Denverton-v3
Feb 23 04:27:54 localhost nova_compute[229873]: Dhyana
Feb 23 04:27:54 localhost nova_compute[229873]: Dhyana-v1
Feb 23 04:27:54 localhost nova_compute[229873]: Dhyana-v2
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Genoa
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Genoa-v1
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Genoa-v2
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-IBPB
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Milan
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Milan-v1
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Milan-v2
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Milan-v3
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Rome
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Rome-v1
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Rome-v2
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Rome-v3
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Rome-v4
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Rome-v5
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Turin
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-Turin-v1
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-v1
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-v2
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-v3
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-v4
Feb 23 04:27:54 localhost nova_compute[229873]: EPYC-v5
Feb 23 04:27:54 localhost nova_compute[229873]: GraniteRapids
Feb 23 04:27:54 localhost nova_compute[229873]: GraniteRapids-v1
Feb 23 04:27:54 localhost nova_compute[229873]: GraniteRapids-v2
Feb 23 04:27:54 localhost nova_compute[229873]: GraniteRapids-v3
Feb 23 04:27:54 localhost nova_compute[229873]: Haswell
Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-IBRS
Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-noTSX
Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-noTSX-IBRS
Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-v1
Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-v2
Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-v3
Feb 23 04:27:54 localhost nova_compute[229873]: Haswell-v4
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-noTSX
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v1
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v2
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v3
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v4
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v5
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v6
Feb 23 04:27:54 localhost nova_compute[229873]: Icelake-Server-v7
Feb 23 04:27:54 localhost nova_compute[229873]: IvyBridge
Feb 23 04:27:54 localhost nova_compute[229873]: IvyBridge-IBRS
Feb 23 04:27:54 localhost nova_compute[229873]: IvyBridge-v1
Feb 23 04:27:54 localhost nova_compute[229873]: IvyBridge-v2
Feb 23 04:27:54 localhost nova_compute[229873]: KnightsMill
Feb 23 04:27:54 localhost nova_compute[229873]: KnightsMill-v1
Feb 23 04:27:54 localhost nova_compute[229873]: Nehalem
Feb 23 04:27:54 localhost nova_compute[229873]: Nehalem-IBRS
Feb 23 04:27:54 localhost nova_compute[229873]: Nehalem-v1
Feb 23 04:27:54 localhost nova_compute[229873]: Nehalem-v2
Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G1
Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G1-v1
Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G2
Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G2-v1
Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G3
Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G3-v1
Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G4
Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G4-v1
Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G5
Feb 23 04:27:54 localhost nova_compute[229873]: Opteron_G5-v1
Feb 23 04:27:54 localhost nova_compute[229873]: Penryn
Feb 23 04:27:54 localhost nova_compute[229873]: Penryn-v1
Feb 23 04:27:54 localhost nova_compute[229873]: SandyBridge
Feb 23 04:27:54 localhost nova_compute[229873]: SandyBridge-IBRS
Feb 23 04:27:54 localhost nova_compute[229873]: SandyBridge-v1
Feb 23 04:27:54 localhost nova_compute[229873]: SandyBridge-v2
Feb 23 04:27:54 localhost nova_compute[229873]: SapphireRapids
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: SapphireRapids-v1 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: SapphireRapids-v2 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: SapphireRapids-v3 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: SapphireRapids-v4 Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost 
nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: 
SierraForest Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:27:54 localhost nova_compute[229873]: Feb 23 04:32:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18387 DF PROTO=TCP SPT=49576 DPT=9105 SEQ=3372024491 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0126CD830000000001030307) Feb 23 04:32:17 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 23 04:32:17 localhost rsyslogd[758]: imjournal: 2646 messages lost due to rate-limiting (20000 allowed within 600 seconds) Feb 23 04:32:17 localhost systemd[1]: var-lib-containers-storage-overlay-110642599c4a6478b0a235c15bbf13038f23c44c5a0846f1feed95010fbbddf0-merged.mount: Deactivated successfully. Feb 23 04:32:17 localhost systemd[1]: var-lib-containers-storage-overlay-110642599c4a6478b0a235c15bbf13038f23c44c5a0846f1feed95010fbbddf0-merged.mount: Deactivated successfully. Feb 23 04:32:18 localhost systemd[1]: var-lib-containers-storage-overlay-c3c2fee87fe7e8303aaac2829f1b7d26d779101a77d8fd6a9f6bec71602d9a66-merged.mount: Deactivated successfully. Feb 23 04:32:18 localhost systemd[1]: var-lib-containers-storage-overlay-4c416128fe28816a81362614e1a7f9e853b273ba662e28de61a85f5c6446ec2c-merged.mount: Deactivated successfully. 
Feb 23 04:32:19 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:32:19 localhost systemd[1]: var-lib-containers-storage-overlay-c3c2fee87fe7e8303aaac2829f1b7d26d779101a77d8fd6a9f6bec71602d9a66-merged.mount: Deactivated successfully. Feb 23 04:32:20 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:32:20 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:32:20 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:32:20 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:32:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13435 DF PROTO=TCP SPT=59790 DPT=9100 SEQ=1816587480 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0126DC5F0000000001030307) Feb 23 04:32:21 localhost systemd[1]: var-lib-containers-storage-overlay-4c416128fe28816a81362614e1a7f9e853b273ba662e28de61a85f5c6446ec2c-merged.mount: Deactivated successfully. Feb 23 04:32:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:32:21 localhost systemd[1]: var-lib-containers-storage-overlay-777f9dc182dfb66820f6f3bb603d28f3bfe9e16068876bcac539a1ae5bbfe63b-merged.mount: Deactivated successfully. 
Feb 23 04:32:21 localhost systemd[1]: var-lib-containers-storage-overlay-777f9dc182dfb66820f6f3bb603d28f3bfe9e16068876bcac539a1ae5bbfe63b-merged.mount: Deactivated successfully. Feb 23 04:32:21 localhost podman[247786]: 2026-02-23 09:32:21.81264021 +0000 UTC m=+0.086103010 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 04:32:21 localhost podman[247786]: 2026-02-23 09:32:21.823999195 +0000 UTC m=+0.097462015 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:32:22 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:32:22 localhost systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully. Feb 23 04:32:23 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. 
Feb 23 04:32:23 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:32:23 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:32:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13437 DF PROTO=TCP SPT=59790 DPT=9100 SEQ=1816587480 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0126E8830000000001030307) Feb 23 04:32:24 localhost systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully. Feb 23 04:32:24 localhost systemd[1]: var-lib-containers-storage-overlay-d1d734fa5d6fa4a105819fcbe3ae6278295f7115eb830775cb18f638504a55ec-merged.mount: Deactivated successfully. Feb 23 04:32:24 localhost sshd[247809]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:32:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55642 DF PROTO=TCP SPT=54950 DPT=9101 SEQ=3383520852 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0126F1830000000001030307) Feb 23 04:32:26 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 23 04:32:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:32:26 localhost systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully. 
Feb 23 04:32:26 localhost podman[247811]: 2026-02-23 09:32:26.717012936 +0000 UTC m=+0.081099819 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347) Feb 23 04:32:26 localhost podman[247811]: 2026-02-23 09:32:26.731546529 +0000 UTC m=+0.095633392 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 
'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Feb 23 04:32:26 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:32:27 localhost systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully. Feb 23 04:32:28 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 23 04:32:28 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 23 04:32:28 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 23 04:32:29 localhost sshd[247830]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:32:29 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:32:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:32:29 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. 
Feb 23 04:32:29 localhost podman[247832]: 2026-02-23 09:32:29.819843182 +0000 UTC m=+0.072721089 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:32:29 localhost podman[247832]: 2026-02-23 09:32:29.885798839 +0000 UTC m=+0.138676766 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260216, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0) Feb 23 04:32:29 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:32:30 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 23 04:32:30 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:32:30 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. 
Feb 23 04:32:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24103 DF PROTO=TCP SPT=54022 DPT=9105 SEQ=2097557792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012706E50000000001030307) Feb 23 04:32:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24104 DF PROTO=TCP SPT=54022 DPT=9105 SEQ=2097557792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01270B040000000001030307) Feb 23 04:32:33 localhost systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully. Feb 23 04:32:33 localhost systemd[1]: var-lib-containers-storage-overlay-af9b3ccf102020d83d96e7293cd5bee2fb82fa9fff75afdeaef37d9e3f6dc0e2-merged.mount: Deactivated successfully. Feb 23 04:32:33 localhost systemd[1]: var-lib-containers-storage-overlay-af9b3ccf102020d83d96e7293cd5bee2fb82fa9fff75afdeaef37d9e3f6dc0e2-merged.mount: Deactivated successfully. Feb 23 04:32:34 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:32:34 localhost systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully. Feb 23 04:32:34 localhost systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully. 
Feb 23 04:32:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24105 DF PROTO=TCP SPT=54022 DPT=9105 SEQ=2097557792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012713030000000001030307) Feb 23 04:32:35 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:32:35 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:32:36 localhost systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully. Feb 23 04:32:36 localhost systemd[1]: var-lib-containers-storage-overlay-cb9bf8a2ab62539fad44af6776c5b72ca8a5c1fb07925c54016685be21bc3780-merged.mount: Deactivated successfully. Feb 23 04:32:36 localhost systemd[1]: var-lib-containers-storage-overlay-cb9bf8a2ab62539fad44af6776c5b72ca8a5c1fb07925c54016685be21bc3780-merged.mount: Deactivated successfully. Feb 23 04:32:38 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 23 04:32:38 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 23 04:32:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:32:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 04:32:38 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 23 04:32:38 localhost podman[247857]: 2026-02-23 09:32:38.677110782 +0000 UTC m=+0.088039755 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260216) Feb 23 04:32:38 localhost podman[247857]: 2026-02-23 09:32:38.684859703 +0000 UTC m=+0.095788646 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Feb 23 04:32:38 localhost podman[247858]: 2026-02-23 09:32:38.778759882 +0000 UTC m=+0.188272243 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:32:38 localhost podman[247858]: 2026-02-23 09:32:38.789945696 +0000 UTC m=+0.199458117 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 
Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 23 04:32:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24106 DF PROTO=TCP SPT=54022 DPT=9105 SEQ=2097557792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012722C30000000001030307) Feb 23 04:32:40 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 23 04:32:40 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 23 04:32:40 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 23 04:32:40 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:32:40 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:32:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33203 DF PROTO=TCP SPT=58886 DPT=9101 SEQ=2783168609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01272B510000000001030307) Feb 23 04:32:41 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:32:41 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. 
Feb 23 04:32:41 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 23 04:32:42 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:32:42 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:32:43 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:32:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33205 DF PROTO=TCP SPT=58886 DPT=9101 SEQ=2783168609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012737430000000001030307) Feb 23 04:32:45 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 23 04:32:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:32:45 localhost systemd[1]: var-lib-containers-storage-overlay-a6f754c62077bf37caa8ac647e3f2dd870b797112a74bfb7c91c34f0be7af204-merged.mount: Deactivated successfully. 
Feb 23 04:32:45 localhost podman[248020]: 2026-02-23 09:32:45.326528425 +0000 UTC m=+0.082558263 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:32:45 localhost podman[248020]: 2026-02-23 09:32:45.364825116 +0000 UTC m=+0.120854964 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 23 04:32:45 localhost podman[248020]: unhealthy Feb 23 04:32:45 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:32:45 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Failed with result 'exit-code'. Feb 23 04:32:46 localhost systemd[1]: var-lib-containers-storage-overlay-e0c81d46f937f1f84faf68fb71f862e4bd868921a7d16384d44790308d98719f-merged.mount: Deactivated successfully. Feb 23 04:32:46 localhost systemd[1]: var-lib-containers-storage-overlay-8194fb55613f783d5a49e05926ed565eee5321a07b6e20f485946b5c4f31cab4-merged.mount: Deactivated successfully. Feb 23 04:32:46 localhost systemd[1]: var-lib-containers-storage-overlay-e0c81d46f937f1f84faf68fb71f862e4bd868921a7d16384d44790308d98719f-merged.mount: Deactivated successfully. Feb 23 04:32:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2651 DF PROTO=TCP SPT=45518 DPT=9882 SEQ=4240976758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012741830000000001030307) Feb 23 04:32:47 localhost systemd[1]: var-lib-containers-storage-overlay-e0c81d46f937f1f84faf68fb71f862e4bd868921a7d16384d44790308d98719f-merged.mount: Deactivated successfully. Feb 23 04:32:47 localhost systemd[1]: var-lib-containers-storage-overlay-8194fb55613f783d5a49e05926ed565eee5321a07b6e20f485946b5c4f31cab4-merged.mount: Deactivated successfully. 
Feb 23 04:32:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:32:48.286 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:32:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:32:48.287 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:32:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:32:48.287 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:32:49 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. Feb 23 04:32:49 localhost systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully. Feb 23 04:32:49 localhost systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully. Feb 23 04:32:50 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 23 04:32:50 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. 
Feb 23 04:32:50 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. Feb 23 04:32:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8690 DF PROTO=TCP SPT=36612 DPT=9100 SEQ=3521288889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0127518F0000000001030307) Feb 23 04:32:52 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:32:52 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 23 04:32:52 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 23 04:32:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. 
Feb 23 04:32:52 localhost podman[248061]: 2026-02-23 09:32:52.711389411 +0000 UTC m=+0.089316433 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:32:52 localhost podman[248061]: 2026-02-23 09:32:52.718284496 +0000 UTC m=+0.096211508 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', 
'--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:32:52 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:32:52 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:32:52 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:32:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13440 DF PROTO=TCP SPT=59790 DPT=9100 SEQ=1816587480 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012759830000000001030307) Feb 23 04:32:53 localhost nova_compute[229873]: 2026-02-23 09:32:53.181 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:32:53 localhost nova_compute[229873]: 2026-02-23 09:32:53.181 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:32:53 localhost nova_compute[229873]: 2026-02-23 09:32:53.181 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 23 04:32:53 localhost nova_compute[229873]: 2026-02-23 09:32:53.212 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 23 04:32:53 localhost nova_compute[229873]: 2026-02-23 09:32:53.213 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:32:53 localhost nova_compute[229873]: 2026-02-23 09:32:53.213 229877 DEBUG 
nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 23 04:32:53 localhost nova_compute[229873]: 2026-02-23 09:32:53.240 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:32:53 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:32:54 localhost nova_compute[229873]: 2026-02-23 09:32:54.253 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:32:54 localhost nova_compute[229873]: 2026-02-23 09:32:54.254 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:32:54 localhost systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully. Feb 23 04:32:54 localhost systemd[1]: var-lib-containers-storage-overlay-9d9e45d9514da84d6daa9fa4a7ce739f1d3aea192977320b7cd282ec5c552f31-merged.mount: Deactivated successfully. 
Feb 23 04:32:54 localhost nova_compute[229873]: 2026-02-23 09:32:54.981 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:32:54 localhost nova_compute[229873]: 2026-02-23 09:32:54.981 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:32:54 localhost nova_compute[229873]: 2026-02-23 09:32:54.981 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:32:54 localhost nova_compute[229873]: 2026-02-23 09:32:54.981 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:32:54 localhost nova_compute[229873]: 2026-02-23 09:32:54.981 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:32:55 localhost nova_compute[229873]: 2026-02-23 09:32:55.446 229877 DEBUG oslo_concurrency.processutils [None 
req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:32:55 localhost nova_compute[229873]: 2026-02-23 09:32:55.653 229877 WARNING nova.virt.libvirt.driver [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:32:55 localhost nova_compute[229873]: 2026-02-23 09:32:55.656 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=13176MB free_disk=41.83688735961914GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:32:55 localhost nova_compute[229873]: 2026-02-23 09:32:55.656 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:32:55 localhost nova_compute[229873]: 2026-02-23 09:32:55.657 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:32:55 localhost systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully. Feb 23 04:32:55 localhost systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully. 
Feb 23 04:32:55 localhost nova_compute[229873]: 2026-02-23 09:32:55.807 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:32:55 localhost nova_compute[229873]: 2026-02-23 09:32:55.808 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:32:55 localhost nova_compute[229873]: 2026-02-23 09:32:55.825 229877 DEBUG nova.scheduler.client.report [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Refreshing inventories for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 23 04:32:55 localhost nova_compute[229873]: 2026-02-23 09:32:55.932 229877 DEBUG nova.scheduler.client.report [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Updating ProviderTree inventory for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 23 04:32:55 localhost nova_compute[229873]: 2026-02-23 09:32:55.933 229877 DEBUG nova.compute.provider_tree [None 
req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Updating inventory in ProviderTree for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 23 04:32:55 localhost nova_compute[229873]: 2026-02-23 09:32:55.951 229877 DEBUG nova.scheduler.client.report [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Refreshing aggregate associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 23 04:32:55 localhost nova_compute[229873]: 2026-02-23 09:32:55.980 229877 DEBUG nova.scheduler.client.report [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Refreshing trait associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, traits: 
HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI2,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 23 04:32:56 localhost nova_compute[229873]: 2026-02-23 09:32:55.999 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.086 12 DEBUG ceilometer.polling.manager [-] Skip 
pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.086 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.087 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.087 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.087 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.087 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources 
found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 
23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 
2026-02-23 09:32:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:32:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:32:56 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:32:56 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. 
Feb 23 04:32:56 localhost nova_compute[229873]: 2026-02-23 09:32:56.430 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:32:56 localhost nova_compute[229873]: 2026-02-23 09:32:56.435 229877 DEBUG nova.compute.provider_tree [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:32:56 localhost nova_compute[229873]: 2026-02-23 09:32:56.450 229877 DEBUG nova.scheduler.client.report [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:32:56 localhost nova_compute[229873]: 2026-02-23 09:32:56.453 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:32:56 localhost nova_compute[229873]: 2026-02-23 09:32:56.453 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" 
"released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:32:56 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:32:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33207 DF PROTO=TCP SPT=58886 DPT=9101 SEQ=2783168609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012767830000000001030307) Feb 23 04:32:56 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:32:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:32:57 localhost podman[248128]: 2026-02-23 09:32:57.003180152 +0000 UTC m=+0.078431519 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
release=1770267347) Feb 23 04:32:57 localhost podman[248128]: 2026-02-23 09:32:57.047805512 +0000 UTC m=+0.123056879 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, 
io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 23 04:32:57 localhost nova_compute[229873]: 2026-02-23 09:32:57.382 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:32:57 localhost nova_compute[229873]: 2026-02-23 09:32:57.382 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:32:57 localhost nova_compute[229873]: 2026-02-23 09:32:57.382 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:32:57 localhost nova_compute[229873]: 2026-02-23 09:32:57.383 229877 
DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:32:57 localhost nova_compute[229873]: 2026-02-23 09:32:57.403 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:32:57 localhost nova_compute[229873]: 2026-02-23 09:32:57.404 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:32:57 localhost nova_compute[229873]: 2026-02-23 09:32:57.404 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:32:57 localhost nova_compute[229873]: 2026-02-23 09:32:57.405 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:32:57 localhost nova_compute[229873]: 2026-02-23 09:32:57.405 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:32:57 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 04:32:57 localhost systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully. Feb 23 04:32:57 localhost systemd[1]: var-lib-containers-storage-overlay-b41f8132954705ed575d67743569b519bd384ba1cd0a7dd6be2a8b88add30f0f-merged.mount: Deactivated successfully. Feb 23 04:32:58 localhost nova_compute[229873]: 2026-02-23 09:32:58.182 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:32:59 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. Feb 23 04:32:59 localhost systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully. Feb 23 04:33:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:33:00 localhost podman[248240]: 2026-02-23 09:33:00.415529276 +0000 UTC m=+0.078073809 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 23 04:33:00 localhost podman[248240]: 2026-02-23 09:33:00.456687443 +0000 UTC m=+0.119231976 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 23 04:33:00 localhost python3.9[248241]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:00 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 23 04:33:00 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. 
Feb 23 04:33:00 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:33:01 localhost python3.9[248376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:01 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:33:01 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. 
Feb 23 04:33:01 localhost python3.9[248464]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771839180.916294-3716-193062316996472/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1693 DF PROTO=TCP SPT=37914 DPT=9105 SEQ=329543699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01277C130000000001030307) Feb 23 04:33:02 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:33:02 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. 
Feb 23 04:33:02 localhost python3.9[248574]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1694 DF PROTO=TCP SPT=37914 DPT=9105 SEQ=329543699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012780030000000001030307) Feb 23 04:33:03 localhost python3.9[248684]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:04 localhost python3.9[248741]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:04 localhost systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully. Feb 23 04:33:04 localhost systemd[1]: var-lib-containers-storage-overlay-131bb1452fdde8cff761e538586587f172ee767b4fb401cc162049cabeab656b-merged.mount: Deactivated successfully. 
Feb 23 04:33:04 localhost python3.9[248851]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:04 localhost systemd[1]: var-lib-containers-storage-overlay-f3afd1cf5e6198a170887a65c5f10af446afae7f60b1c2348209fc3be458dddf-merged.mount: Deactivated successfully. Feb 23 04:33:05 localhost systemd[1]: var-lib-containers-storage-overlay-9ebf51f80a46e835820a271b66c56bf3153d0ad4226e954d9a4e5952244e92d3-merged.mount: Deactivated successfully. Feb 23 04:33:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1695 DF PROTO=TCP SPT=37914 DPT=9105 SEQ=329543699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012788030000000001030307) Feb 23 04:33:05 localhost python3.9[248908]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ibfm8m5h recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:05 localhost systemd[1]: var-lib-containers-storage-overlay-882df85a0cf43e46bc799aafd5ff81035654b304c2fef5dbd26c9dd0c2e9fcc3-merged.mount: Deactivated successfully. Feb 23 04:33:05 localhost systemd[1]: var-lib-containers-storage-overlay-f3afd1cf5e6198a170887a65c5f10af446afae7f60b1c2348209fc3be458dddf-merged.mount: Deactivated successfully. 
Feb 23 04:33:06 localhost systemd[1]: var-lib-containers-storage-overlay-d9f14c75a7289cf010d2e5175c554193dba109f864fe39fc418f3bc5b90efe9d-merged.mount: Deactivated successfully. Feb 23 04:33:06 localhost systemd[1]: var-lib-containers-storage-overlay-882df85a0cf43e46bc799aafd5ff81035654b304c2fef5dbd26c9dd0c2e9fcc3-merged.mount: Deactivated successfully. Feb 23 04:33:06 localhost systemd[1]: var-lib-containers-storage-overlay-9ebf51f80a46e835820a271b66c56bf3153d0ad4226e954d9a4e5952244e92d3-merged.mount: Deactivated successfully. Feb 23 04:33:06 localhost python3.9[249018]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:07 localhost python3.9[249075]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:08 localhost python3.9[249185]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:33:09 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. 
Feb 23 04:33:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1696 DF PROTO=TCP SPT=37914 DPT=9105 SEQ=329543699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012797C30000000001030307) Feb 23 04:33:09 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 23 04:33:09 localhost python3[249296]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Feb 23 04:33:10 localhost python3.9[249406]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:33:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:33:10 localhost systemd[1]: tmp-crun.Ap9Z6s.mount: Deactivated successfully. 
Feb 23 04:33:10 localhost podman[249464]: 2026-02-23 09:33:10.763563286 +0000 UTC m=+0.084298384 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 23 04:33:10 localhost 
podman[249464]: 2026-02-23 09:33:10.793276691 +0000 UTC m=+0.114011779 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Feb 23 04:33:10 localhost podman[249465]: 2026-02-23 09:33:10.816269307 +0000 UTC 
m=+0.133613094 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 23 04:33:10 localhost podman[249465]: 2026-02-23 
09:33:10.850058124 +0000 UTC m=+0.167401941 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.43.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 23 04:33:10 localhost systemd[1]: 
var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 23 04:33:10 localhost python3.9[249463]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:11 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 23 04:33:11 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:33:11 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. 
Feb 23 04:33:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43782 DF PROTO=TCP SPT=39254 DPT=9101 SEQ=62889346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0127A0800000000001030307) Feb 23 04:33:11 localhost python3.9[249610]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:11 localhost python3.9[249667]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:12 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:33:12 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. 
Feb 23 04:33:12 localhost python3.9[249777]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:12 localhost sshd[249780]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:33:13 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:33:13 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:33:13 localhost python3.9[249836]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:13 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. 
Feb 23 04:33:13 localhost python3.9[249946]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42515 DF PROTO=TCP SPT=40470 DPT=9102 SEQ=285429290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0127AB830000000001030307) Feb 23 04:33:14 localhost python3.9[250003]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:15 localhost python3.9[250113]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:15 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 23 04:33:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:33:15 localhost systemd[1]: var-lib-containers-storage-overlay-f517692be756bbbec8b52ba00fac8538d0b4cc258170a641ad09cd15a7f1f00b-merged.mount: Deactivated successfully. Feb 23 04:33:15 localhost systemd[1]: tmp-crun.wUUOEk.mount: Deactivated successfully. 
Feb 23 04:33:15 localhost podman[250182]: 2026-02-23 09:33:15.691977194 +0000 UTC m=+0.094982893 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 23 04:33:15 localhost podman[250182]: 2026-02-23 09:33:15.700249951 +0000 UTC m=+0.103255680 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:33:15 localhost podman[250182]: unhealthy Feb 23 04:33:15 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:33:15 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Failed with result 'exit-code'. Feb 23 04:33:15 localhost python3.9[250215]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771839194.5726767-4090-224753437291325/.source.nft follow=False _original_basename=ruleset.j2 checksum=953266ca5f7d82d2777a0a437bd7feceb9259ee8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:16 localhost python3.9[250335]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1697 DF PROTO=TCP SPT=37914 DPT=9105 SEQ=329543699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0127B7840000000001030307) Feb 23 04:33:17 localhost python3.9[250445]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat 
/etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:33:17 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 23 04:33:18 localhost systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully. Feb 23 04:33:19 localhost python3.9[250558]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:20 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 23 04:33:20 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. 
Feb 23 04:33:20 localhost python3.9[250668]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:33:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24672 DF PROTO=TCP SPT=37544 DPT=9100 SEQ=2228831463 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0127C6BF0000000001030307) Feb 23 04:33:21 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:33:21 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 23 04:33:21 localhost python3.9[250779]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:33:22 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:33:22 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. 
Feb 23 04:33:22 localhost python3.9[250891]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:33:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:33:23 localhost podman[251005]: 2026-02-23 09:33:23.129378777 +0000 UTC m=+0.060580668 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': 
['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 04:33:23 localhost podman[251005]: 2026-02-23 09:33:23.165587046 +0000 UTC m=+0.096788967 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:33:23 localhost python3.9[251004]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed 
state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:23 localhost systemd[1]: session-55.scope: Deactivated successfully. Feb 23 04:33:23 localhost systemd[1]: session-55.scope: Consumed 1min 32.733s CPU time. Feb 23 04:33:23 localhost systemd-logind[759]: Session 55 logged out. Waiting for processes to exit. Feb 23 04:33:23 localhost systemd-logind[759]: Removed session 55. Feb 23 04:33:25 localhost systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully. Feb 23 04:33:25 localhost systemd[1]: var-lib-containers-storage-overlay-2b64c7c11c525f0f08b1c3a64b5cc69dc38aba5ef31b5a322497f8cdde7fe737-merged.mount: Deactivated successfully. Feb 23 04:33:25 localhost systemd[1]: var-lib-containers-storage-overlay-2b64c7c11c525f0f08b1c3a64b5cc69dc38aba5ef31b5a322497f8cdde7fe737-merged.mount: Deactivated successfully. Feb 23 04:33:25 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:33:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23846 DF PROTO=TCP SPT=56888 DPT=9102 SEQ=2509792483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0127DF840000000001030307) Feb 23 04:33:27 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 23 04:33:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:33:27 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 23 04:33:27 localhost podman[251046]: 2026-02-23 09:33:27.994106037 +0000 UTC m=+0.079587564 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, io.buildah.version=1.33.7, distribution-scope=public, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, version=9.7, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9) Feb 23 04:33:28 localhost podman[251046]: 2026-02-23 09:33:28.008753054 +0000 UTC m=+0.094234541 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, name=ubi9/ubi-minimal, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, release=1770267347, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, io.buildah.version=1.33.7, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, 
maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git) Feb 23 04:33:28 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 23 04:33:28 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:33:28 localhost sshd[251065]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:33:28 localhost systemd-logind[759]: New session 56 of user zuul. Feb 23 04:33:28 localhost systemd[1]: Started Session 56 of User zuul. Feb 23 04:33:29 localhost python3.9[251178]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config/container-startup-config/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:33:29 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 23 04:33:29 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 23 04:33:30 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. 
Feb 23 04:33:30 localhost python3.9[251288]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:33:30 localhost python3.9[251398]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:33:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:33:31 localhost podman[251416]: 2026-02-23 09:33:30.995381719 +0000 UTC m=+0.070478367 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 23 04:33:31 localhost podman[251416]: 2026-02-23 09:33:31.038793903 +0000 UTC m=+0.113890611 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:33:31 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:33:31 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 23 04:33:31 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:33:31 localhost python3.9[251530]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-sriov-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:31 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 23 04:33:31 localhost openstack_network_exporter[243519]: ERROR 09:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:33:31 localhost openstack_network_exporter[243519]: Feb 23 04:33:32 localhost openstack_network_exporter[243519]: ERROR 09:33:32 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:33:32 localhost openstack_network_exporter[243519]: Feb 23 04:33:32 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:33:32 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:33:32 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. 
Feb 23 04:33:32 localhost python3.9[251621]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-sriov-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771839211.1634984-100-237314969473763/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:33:33 localhost python3.9[251729]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-sriov-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:33 localhost python3.9[251815]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-sriov-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771839212.6111965-100-152186796907191/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:33:34 localhost python3.9[251923]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-sriov-agent/01-neutron-sriov-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:34 localhost python3.9[252009]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-sriov-agent/01-neutron-sriov-agent.conf mode=0644 setype=container_file_t 
src=/home/zuul/.ansible/tmp/ansible-tmp-1771839213.6464794-100-20385077660274/.source.conf follow=False _original_basename=neutron-sriov-agent.conf.j2 checksum=e83bb1148d696787bf06807967dee221fc53506e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:33:34 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 23 04:33:34 localhost systemd[1]: var-lib-containers-storage-overlay-2b1624cdf69c40b3be5ac9103e8da1e64c78a26cb80bd614a05a7389840fddb5-merged.mount: Deactivated successfully. Feb 23 04:33:35 localhost python3.9[252117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-sriov-agent/10-neutron-sriov.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:35 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:33:35 localhost systemd[1]: var-lib-containers-storage-overlay-ac04412f5c5a43e8c61c2b8d6c1acf66f67fc19f0d028526d9bdbd1ed0352faf-merged.mount: Deactivated successfully. Feb 23 04:33:35 localhost systemd[1]: var-lib-containers-storage-overlay-ac04412f5c5a43e8c61c2b8d6c1acf66f67fc19f0d028526d9bdbd1ed0352faf-merged.mount: Deactivated successfully. 
Feb 23 04:33:36 localhost python3.9[252203]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-sriov-agent/10-neutron-sriov.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771839215.4082503-274-21679667990715/.source.conf _original_basename=10-neutron-sriov.conf follow=False checksum=d10c6f671263070bdc94fed977552f121764373c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:33:36 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:33:36 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:33:37 localhost python3.9[252311]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:33:37 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. 
Feb 23 04:33:37 localhost python3.9[252423]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:33:37 localhost systemd[1]: var-lib-containers-storage-overlay-ac04412f5c5a43e8c61c2b8d6c1acf66f67fc19f0d028526d9bdbd1ed0352faf-merged.mount: Deactivated successfully. Feb 23 04:33:37 localhost systemd[1]: var-lib-containers-storage-overlay-7cd0911b56d0bc006c9d6e08afe1eddab8140e9e2da5f495d096c1e34b92a334-merged.mount: Deactivated successfully. Feb 23 04:33:38 localhost python3.9[252533]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:39 localhost python3.9[252590]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:33:39 localhost python3.9[252700]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:40 localhost systemd[1]: 
var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 23 04:33:40 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 23 04:33:40 localhost python3.9[252757]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:33:40 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 23 04:33:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:33:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:33:41 localhost systemd[1]: tmp-crun.2OOUUb.mount: Deactivated successfully. 
Feb 23 04:33:41 localhost podman[252868]: 2026-02-23 09:33:41.818358023 +0000 UTC m=+0.099628389 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent) Feb 23 04:33:41 localhost 
podman[252868]: 2026-02-23 09:33:41.823142791 +0000 UTC m=+0.104413187 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:33:41 localhost python3.9[252867]: ansible-ansible.builtin.file Invoked with 
mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:41 localhost podman[252869]: 2026-02-23 09:33:41.911469903 +0000 UTC m=+0.189909397 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:33:41 localhost podman[252869]: 2026-02-23 09:33:41.948808432 +0000 UTC m=+0.227247926 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 23 04:33:42 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 23 04:33:42 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 23 04:33:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8916 DF PROTO=TCP SPT=43010 DPT=9102 SEQ=2316128952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012819660000000001030307) Feb 23 04:33:42 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 23 04:33:42 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:33:42 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. 
Feb 23 04:33:42 localhost python3.9[253012]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:43 localhost python3.9[253070]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8917 DF PROTO=TCP SPT=43010 DPT=9102 SEQ=2316128952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01281D830000000001030307) Feb 23 04:33:43 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:33:43 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 23 04:33:43 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. 
Feb 23 04:33:43 localhost python3.9[253180]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23847 DF PROTO=TCP SPT=56888 DPT=9102 SEQ=2509792483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01281F840000000001030307) Feb 23 04:33:44 localhost python3.9[253237]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:44 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:33:44 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:33:44 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. 
Feb 23 04:33:45 localhost python3.9[253347]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:33:45 localhost systemd[1]: Reloading. Feb 23 04:33:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8918 DF PROTO=TCP SPT=43010 DPT=9102 SEQ=2316128952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012825840000000001030307) Feb 23 04:33:45 localhost systemd-sysv-generator[253374]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:33:45 localhost systemd-rc-local-generator[253369]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:33:45 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:33:45 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:33:45 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:33:45 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:33:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:33:45 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:33:45 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:33:45 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:33:45 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:33:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:33:46 localhost podman[253424]: 2026-02-23 09:33:46.009456946 +0000 UTC m=+0.079514467 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:33:46 localhost podman[253424]: 2026-02-23 09:33:46.017781477 +0000 UTC m=+0.087839048 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 23 04:33:46 localhost podman[253424]: unhealthy Feb 23 04:33:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42516 DF PROTO=TCP SPT=40470 DPT=9102 SEQ=285429290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012829830000000001030307) Feb 23 04:33:46 localhost python3.9[253517]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:46 localhost python3.9[253574]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None 
access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:47 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 23 04:33:47 localhost systemd[1]: var-lib-containers-storage-overlay-ad754dc0ccbca5f94297c9f7b89ba922c9a3f604bb7e57f1a8f4658470b198ae-merged.mount: Deactivated successfully. Feb 23 04:33:47 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Main process exited, code=exited, status=1/FAILURE Feb 23 04:33:47 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Failed with result 'exit-code'. Feb 23 04:33:47 localhost python3.9[253684]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:48 localhost python3.9[253742]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:33:48.286 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:33:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:33:48.287 161842 DEBUG 
oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:33:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:33:48.288 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:33:48 localhost systemd[1]: var-lib-containers-storage-overlay-4e7c8cf8be5e28661f08c7ae9ca08b0a811b1f296a0663a493871b4299da2d4e-merged.mount: Deactivated successfully. Feb 23 04:33:48 localhost systemd[1]: var-lib-containers-storage-overlay-1962cc6363cc9ac3ab3c2a513bdaec43a309cd0406c08aa2e9112851ab244998-merged.mount: Deactivated successfully. Feb 23 04:33:48 localhost systemd[1]: var-lib-containers-storage-overlay-1962cc6363cc9ac3ab3c2a513bdaec43a309cd0406c08aa2e9112851ab244998-merged.mount: Deactivated successfully. Feb 23 04:33:48 localhost python3.9[253900]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:33:48 localhost systemd[1]: Reloading. Feb 23 04:33:48 localhost systemd-rc-local-generator[253940]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:33:48 localhost systemd-sysv-generator[253946]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 04:33:49 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:33:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:33:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:33:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:33:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:33:49 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:33:49 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:33:49 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:33:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:33:49 localhost systemd[1]: Starting Create netns directory... Feb 23 04:33:49 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 23 04:33:49 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 23 04:33:49 localhost systemd[1]: Finished Create netns directory. 
Feb 23 04:33:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8919 DF PROTO=TCP SPT=43010 DPT=9102 SEQ=2316128952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012835440000000001030307) Feb 23 04:33:49 localhost systemd[1]: var-lib-containers-storage-overlay-0455f1f13172510bfb03afa514ad1dc5f28a2039a4c0ae85e44e0cde63814ca4-merged.mount: Deactivated successfully. Feb 23 04:33:49 localhost systemd[1]: var-lib-containers-storage-overlay-4e7c8cf8be5e28661f08c7ae9ca08b0a811b1f296a0663a493871b4299da2d4e-merged.mount: Deactivated successfully. Feb 23 04:33:50 localhost python3.9[254070]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:50 localhost systemd[1]: var-lib-containers-storage-overlay-882df85a0cf43e46bc799aafd5ff81035654b304c2fef5dbd26c9dd0c2e9fcc3-merged.mount: Deactivated successfully. Feb 23 04:33:50 localhost systemd[1]: var-lib-containers-storage-overlay-0455f1f13172510bfb03afa514ad1dc5f28a2039a4c0ae85e44e0cde63814ca4-merged.mount: Deactivated successfully. Feb 23 04:33:50 localhost systemd[1]: var-lib-containers-storage-overlay-0455f1f13172510bfb03afa514ad1dc5f28a2039a4c0ae85e44e0cde63814ca4-merged.mount: Deactivated successfully. 
Feb 23 04:33:51 localhost python3.9[254180]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:33:51 localhost systemd[1]: var-lib-containers-storage-overlay-d9f14c75a7289cf010d2e5175c554193dba109f864fe39fc418f3bc5b90efe9d-merged.mount: Deactivated successfully. Feb 23 04:33:51 localhost systemd[1]: var-lib-containers-storage-overlay-882df85a0cf43e46bc799aafd5ff81035654b304c2fef5dbd26c9dd0c2e9fcc3-merged.mount: Deactivated successfully. Feb 23 04:33:51 localhost systemd[1]: var-lib-containers-storage-overlay-882df85a0cf43e46bc799aafd5ff81035654b304c2fef5dbd26c9dd0c2e9fcc3-merged.mount: Deactivated successfully. Feb 23 04:33:52 localhost python3.9[254290]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_sriov_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:33:52 localhost systemd[1]: var-lib-containers-storage-overlay-1962cc6363cc9ac3ab3c2a513bdaec43a309cd0406c08aa2e9112851ab244998-merged.mount: Deactivated successfully. 
Feb 23 04:33:53 localhost python3.9[254396]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_sriov_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771839232.264719-709-224552253274645/.source.json _original_basename=.77kb6_ld follow=False checksum=a32073fdba4733b9ffe872cfb91708eff83a585a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:53 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:33:53 localhost systemd[1]: var-lib-containers-storage-overlay-e812fa34defdc78ec5fb2b77011468829f7a2881cc57f804b1a422dc9e19278a-merged.mount: Deactivated successfully. Feb 23 04:33:53 localhost systemd[1]: var-lib-containers-storage-overlay-e812fa34defdc78ec5fb2b77011468829f7a2881cc57f804b1a422dc9e19278a-merged.mount: Deactivated successfully. 
Feb 23 04:33:54 localhost nova_compute[229873]: 2026-02-23 09:33:54.181 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:33:54 localhost nova_compute[229873]: 2026-02-23 09:33:54.182 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:33:54 localhost nova_compute[229873]: 2026-02-23 09:33:54.182 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:33:54 localhost python3.9[254504]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:33:54 localhost nova_compute[229873]: 2026-02-23 09:33:54.603 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:33:54 localhost nova_compute[229873]: 2026-02-23 09:33:54.603 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - 
- - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:33:54 localhost nova_compute[229873]: 2026-02-23 09:33:54.604 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:33:54 localhost nova_compute[229873]: 2026-02-23 09:33:54.604 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:33:54 localhost nova_compute[229873]: 2026-02-23 09:33:54.604 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:33:54 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 23 04:33:54 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 23 04:33:54 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. 
Feb 23 04:33:55 localhost nova_compute[229873]: 2026-02-23 09:33:55.081 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:33:55 localhost nova_compute[229873]: 2026-02-23 09:33:55.267 229877 WARNING nova.virt.libvirt.driver [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:33:55 localhost nova_compute[229873]: 2026-02-23 09:33:55.268 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=13139MB free_disk=41.83688735961914GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, 
{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:33:55 localhost nova_compute[229873]: 2026-02-23 09:33:55.269 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:33:55 localhost nova_compute[229873]: 2026-02-23 09:33:55.269 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:33:55 localhost nova_compute[229873]: 2026-02-23 09:33:55.335 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:33:55 localhost nova_compute[229873]: 2026-02-23 09:33:55.336 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:33:55 localhost nova_compute[229873]: 2026-02-23 09:33:55.362 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:33:55 localhost podman[241086]: @ - - [23/Feb/2026:09:29:34 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 141014 "" "Go-http-client/1.1" Feb 23 04:33:55 localhost podman_exporter[241075]: ts=2026-02-23T09:33:55.508Z caller=exporter.go:96 level=info msg="Listening on" address=:9882 Feb 23 04:33:55 localhost podman_exporter[241075]: ts=2026-02-23T09:33:55.509Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882 Feb 23 04:33:55 localhost podman_exporter[241075]: ts=2026-02-23T09:33:55.509Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9882 Feb 23 04:33:55 localhost systemd[1]: var-lib-containers-storage-overlay-60721aa364e403c543e527da873a8fedfe8e0f7c00041191b7b665e76b6a089a-merged.mount: Deactivated successfully. 
Feb 23 04:33:55 localhost nova_compute[229873]: 2026-02-23 09:33:55.796 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:33:55 localhost nova_compute[229873]: 2026-02-23 09:33:55.802 229877 DEBUG nova.compute.provider_tree [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:33:55 localhost nova_compute[229873]: 2026-02-23 09:33:55.823 229877 DEBUG nova.scheduler.client.report [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:33:55 localhost nova_compute[229873]: 2026-02-23 09:33:55.826 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:33:55 localhost nova_compute[229873]: 2026-02-23 09:33:55.826 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" 
"released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:33:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:33:55 localhost podman[254762]: 2026-02-23 09:33:55.975051831 +0000 UTC m=+0.047861394 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:33:55 localhost podman[254762]: 2026-02-23 09:33:55.984845703 +0000 UTC 
m=+0.057655296 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:33:55 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:33:56 localhost python3.9[254877]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_pattern=*.json debug=False Feb 23 04:33:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8920 DF PROTO=TCP SPT=43010 DPT=9102 SEQ=2316128952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012855840000000001030307) Feb 23 04:33:57 localhost sshd[254988]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:33:57 localhost python3.9[254987]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 23 04:33:57 localhost nova_compute[229873]: 2026-02-23 09:33:57.826 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:33:57 localhost nova_compute[229873]: 2026-02-23 09:33:57.827 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:33:57 localhost nova_compute[229873]: 2026-02-23 09:33:57.854 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:33:57 localhost nova_compute[229873]: 2026-02-23 09:33:57.854 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task 
ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:33:58 localhost nova_compute[229873]: 2026-02-23 09:33:58.181 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:33:58 localhost nova_compute[229873]: 2026-02-23 09:33:58.181 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:33:58 localhost nova_compute[229873]: 2026-02-23 09:33:58.181 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:33:58 localhost nova_compute[229873]: 2026-02-23 09:33:58.211 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:33:58 localhost nova_compute[229873]: 2026-02-23 09:33:58.211 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:33:58 localhost nova_compute[229873]: 2026-02-23 09:33:58.212 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:33:58 localhost nova_compute[229873]: 2026-02-23 09:33:58.213 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:33:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:33:58 localhost podman[255100]: 2026-02-23 09:33:58.645677687 +0000 UTC m=+0.083140514 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., release=1770267347, config_id=openstack_network_exporter, version=9.7) Feb 23 04:33:58 localhost podman[255100]: 2026-02-23 09:33:58.687897947 +0000 UTC m=+0.125360744 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.openshift.expose-services=, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
org.opencontainers.image.created=2026-02-05T04:57:10Z) Feb 23 04:33:58 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:33:58 localhost python3[255099]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_id=neutron_sriov_agent config_overrides={} config_patterns=*.json containers=['neutron_sriov_agent'] log_base_path=/var/log/containers/stdouts debug=False Feb 23 04:33:59 localhost podman[255156]: Feb 23 04:33:59 localhost podman[255156]: 2026-02-23 09:33:59.122189734 +0000 UTC m=+0.093641636 container create dacdcaf0cdb925b46116735fa643ae4ef1948913c1eaa09e6dd2c4f091495275 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, config_id=neutron_sriov_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-930b0f453b38712c4bc31a261a083d90ada419fcf4f1dd74a71e132a91137ebe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, 
org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0) Feb 23 04:33:59 localhost podman[255156]: 2026-02-23 09:33:59.074210237 +0000 UTC m=+0.045662149 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Feb 23 04:33:59 localhost python3[255099]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_sriov_agent --conmon-pidfile /run/neutron_sriov_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-930b0f453b38712c4bc31a261a083d90ada419fcf4f1dd74a71e132a91137ebe --label config_id=neutron_sriov_agent --label container_name=neutron_sriov_agent --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-930b0f453b38712c4bc31a261a083d90ada419fcf4f1dd74a71e132a91137ebe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user neutron --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro --volume 
/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Feb 23 04:34:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:34:01 localhost openstack_network_exporter[243519]: ERROR 09:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:34:01 localhost openstack_network_exporter[243519]: Feb 23 04:34:01 localhost openstack_network_exporter[243519]: ERROR 09:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:34:01 localhost openstack_network_exporter[243519]: Feb 23 04:34:02 localhost podman[255213]: 2026-02-23 09:34:02.00394468 +0000 UTC m=+0.081076033 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true) Feb 23 04:34:02 localhost podman[255213]: 2026-02-23 09:34:02.042820463 +0000 UTC m=+0.119951896 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller) Feb 23 04:34:02 localhost systemd[1]: 
bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:34:07 localhost python3.9[255328]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:34:10 localhost python3.9[255440]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:34:11 localhost python3.9[255495]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:34:11 localhost python3.9[255604]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771839251.3209713-943-105194542938843/source dest=/etc/systemd/system/edpm_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:34:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22292 DF PROTO=TCP SPT=43542 DPT=9102 SEQ=4253123598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01288E960000000001030307) Feb 23 04:34:12 localhost python3.9[255659]: ansible-systemd Invoked with daemon_reload=True 
daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 23 04:34:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:34:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:34:12 localhost systemd[1]: Reloading. Feb 23 04:34:12 localhost podman[241086]: time="2026-02-23T09:34:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:34:12 localhost systemd-sysv-generator[255710]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:34:12 localhost systemd-rc-local-generator[255705]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 23 04:34:12 localhost podman[255661]: 2026-02-23 09:34:12.776623611 +0000 UTC m=+0.145207596 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent) Feb 23 04:34:12 localhost 
podman[255661]: 2026-02-23 09:34:12.783658534 +0000 UTC m=+0.152242519 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 23 04:34:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: 
Failed to parse service type, ignoring: notify-reload Feb 23 04:34:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:34:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:34:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:34:12 localhost podman[255662]: 2026-02-23 09:34:12.744449682 +0000 UTC m=+0.109167885 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 23 04:34:12 localhost podman[241086]: @ - - [23/Feb/2026:09:34:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144998 "" "Go-http-client/1.1" Feb 23 04:34:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:34:12 localhost podman[241086]: @ - - [23/Feb/2026:09:34:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15749 "" "Go-http-client/1.1" Feb 23 04:34:12 localhost podman[255662]: 2026-02-23 09:34:12.827694647 +0000 UTC m=+0.192412850 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 
'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute) Feb 23 04:34:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:34:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:34:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:34:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:34:12 localhost systemd[1]: tmp-crun.hLoa6l.mount: Deactivated successfully. Feb 23 04:34:12 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. 
Feb 23 04:34:12 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:34:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22293 DF PROTO=TCP SPT=43542 DPT=9102 SEQ=4253123598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012892830000000001030307) Feb 23 04:34:13 localhost python3.9[255786]: ansible-systemd Invoked with state=restarted name=edpm_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:34:13 localhost systemd[1]: Reloading. Feb 23 04:34:13 localhost systemd-sysv-generator[255819]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:34:13 localhost systemd-rc-local-generator[255816]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:34:13 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:34:13 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:34:13 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:34:13 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:34:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:34:13 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:34:13 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:34:13 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:34:13 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:34:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8921 DF PROTO=TCP SPT=43010 DPT=9102 SEQ=2316128952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012895830000000001030307) Feb 23 04:34:14 localhost systemd[1]: Starting neutron_sriov_agent container... Feb 23 04:34:14 localhost systemd[1]: Started libcrun container. 
Feb 23 04:34:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9132601c6feb5560131ce509af866b823c03630495aec098688a4689b0a2be4/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Feb 23 04:34:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9132601c6feb5560131ce509af866b823c03630495aec098688a4689b0a2be4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:34:14 localhost podman[255827]: 2026-02-23 09:34:14.177841174 +0000 UTC m=+0.134639662 container init dacdcaf0cdb925b46116735fa643ae4ef1948913c1eaa09e6dd2c4f091495275 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=neutron_sriov_agent, io.buildah.version=1.43.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-930b0f453b38712c4bc31a261a083d90ada419fcf4f1dd74a71e132a91137ebe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}) Feb 23 04:34:14 localhost 
podman[255827]: 2026-02-23 09:34:14.189409198 +0000 UTC m=+0.146207676 container start dacdcaf0cdb925b46116735fa643ae4ef1948913c1eaa09e6dd2c4f091495275 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=neutron_sriov_agent, io.buildah.version=1.43.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-930b0f453b38712c4bc31a261a083d90ada419fcf4f1dd74a71e132a91137ebe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:34:14 localhost podman[255827]: neutron_sriov_agent Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: + sudo -E kolla_set_configs Feb 23 04:34:14 localhost systemd[1]: Started neutron_sriov_agent container. 
Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Validating config file Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Copying service configuration files Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Writing out command to execute Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Setting permission for /var/lib/neutron Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Setting permission for /var/lib/neutron/external Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Setting permission for 
/var/lib/neutron/.cache/python-entrypoints Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29 Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: ++ cat /run_command Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: + CMD=/usr/bin/neutron-sriov-nic-agent Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: + ARGS= Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: + sudo kolla_copy_cacerts Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: + [[ ! -n '' ]] Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: + . kolla_extend_start Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: Running command: '/usr/bin/neutron-sriov-nic-agent' Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: + umask 0022 Feb 23 04:34:14 localhost neutron_sriov_agent[255841]: + exec /usr/bin/neutron-sriov-nic-agent Feb 23 04:34:15 localhost python3.9[255963]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 23 04:34:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22294 DF PROTO=TCP SPT=43542 DPT=9102 SEQ=4253123598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01289A830000000001030307) Feb 23 04:34:15 localhost neutron_sriov_agent[255841]: 2026-02-23 09:34:15.854 2 INFO neutron.common.config [-] Logging enabled!#033[00m Feb 23 04:34:15 localhost neutron_sriov_agent[255841]: 2026-02-23 09:34:15.855 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 
22.2.2.dev44#033[00m Feb 23 04:34:15 localhost neutron_sriov_agent[255841]: 2026-02-23 09:34:15.855 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}#033[00m Feb 23 04:34:15 localhost neutron_sriov_agent[255841]: 2026-02-23 09:34:15.855 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}#033[00m Feb 23 04:34:15 localhost neutron_sriov_agent[255841]: 2026-02-23 09:34:15.855 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}#033[00m Feb 23 04:34:15 localhost neutron_sriov_agent[255841]: 2026-02-23 09:34:15.855 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}#033[00m Feb 23 04:34:15 localhost neutron_sriov_agent[255841]: 2026-02-23 09:34:15.855 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005626465.localdomain'}#033[00m Feb 23 04:34:15 localhost neutron_sriov_agent[255841]: 2026-02-23 09:34:15.856 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-dbb70655-1ddf-4cdf-a4d6-54190e82ded4 - - - - - -] RPC agent_id: nic-switch-agent.np0005626465.localdomain#033[00m Feb 23 04:34:15 localhost neutron_sriov_agent[255841]: 2026-02-23 09:34:15.861 2 INFO neutron.agent.agent_extensions_manager [None req-dbb70655-1ddf-4cdf-a4d6-54190e82ded4 - - - - - -] Loaded agent extensions: ['qos']#033[00m Feb 23 04:34:15 localhost neutron_sriov_agent[255841]: 2026-02-23 09:34:15.861 2 INFO neutron.agent.agent_extensions_manager [None req-dbb70655-1ddf-4cdf-a4d6-54190e82ded4 - - - - - -] Initializing agent extension 'qos'#033[00m Feb 23 04:34:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23848 DF PROTO=TCP SPT=56888 DPT=9102 SEQ=2509792483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01289D830000000001030307) Feb 23 04:34:16 localhost neutron_sriov_agent[255841]: 2026-02-23 09:34:16.148 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-dbb70655-1ddf-4cdf-a4d6-54190e82ded4 - - - - - -] Agent initialized successfully, now running... #033[00m Feb 23 04:34:16 localhost neutron_sriov_agent[255841]: 2026-02-23 09:34:16.149 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-dbb70655-1ddf-4cdf-a4d6-54190e82ded4 - - - - - -] SRIOV NIC Agent RPC Daemon Started!#033[00m Feb 23 04:34:16 localhost neutron_sriov_agent[255841]: 2026-02-23 09:34:16.149 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-dbb70655-1ddf-4cdf-a4d6-54190e82ded4 - - - - - -] Agent out of sync with plugin!#033[00m Feb 23 04:34:16 localhost python3.9[256074]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:34:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:34:18 localhost systemd[1]: tmp-crun.wu5vMm.mount: Deactivated successfully. 
Feb 23 04:34:18 localhost podman[256162]: 2026-02-23 09:34:18.048224672 +0000 UTC m=+0.100954628 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 04:34:18 localhost podman[256162]: 2026-02-23 09:34:18.054275016 +0000 UTC m=+0.107005002 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 04:34:18 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:34:18 localhost python3.9[256174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771839255.776174-1078-82901701046581/.source.yaml _original_basename=.jd2ard7y follow=False checksum=b7da0e0729778df3fa2e9c064ac77c610bc22800 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:34:18 localhost python3.9[256298]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_sriov_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 04:34:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22295 DF PROTO=TCP SPT=43542 DPT=9102 SEQ=4253123598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0128AA430000000001030307) Feb 23 04:34:20 localhost systemd[1]: Stopping neutron_sriov_agent container... Feb 23 04:34:20 localhost systemd[1]: libpod-dacdcaf0cdb925b46116735fa643ae4ef1948913c1eaa09e6dd2c4f091495275.scope: Deactivated successfully. Feb 23 04:34:20 localhost systemd[1]: libpod-dacdcaf0cdb925b46116735fa643ae4ef1948913c1eaa09e6dd2c4f091495275.scope: Consumed 1.762s CPU time. 
Feb 23 04:34:20 localhost podman[256302]: 2026-02-23 09:34:20.155232264 +0000 UTC m=+0.095839929 container died dacdcaf0cdb925b46116735fa643ae4ef1948913c1eaa09e6dd2c4f091495275 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-930b0f453b38712c4bc31a261a083d90ada419fcf4f1dd74a71e132a91137ebe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=neutron_sriov_agent, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=neutron_sriov_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 23 04:34:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dacdcaf0cdb925b46116735fa643ae4ef1948913c1eaa09e6dd2c4f091495275-userdata-shm.mount: Deactivated successfully. Feb 23 04:34:20 localhost systemd[1]: var-lib-containers-storage-overlay-c9132601c6feb5560131ce509af866b823c03630495aec098688a4689b0a2be4-merged.mount: Deactivated successfully. 
Feb 23 04:34:20 localhost podman[256302]: 2026-02-23 09:34:20.233945198 +0000 UTC m=+0.174552843 container cleanup dacdcaf0cdb925b46116735fa643ae4ef1948913c1eaa09e6dd2c4f091495275 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-930b0f453b38712c4bc31a261a083d90ada419fcf4f1dd74a71e132a91137ebe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=neutron_sriov_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS) Feb 23 04:34:20 localhost podman[256302]: neutron_sriov_agent Feb 23 04:34:20 localhost podman[256315]: 2026-02-23 09:34:20.237530463 +0000 UTC m=+0.082116594 container cleanup dacdcaf0cdb925b46116735fa643ae4ef1948913c1eaa09e6dd2c4f091495275 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-930b0f453b38712c4bc31a261a083d90ada419fcf4f1dd74a71e132a91137ebe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, container_name=neutron_sriov_agent, tcib_managed=true, config_id=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:34:20 localhost podman[256329]: 2026-02-23 09:34:20.326120092 +0000 UTC m=+0.058665536 container cleanup dacdcaf0cdb925b46116735fa643ae4ef1948913c1eaa09e6dd2c4f091495275 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-930b0f453b38712c4bc31a261a083d90ada419fcf4f1dd74a71e132a91137ebe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', 
'/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 23 04:34:20 localhost podman[256329]: neutron_sriov_agent Feb 23 04:34:20 localhost systemd[1]: edpm_neutron_sriov_agent.service: Deactivated successfully. Feb 23 04:34:20 localhost systemd[1]: Stopped neutron_sriov_agent container. Feb 23 04:34:20 localhost systemd[1]: Starting neutron_sriov_agent container... Feb 23 04:34:20 localhost systemd[1]: Started libcrun container. Feb 23 04:34:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9132601c6feb5560131ce509af866b823c03630495aec098688a4689b0a2be4/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Feb 23 04:34:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9132601c6feb5560131ce509af866b823c03630495aec098688a4689b0a2be4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:34:20 localhost podman[256340]: 2026-02-23 09:34:20.487051972 +0000 UTC m=+0.125905159 container init dacdcaf0cdb925b46116735fa643ae4ef1948913c1eaa09e6dd2c4f091495275 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=neutron_sriov_agent, org.label-schema.schema-version=1.0, 
org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-930b0f453b38712c4bc31a261a083d90ada419fcf4f1dd74a71e132a91137ebe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}) Feb 23 04:34:20 localhost podman[256340]: 2026-02-23 09:34:20.49669947 +0000 UTC m=+0.135552647 container start dacdcaf0cdb925b46116735fa643ae4ef1948913c1eaa09e6dd2c4f091495275 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-930b0f453b38712c4bc31a261a083d90ada419fcf4f1dd74a71e132a91137ebe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_managed=true, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:34:20 localhost podman[256340]: neutron_sriov_agent Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: + sudo -E kolla_set_configs Feb 23 04:34:20 localhost systemd[1]: Started neutron_sriov_agent container. Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Validating config file Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Copying service configuration files Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Writing out command to execute Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Setting permission for /var/lib/neutron Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Setting permission for 
/var/lib/neutron/ovn-metadata-proxy Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Setting permission for /var/lib/neutron/external Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29 Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/c4d69ddbf6f4a149b7e6d31d28f2dc1fe1c08d98a601f027e6d63209aefe8011 Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: ++ cat /run_command Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: + CMD=/usr/bin/neutron-sriov-nic-agent Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: + ARGS= Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: + sudo kolla_copy_cacerts Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: + [[ ! -n '' ]] Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: + . 
kolla_extend_start Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: Running command: '/usr/bin/neutron-sriov-nic-agent' Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: + umask 0022 Feb 23 04:34:20 localhost neutron_sriov_agent[256355]: + exec /usr/bin/neutron-sriov-nic-agent Feb 23 04:34:20 localhost systemd[1]: session-56.scope: Deactivated successfully. Feb 23 04:34:20 localhost systemd-logind[759]: Session 56 logged out. Waiting for processes to exit. Feb 23 04:34:20 localhost systemd[1]: session-56.scope: Consumed 22.753s CPU time. Feb 23 04:34:20 localhost systemd-logind[759]: Removed session 56. Feb 23 04:34:22 localhost neutron_sriov_agent[256355]: 2026-02-23 09:34:22.144 2 INFO neutron.common.config [-] Logging enabled!#033[00m Feb 23 04:34:22 localhost neutron_sriov_agent[256355]: 2026-02-23 09:34:22.144 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev44#033[00m Feb 23 04:34:22 localhost neutron_sriov_agent[256355]: 2026-02-23 09:34:22.144 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}#033[00m Feb 23 04:34:22 localhost neutron_sriov_agent[256355]: 2026-02-23 09:34:22.144 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}#033[00m Feb 23 04:34:22 localhost neutron_sriov_agent[256355]: 2026-02-23 09:34:22.145 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}#033[00m Feb 23 04:34:22 localhost neutron_sriov_agent[256355]: 2026-02-23 09:34:22.145 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}#033[00m Feb 23 04:34:22 localhost neutron_sriov_agent[256355]: 2026-02-23 09:34:22.145 2 
INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005626465.localdomain'}#033[00m Feb 23 04:34:22 localhost neutron_sriov_agent[256355]: 2026-02-23 09:34:22.145 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-fb73754e-d365-4153-ab6e-8fdee7aa20d8 - - - - - -] RPC agent_id: nic-switch-agent.np0005626465.localdomain#033[00m Feb 23 04:34:22 localhost neutron_sriov_agent[256355]: 2026-02-23 09:34:22.150 2 INFO neutron.agent.agent_extensions_manager [None req-fb73754e-d365-4153-ab6e-8fdee7aa20d8 - - - - - -] Loaded agent extensions: ['qos']#033[00m Feb 23 04:34:22 localhost neutron_sriov_agent[256355]: 2026-02-23 09:34:22.150 2 INFO neutron.agent.agent_extensions_manager [None req-fb73754e-d365-4153-ab6e-8fdee7aa20d8 - - - - - -] Initializing agent extension 'qos'#033[00m Feb 23 04:34:22 localhost neutron_sriov_agent[256355]: 2026-02-23 09:34:22.272 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-fb73754e-d365-4153-ab6e-8fdee7aa20d8 - - - - - -] Agent initialized successfully, now running... #033[00m Feb 23 04:34:22 localhost neutron_sriov_agent[256355]: 2026-02-23 09:34:22.273 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-fb73754e-d365-4153-ab6e-8fdee7aa20d8 - - - - - -] SRIOV NIC Agent RPC Daemon Started!#033[00m Feb 23 04:34:22 localhost neutron_sriov_agent[256355]: 2026-02-23 09:34:22.273 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-fb73754e-d365-4153-ab6e-8fdee7aa20d8 - - - - - -] Agent out of sync with plugin!#033[00m Feb 23 04:34:26 localhost sshd[256388]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:34:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:34:27 localhost systemd[1]: tmp-crun.AMdoic.mount: Deactivated successfully. 
Feb 23 04:34:27 localhost podman[256390]: 2026-02-23 09:34:27.018730358 +0000 UTC m=+0.091728891 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:34:27 localhost systemd-logind[759]: New session 57 of user zuul. Feb 23 04:34:27 localhost systemd[1]: Started Session 57 of User zuul. 
Feb 23 04:34:27 localhost podman[256390]: 2026-02-23 09:34:27.081266385 +0000 UTC m=+0.154264958 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:34:27 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:34:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22296 DF PROTO=TCP SPT=43542 DPT=9102 SEQ=4253123598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0128CB830000000001030307) Feb 23 04:34:28 localhost python3.9[256522]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:34:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:34:29 localhost podman[256586]: 2026-02-23 09:34:29.022063525 +0000 UTC m=+0.084215924 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, name=ubi9/ubi-minimal, io.openshift.expose-services=) Feb 23 04:34:29 localhost podman[256586]: 2026-02-23 09:34:29.037890703 +0000 UTC m=+0.100043142 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, container_name=openstack_network_exporter, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, name=ubi9/ubi-minimal, release=1770267347, version=9.7, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Feb 23 04:34:29 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 04:34:29 localhost python3.9[256656]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 23 04:34:30 localhost python3.9[256719]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 23 04:34:31 localhost openstack_network_exporter[243519]: ERROR 09:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:34:31 localhost openstack_network_exporter[243519]: Feb 23 04:34:31 localhost openstack_network_exporter[243519]: ERROR 09:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:34:31 localhost openstack_network_exporter[243519]: Feb 23 04:34:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:34:33 localhost systemd[1]: tmp-crun.D0huB4.mount: Deactivated successfully. 
Feb 23 04:34:33 localhost podman[256722]: 2026-02-23 09:34:33.022637875 +0000 UTC m=+0.096379985 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible) Feb 23 04:34:33 localhost podman[256722]: 2026-02-23 09:34:33.125869307 +0000 UTC m=+0.199611437 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller) Feb 23 04:34:33 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:34:35 localhost python3.9[256856]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 23 04:34:37 localhost python3.9[256969]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/container-startup-config setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:37 localhost python3.9[257079]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:38 localhost python3.9[257189]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:38 localhost python3.9[257299]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None 
src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:39 localhost python3.9[257409]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:40 localhost python3.9[257519]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:40 localhost python3.9[257629]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:41 localhost sshd[257740]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:34:41 localhost python3.9[257739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/container-startup-config/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:34:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52660 DF PROTO=TCP SPT=60754 DPT=9102 SEQ=234241757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012903C60000000001030307) Feb 23 04:34:42 localhost python3.9[257829]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/container-startup-config/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771839281.0601885-274-40101611794580/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 checksum=472c5e922ae22c8bdcaef73d1ca73ce5597b440e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:42 localhost podman[241086]: time="2026-02-23T09:34:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:34:42 localhost podman[241086]: @ - - [23/Feb/2026:09:34:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144997 "" "Go-http-client/1.1" Feb 23 04:34:42 localhost podman[241086]: @ - - [23/Feb/2026:09:34:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15869 "" "Go-http-client/1.1" Feb 23 04:34:43 localhost python3.9[257940]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:34:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52661 DF PROTO=TCP SPT=60754 DPT=9102 SEQ=234241757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A012907C30000000001030307) Feb 23 04:34:43 localhost python3.9[258026]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-dhcp-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771839282.5902934-319-27874646184178/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:34:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:34:44 localhost python3.9[258134]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-dhcp-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:34:44 localhost podman[258136]: 2026-02-23 09:34:44.017067455 +0000 UTC m=+0.083804295 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible) Feb 23 04:34:44 localhost podman[258136]: 2026-02-23 09:34:44.032835144 +0000 UTC m=+0.099571954 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 
'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216) Feb 23 04:34:44 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. 
Feb 23 04:34:44 localhost podman[258135]: 2026-02-23 09:34:44.159927915 +0000 UTC m=+0.228419035 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:34:44 localhost 
podman[258135]: 2026-02-23 09:34:44.1649242 +0000 UTC m=+0.233415270 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:34:44 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:34:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22297 DF PROTO=TCP SPT=43542 DPT=9102 SEQ=4253123598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01290B830000000001030307) Feb 23 04:34:44 localhost python3.9[258256]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-dhcp-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771839283.5896626-319-43132205508696/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:45 localhost python3.9[258364]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-dhcp-agent/01-neutron-dhcp-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:34:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52662 DF PROTO=TCP SPT=60754 DPT=9102 SEQ=234241757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01290FC40000000001030307) Feb 23 04:34:45 localhost python3.9[258450]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-dhcp-agent/01-neutron-dhcp-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771839284.6545508-319-19623370773529/.source.conf follow=False _original_basename=neutron-dhcp-agent.conf.j2 
checksum=b71acc5dd097620057285a48375ec62ad088f4f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8922 DF PROTO=TCP SPT=43010 DPT=9102 SEQ=2316128952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012913830000000001030307) Feb 23 04:34:46 localhost python3.9[258558]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-dhcp-agent/10-neutron-dhcp.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:34:47 localhost python3.9[258644]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-dhcp-agent/10-neutron-dhcp.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771839286.3791726-493-239661075304673/.source.conf _original_basename=10-neutron-dhcp.conf follow=False checksum=d10c6f671263070bdc94fed977552f121764373c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:48 localhost python3.9[258752]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:34:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:34:48.287 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:34:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:34:48.288 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:34:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:34:48.288 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:34:48 localhost python3.9[258838]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771839287.5345147-538-153355218020344/.source follow=False _original_basename=haproxy.j2 checksum=eddfecb822bb60e7241db0fd719c7552d2d25452 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:34:49 localhost systemd[1]: tmp-crun.lFOWBm.mount: Deactivated successfully. 
Feb 23 04:34:49 localhost podman[258947]: 2026-02-23 09:34:49.016076472 +0000 UTC m=+0.090387027 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 04:34:49 localhost podman[258947]: 2026-02-23 09:34:49.027834903 +0000 UTC m=+0.102145458 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:34:49 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:34:49 localhost python3.9[258946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:34:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52663 DF PROTO=TCP SPT=60754 DPT=9102 SEQ=234241757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01291F840000000001030307) Feb 23 04:34:49 localhost python3.9[259053]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771839288.621256-538-141167436248893/.source follow=False _original_basename=dnsmasq.j2 checksum=a6b8b2fb47e7419d250eaee9e3565b13fff8f42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:50 localhost python3.9[259161]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:34:50 localhost python3.9[259216]: ansible-ansible.legacy.file Invoked with mode=0755 setype=container_file_t dest=/var/lib/neutron/kill_scripts/haproxy-kill _original_basename=kill-script.j2 recurse=False state=file 
path=/var/lib/neutron/kill_scripts/haproxy-kill force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:51 localhost python3.9[259324]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:34:52 localhost python3.9[259410]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771839290.8590348-625-204090858948150/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:53 localhost python3.9[259518]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:34:53 localhost python3.9[259721]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:53 localhost systemd[1]: tmp-crun.k8VV5o.mount: Deactivated successfully. 
Feb 23 04:34:53 localhost podman[259738]: 2026-02-23 09:34:53.812983626 +0000 UTC m=+0.099843651 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, distribution-scope=public, name=rhceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 23 04:34:53 localhost podman[259738]: 2026-02-23 09:34:53.918700827 +0000 UTC m=+0.205560822 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, 
io.openshift.expose-services=, name=rhceph, GIT_BRANCH=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7) Feb 23 04:34:54 localhost nova_compute[229873]: 2026-02-23 09:34:54.181 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:34:54 localhost python3.9[259932]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:34:54 localhost python3.9[260021]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None 
selevel=None attributes=None Feb 23 04:34:55 localhost nova_compute[229873]: 2026-02-23 09:34:55.181 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:34:55 localhost nova_compute[229873]: 2026-02-23 09:34:55.214 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:34:55 localhost nova_compute[229873]: 2026-02-23 09:34:55.215 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:34:55 localhost nova_compute[229873]: 2026-02-23 09:34:55.215 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:34:55 localhost nova_compute[229873]: 2026-02-23 09:34:55.216 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:34:55 localhost nova_compute[229873]: 2026-02-23 09:34:55.217 229877 DEBUG oslo_concurrency.processutils [None 
req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:34:55 localhost python3.9[260161]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:34:55 localhost nova_compute[229873]: 2026-02-23 09:34:55.678 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:34:55 localhost nova_compute[229873]: 2026-02-23 09:34:55.895 229877 WARNING nova.virt.libvirt.driver [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:34:55 localhost nova_compute[229873]: 2026-02-23 09:34:55.898 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=13016MB free_disk=41.83688735961914GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:34:55 localhost nova_compute[229873]: 2026-02-23 09:34:55.898 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:34:55 localhost nova_compute[229873]: 2026-02-23 09:34:55.899 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:34:55 localhost python3.9[260247]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:34:56 localhost nova_compute[229873]: 2026-02-23 09:34:56.018 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:34:56 localhost nova_compute[229873]: 2026-02-23 09:34:56.019 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:34:56 localhost nova_compute[229873]: 2026-02-23 09:34:56.032 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.086 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.087 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.087 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.087 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 
2026-02-23 09:34:56.087 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.087 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.087 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.088 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:34:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:34:56 localhost nova_compute[229873]: 2026-02-23 09:34:56.488 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CMD "ceph df --format=json 
--id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:34:56 localhost nova_compute[229873]: 2026-02-23 09:34:56.495 229877 DEBUG nova.compute.provider_tree [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:34:56 localhost nova_compute[229873]: 2026-02-23 09:34:56.529 229877 DEBUG nova.scheduler.client.report [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:34:56 localhost nova_compute[229873]: 2026-02-23 09:34:56.532 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:34:56 localhost nova_compute[229873]: 2026-02-23 09:34:56.533 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:34:56 
localhost python3.9[260379]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:34:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:34:57 localhost podman[260490]: 2026-02-23 09:34:57.408240536 +0000 UTC m=+0.089569783 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:34:57 localhost podman[260490]: 2026-02-23 09:34:57.416668481 +0000 UTC m=+0.097997738 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:34:57 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:34:57 localhost python3.9[260489]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:34:57 localhost nova_compute[229873]: 2026-02-23 09:34:57.534 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:34:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52664 DF PROTO=TCP SPT=60754 DPT=9102 SEQ=234241757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01293F830000000001030307) Feb 23 04:34:57 localhost python3.9[260569]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:34:58 localhost nova_compute[229873]: 2026-02-23 09:34:58.177 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:34:58 localhost nova_compute[229873]: 2026-02-23 09:34:58.180 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] 
Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:34:58 localhost nova_compute[229873]: 2026-02-23 09:34:58.180 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:34:58 localhost nova_compute[229873]: 2026-02-23 09:34:58.181 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:34:58 localhost nova_compute[229873]: 2026-02-23 09:34:58.200 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:34:58 localhost nova_compute[229873]: 2026-02-23 09:34:58.200 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:34:58 localhost nova_compute[229873]: 2026-02-23 09:34:58.201 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:34:59 localhost python3.9[260679]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
get_selinux_context=False Feb 23 04:34:59 localhost nova_compute[229873]: 2026-02-23 09:34:59.180 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:34:59 localhost nova_compute[229873]: 2026-02-23 09:34:59.181 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:34:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:34:59 localhost systemd[1]: tmp-crun.48XFli.mount: Deactivated successfully. Feb 23 04:34:59 localhost podman[260737]: 2026-02-23 09:34:59.439412015 +0000 UTC m=+0.071494787 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, version=9.7, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1770267347, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git) Feb 23 04:34:59 localhost podman[260737]: 2026-02-23 09:34:59.452341491 +0000 UTC m=+0.084424283 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_id=openstack_network_exporter, io.buildah.version=1.33.7, version=9.7, io.openshift.tags=minimal rhel9, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., 
name=ubi9/ubi-minimal) Feb 23 04:34:59 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:34:59 localhost python3.9[260736]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:35:00 localhost nova_compute[229873]: 2026-02-23 09:35:00.181 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:35:01 localhost python3.9[260866]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:35:01 localhost systemd[1]: Reloading. Feb 23 04:35:01 localhost systemd-sysv-generator[260897]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:35:01 localhost systemd-rc-local-generator[260894]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 23 04:35:01 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:01 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:01 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:01 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:35:01 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:01 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:01 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:01 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:01 localhost openstack_network_exporter[243519]: ERROR 09:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:35:01 localhost openstack_network_exporter[243519]: Feb 23 04:35:01 localhost openstack_network_exporter[243519]: ERROR 09:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:35:01 localhost openstack_network_exporter[243519]: Feb 23 04:35:02 localhost python3.9[261014]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True get_selinux_context=False Feb 23 04:35:02 localhost python3.9[261071]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:35:03 localhost python3.9[261181]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:35:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:35:03 localhost podman[261239]: 2026-02-23 09:35:03.662495212 +0000 UTC m=+0.091634113 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 23 04:35:03 localhost podman[261239]: 2026-02-23 09:35:03.741774824 +0000 UTC m=+0.170913715 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2) Feb 23 04:35:03 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:35:03 localhost python3.9[261238]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:35:04 localhost python3.9[261372]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:35:04 localhost systemd[1]: Reloading. Feb 23 04:35:04 localhost systemd-rc-local-generator[261394]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:35:04 localhost systemd-sysv-generator[261399]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:35:04 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:04 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:04 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:04 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:35:04 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:04 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:04 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:04 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:04 localhost systemd[1]: Starting Create netns directory... Feb 23 04:35:04 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 23 04:35:04 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 23 04:35:04 localhost systemd[1]: Finished Create netns directory. Feb 23 04:35:06 localhost python3.9[261524]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:35:07 localhost python3.9[261634]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:35:08 localhost python3.9[261744]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_dhcp_agent.json follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:35:08 localhost python3.9[261832]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_dhcp_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771839307.8549867-1093-48994350826873/.source.json _original_basename=.42ha53w3 follow=False checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:35:09 localhost python3.9[261940]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_dhcp state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:35:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63052 DF PROTO=TCP SPT=56062 DPT=9102 SEQ=1369598627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012978F50000000001030307) Feb 23 04:35:12 localhost podman[241086]: time="2026-02-23T09:35:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:35:12 localhost podman[241086]: @ - - [23/Feb/2026:09:35:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144997 "" "Go-http-client/1.1" Feb 23 04:35:12 localhost podman[241086]: @ - - [23/Feb/2026:09:35:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false 
HTTP/1.1" 200 15879 "" "Go-http-client/1.1" Feb 23 04:35:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63053 DF PROTO=TCP SPT=56062 DPT=9102 SEQ=1369598627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01297D030000000001030307) Feb 23 04:35:13 localhost python3.9[262244]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_pattern=*.json debug=False Feb 23 04:35:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52665 DF PROTO=TCP SPT=60754 DPT=9102 SEQ=234241757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01297F830000000001030307) Feb 23 04:35:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:35:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 04:35:15 localhost podman[262334]: 2026-02-23 09:35:15.011896725 +0000 UTC m=+0.077957135 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, 
org.label-schema.schema-version=1.0) Feb 23 04:35:15 localhost podman[262334]: 2026-02-23 09:35:15.021314139 +0000 UTC m=+0.087374559 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260216, 
maintainer=OpenStack Kubernetes Operator team) Feb 23 04:35:15 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:35:15 localhost podman[262333]: 2026-02-23 09:35:15.113121855 +0000 UTC m=+0.181933695 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0) Feb 23 04:35:15 localhost podman[262333]: 2026-02-23 09:35:15.122790907 +0000 UTC m=+0.191602737 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:35:15 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:35:15 localhost python3.9[262379]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 23 04:35:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63054 DF PROTO=TCP SPT=56062 DPT=9102 SEQ=1369598627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012985040000000001030307) Feb 23 04:35:16 localhost python3[262500]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_id=neutron_dhcp config_overrides={} config_patterns=*.json containers=['neutron_dhcp_agent'] log_base_path=/var/log/containers/stdouts debug=False Feb 23 04:35:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22298 DF PROTO=TCP SPT=43542 DPT=9102 SEQ=4253123598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012989830000000001030307) Feb 23 04:35:16 localhost podman[262536]: Feb 23 04:35:16 localhost podman[262536]: 2026-02-23 09:35:16.662909592 +0000 UTC m=+0.088566674 container create 464f642ccef9237f0566d604950b9347cecb493f3dd4773f36403d29545c8b27 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-ae1ae1a73aa32d9bc1a972921bd6d42ef381568e9d9cd48b84e6032d05c724eb'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_id=neutron_dhcp, tcib_managed=true, org.label-schema.build-date=20260216, container_name=neutron_dhcp_agent, managed_by=edpm_ansible) Feb 23 04:35:16 localhost podman[262536]: 2026-02-23 09:35:16.61841341 +0000 UTC m=+0.044070552 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:35:16 localhost python3[262500]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_dhcp_agent --cgroupns=host --conmon-pidfile /run/neutron_dhcp_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-ae1ae1a73aa32d9bc1a972921bd6d42ef381568e9d9cd48b84e6032d05c724eb --label config_id=neutron_dhcp --label 
container_name=neutron_dhcp_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-ae1ae1a73aa32d9bc1a972921bd6d42ef381568e9d9cd48b84e6032d05c724eb'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/netns:/run/netns:shared --volume /var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:35:17 localhost 
python3.9[262681]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:35:18 localhost python3.9[262793]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:35:18 localhost python3.9[262848]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:35:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 04:35:19 localhost podman[262958]: 2026-02-23 09:35:19.303915734 +0000 UTC m=+0.072222928 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 23 04:35:19 localhost podman[262958]: 2026-02-23 09:35:19.318804088 +0000 UTC m=+0.087111282 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:35:19 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:35:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63055 DF PROTO=TCP SPT=56062 DPT=9102 SEQ=1369598627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012994C40000000001030307) Feb 23 04:35:19 localhost python3.9[262957]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771839318.7768776-1327-50496638384442/source dest=/etc/systemd/system/edpm_neutron_dhcp_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:35:19 localhost sshd[263014]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:35:20 localhost python3.9[263037]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 23 04:35:20 localhost systemd[1]: Reloading. Feb 23 04:35:20 localhost systemd-rc-local-generator[263060]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:35:20 localhost systemd-sysv-generator[263065]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 04:35:20 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:20 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:20 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:20 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:35:20 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:20 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:20 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:20 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:21 localhost python3.9[263128]: ansible-systemd Invoked with state=restarted name=edpm_neutron_dhcp_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:35:21 localhost systemd[1]: Reloading. Feb 23 04:35:21 localhost systemd-sysv-generator[263157]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 04:35:21 localhost systemd-rc-local-generator[263153]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:35:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:35:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:35:21 localhost systemd[1]: Starting neutron_dhcp_agent container... Feb 23 04:35:21 localhost systemd[1]: Started libcrun container. 
Feb 23 04:35:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdffbdf5ee9cfc54f47f739501e264b9654384a36d106436e4b946036981d5cb/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Feb 23 04:35:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdffbdf5ee9cfc54f47f739501e264b9654384a36d106436e4b946036981d5cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:35:21 localhost podman[263169]: 2026-02-23 09:35:21.612229964 +0000 UTC m=+0.128369160 container init 464f642ccef9237f0566d604950b9347cecb493f3dd4773f36403d29545c8b27 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=neutron_dhcp_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, config_id=neutron_dhcp, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-ae1ae1a73aa32d9bc1a972921bd6d42ef381568e9d9cd48b84e6032d05c724eb'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible) Feb 23 04:35:21 localhost podman[263169]: 2026-02-23 09:35:21.622270566 +0000 UTC m=+0.138409762 container start 464f642ccef9237f0566d604950b9347cecb493f3dd4773f36403d29545c8b27 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-ae1ae1a73aa32d9bc1a972921bd6d42ef381568e9d9cd48b84e6032d05c724eb'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, config_id=neutron_dhcp, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, 
io.buildah.version=1.43.0) Feb 23 04:35:21 localhost podman[263169]: neutron_dhcp_agent Feb 23 04:35:21 localhost systemd[1]: Started neutron_dhcp_agent container. Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: + sudo -E kolla_set_configs Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Validating config file Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Copying service configuration files Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Writing out command to execute Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /var/lib/neutron Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /var/lib/neutron/external Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper 
Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29 Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/c4d69ddbf6f4a149b7e6d31d28f2dc1fe1c08d98a601f027e6d63209aefe8011 Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: ++ cat /run_command Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: + CMD=/usr/bin/neutron-dhcp-agent Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: + ARGS= Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: + sudo kolla_copy_cacerts Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: + [[ ! -n '' ]] Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: + . 
kolla_extend_start Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: Running command: '/usr/bin/neutron-dhcp-agent' Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\''' Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: + umask 0022 Feb 23 04:35:21 localhost neutron_dhcp_agent[263183]: + exec /usr/bin/neutron-dhcp-agent Feb 23 04:35:22 localhost python3.9[263305]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 23 04:35:23 localhost neutron_dhcp_agent[263183]: 2026-02-23 09:35:23.018 263187 INFO neutron.common.config [-] Logging enabled!#033[00m Feb 23 04:35:23 localhost neutron_dhcp_agent[263183]: 2026-02-23 09:35:23.018 263187 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev44#033[00m Feb 23 04:35:23 localhost neutron_dhcp_agent[263183]: 2026-02-23 09:35:23.449 263187 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Feb 23 04:35:23 localhost neutron_dhcp_agent[263183]: 2026-02-23 09:35:23.542 263187 INFO neutron.agent.dhcp.agent [None req-278f7e1f-8c10-48c1-8b96-f5e137e67e32 - - - - - -] All active networks have been fetched through RPC.#033[00m Feb 23 04:35:23 localhost neutron_dhcp_agent[263183]: 2026-02-23 09:35:23.542 263187 INFO neutron.agent.dhcp.agent [None req-278f7e1f-8c10-48c1-8b96-f5e137e67e32 - - - - - -] Synchronizing state complete#033[00m Feb 23 04:35:23 localhost python3.9[263415]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:35:23 localhost neutron_dhcp_agent[263183]: 2026-02-23 09:35:23.604 263187 INFO neutron.agent.dhcp.agent [None req-278f7e1f-8c10-48c1-8b96-f5e137e67e32 - - - - - -] DHCP agent started#033[00m Feb 23 04:35:24 localhost python3.9[263506]: ansible-ansible.legacy.copy Invoked with 
dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771839323.151188-1462-140165794183729/.source.yaml _original_basename=.9xc0qs1a follow=False checksum=032f1f7e8199faa0c01f5405a803b0de94087c3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:35:24 localhost ovn_metadata_agent[161837]: 2026-02-23 09:35:24.289 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:35:24 localhost ovn_metadata_agent[161837]: 2026-02-23 09:35:24.290 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 04:35:24 localhost ovn_metadata_agent[161837]: 2026-02-23 09:35:24.292 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:35:24 localhost python3.9[263616]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_dhcp_agent.service 
state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 04:35:25 localhost systemd[1]: Stopping neutron_dhcp_agent container... Feb 23 04:35:25 localhost neutron_dhcp_agent[263183]: 2026-02-23 09:35:25.674 263187 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored#033[00m Feb 23 04:35:26 localhost systemd[1]: libpod-464f642ccef9237f0566d604950b9347cecb493f3dd4773f36403d29545c8b27.scope: Deactivated successfully. Feb 23 04:35:26 localhost systemd[1]: libpod-464f642ccef9237f0566d604950b9347cecb493f3dd4773f36403d29545c8b27.scope: Consumed 2.273s CPU time. Feb 23 04:35:26 localhost podman[263620]: 2026-02-23 09:35:26.087039584 +0000 UTC m=+1.030318479 container died 464f642ccef9237f0566d604950b9347cecb493f3dd4773f36403d29545c8b27 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-ae1ae1a73aa32d9bc1a972921bd6d42ef381568e9d9cd48b84e6032d05c724eb'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0) Feb 23 04:35:26 localhost systemd[1]: tmp-crun.rprvWs.mount: Deactivated successfully. Feb 23 04:35:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-464f642ccef9237f0566d604950b9347cecb493f3dd4773f36403d29545c8b27-userdata-shm.mount: Deactivated successfully. Feb 23 04:35:26 localhost systemd[1]: var-lib-containers-storage-overlay-bdffbdf5ee9cfc54f47f739501e264b9654384a36d106436e4b946036981d5cb-merged.mount: Deactivated successfully. Feb 23 04:35:26 localhost podman[263620]: 2026-02-23 09:35:26.140305191 +0000 UTC m=+1.083584066 container cleanup 464f642ccef9237f0566d604950b9347cecb493f3dd4773f36403d29545c8b27 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=neutron_dhcp, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-ae1ae1a73aa32d9bc1a972921bd6d42ef381568e9d9cd48b84e6032d05c724eb'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent) Feb 23 04:35:26 localhost podman[263620]: neutron_dhcp_agent Feb 23 04:35:26 localhost podman[263659]: error opening file `/run/crun/464f642ccef9237f0566d604950b9347cecb493f3dd4773f36403d29545c8b27/status`: No such file or directory Feb 23 04:35:26 localhost podman[263648]: 2026-02-23 09:35:26.238331518 +0000 UTC m=+0.068636545 container cleanup 464f642ccef9237f0566d604950b9347cecb493f3dd4773f36403d29545c8b27 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260216, container_name=neutron_dhcp_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=neutron_dhcp, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-ae1ae1a73aa32d9bc1a972921bd6d42ef381568e9d9cd48b84e6032d05c724eb'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}) Feb 23 04:35:26 localhost podman[263648]: neutron_dhcp_agent Feb 23 04:35:26 localhost systemd[1]: tmp-crun.n75MAN.mount: Deactivated successfully. Feb 23 04:35:26 localhost systemd[1]: edpm_neutron_dhcp_agent.service: Deactivated successfully. Feb 23 04:35:26 localhost systemd[1]: Stopped neutron_dhcp_agent container. Feb 23 04:35:26 localhost systemd[1]: Starting neutron_dhcp_agent container... Feb 23 04:35:26 localhost systemd[1]: Started libcrun container. 
Feb 23 04:35:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdffbdf5ee9cfc54f47f739501e264b9654384a36d106436e4b946036981d5cb/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Feb 23 04:35:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdffbdf5ee9cfc54f47f739501e264b9654384a36d106436e4b946036981d5cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:35:26 localhost podman[263661]: 2026-02-23 09:35:26.381508177 +0000 UTC m=+0.111004446 container init 464f642ccef9237f0566d604950b9347cecb493f3dd4773f36403d29545c8b27 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=neutron_dhcp_agent, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-ae1ae1a73aa32d9bc1a972921bd6d42ef381568e9d9cd48b84e6032d05c724eb'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216) Feb 23 04:35:26 localhost podman[263661]: 2026-02-23 09:35:26.390956191 +0000 UTC m=+0.120452460 container start 464f642ccef9237f0566d604950b9347cecb493f3dd4773f36403d29545c8b27 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.schema-version=1.0, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-ae1ae1a73aa32d9bc1a972921bd6d42ef381568e9d9cd48b84e6032d05c724eb'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, 
config_id=neutron_dhcp, org.label-schema.build-date=20260216) Feb 23 04:35:26 localhost podman[263661]: neutron_dhcp_agent Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: + sudo -E kolla_set_configs Feb 23 04:35:26 localhost systemd[1]: Started neutron_dhcp_agent container. Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Validating config file Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Copying service configuration files Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Writing out command to execute Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /var/lib/neutron Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /var/lib/neutron/external Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for 
/var/lib/neutron/dhcp Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29 Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/c4d69ddbf6f4a149b7e6d31d28f2dc1fe1c08d98a601f027e6d63209aefe8011 Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: ++ cat /run_command Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: + CMD=/usr/bin/neutron-dhcp-agent Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: + ARGS= Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: + sudo kolla_copy_cacerts Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: + [[ ! -n '' ]] Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: + . 
kolla_extend_start Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: Running command: '/usr/bin/neutron-dhcp-agent' Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\''' Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: + umask 0022 Feb 23 04:35:26 localhost neutron_dhcp_agent[263675]: + exec /usr/bin/neutron-dhcp-agent Feb 23 04:35:27 localhost systemd[1]: session-57.scope: Deactivated successfully. Feb 23 04:35:27 localhost systemd[1]: session-57.scope: Consumed 34.856s CPU time. Feb 23 04:35:27 localhost systemd-logind[759]: Session 57 logged out. Waiting for processes to exit. Feb 23 04:35:27 localhost systemd-logind[759]: Removed session 57. Feb 23 04:35:27 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:35:27.724 263679 INFO neutron.common.config [-] Logging enabled!#033[00m Feb 23 04:35:27 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:35:27.724 263679 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev44#033[00m Feb 23 04:35:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63056 DF PROTO=TCP SPT=56062 DPT=9102 SEQ=1369598627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0129B5830000000001030307) Feb 23 04:35:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:35:27 localhost systemd[1]: tmp-crun.OWNi2K.mount: Deactivated successfully. 
Feb 23 04:35:27 localhost podman[263707]: 2026-02-23 09:35:27.995063356 +0000 UTC m=+0.073822686 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:35:28 localhost sshd[263723]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:35:28 localhost podman[263707]: 2026-02-23 09:35:28.036918111 +0000 UTC m=+0.115677471 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:35:28 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:35:28 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:35:28.121 263679 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Feb 23 04:35:28 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:35:28.443 263679 INFO neutron.agent.dhcp.agent [None req-7ab8805f-56db-4adb-a717-d8cdf1e51c36 - - - - - -] All active networks have been fetched through RPC.#033[00m Feb 23 04:35:28 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:35:28.444 263679 INFO neutron.agent.dhcp.agent [None req-7ab8805f-56db-4adb-a717-d8cdf1e51c36 - - - - - -] Synchronizing state complete#033[00m Feb 23 04:35:28 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:35:28.470 263679 INFO neutron.agent.dhcp.agent [None req-7ab8805f-56db-4adb-a717-d8cdf1e51c36 - - - - - -] DHCP agent started#033[00m Feb 23 04:35:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:35:30 localhost podman[263732]: 2026-02-23 09:35:30.004907913 +0000 UTC m=+0.083112844 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=ubi9/ubi-minimal, release=1770267347, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7) 
Feb 23 04:35:30 localhost podman[263732]: 2026-02-23 09:35:30.020913619 +0000 UTC m=+0.099118560 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, architecture=x86_64, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 23 04:35:30 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:35:30 localhost sshd[263751]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:35:31 localhost openstack_network_exporter[243519]: ERROR 09:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:35:31 localhost openstack_network_exporter[243519]: Feb 23 04:35:31 localhost openstack_network_exporter[243519]: ERROR 09:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:35:31 localhost openstack_network_exporter[243519]: Feb 23 04:35:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:35:34 localhost podman[263753]: 2026-02-23 09:35:34.001397269 +0000 UTC m=+0.079875042 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller) Feb 23 04:35:34 localhost podman[263753]: 2026-02-23 09:35:34.066943583 +0000 UTC m=+0.145421336 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:35:34 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:35:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30551 DF PROTO=TCP SPT=43318 DPT=9102 SEQ=1663739101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0129EE260000000001030307) Feb 23 04:35:42 localhost podman[241086]: time="2026-02-23T09:35:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:35:42 localhost podman[241086]: @ - - [23/Feb/2026:09:35:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147347 "" "Go-http-client/1.1" Feb 23 04:35:42 localhost podman[241086]: @ - - [23/Feb/2026:09:35:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16315 "" "Go-http-client/1.1" Feb 23 04:35:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30552 DF PROTO=TCP SPT=43318 DPT=9102 SEQ=1663739101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0129F2440000000001030307) Feb 23 04:35:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63057 DF PROTO=TCP SPT=56062 DPT=9102 SEQ=1369598627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0129F5830000000001030307) Feb 23 04:35:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30553 DF PROTO=TCP SPT=43318 DPT=9102 SEQ=1663739101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0129FA430000000001030307) Feb 23 04:35:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:35:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:35:46 localhost podman[263778]: 2026-02-23 09:35:46.006184039 +0000 UTC m=+0.079891901 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:35:46 localhost podman[263778]: 2026-02-23 09:35:46.012227175 +0000 UTC m=+0.085935087 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS 
Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS) Feb 23 04:35:46 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:35:46 localhost podman[263779]: 2026-02-23 09:35:46.059061135 +0000 UTC m=+0.130041188 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 23 04:35:46 localhost podman[263779]: 2026-02-23 09:35:46.069238901 +0000 UTC m=+0.140218944 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_managed=true) Feb 23 04:35:46 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:35:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52666 DF PROTO=TCP SPT=60754 DPT=9102 SEQ=234241757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A0129FD830000000001030307) Feb 23 04:35:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:35:48.289 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:35:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:35:48.290 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:35:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:35:48.290 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:35:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30554 DF PROTO=TCP 
SPT=43318 DPT=9102 SEQ=1663739101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012A0A030000000001030307) Feb 23 04:35:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:35:50 localhost podman[263816]: 2026-02-23 09:35:50.002111499 +0000 UTC m=+0.076755091 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 23 04:35:50 localhost podman[263816]: 2026-02-23 09:35:50.011177352 +0000 UTC m=+0.085820994 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck 
podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 04:35:50 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:35:54 localhost nova_compute[229873]: 2026-02-23 09:35:54.181 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:35:57 localhost nova_compute[229873]: 2026-02-23 09:35:57.180 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:35:57 localhost nova_compute[229873]: 2026-02-23 09:35:57.181 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:35:57 localhost nova_compute[229873]: 2026-02-23 09:35:57.200 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:35:57 localhost nova_compute[229873]: 2026-02-23 09:35:57.201 229877 DEBUG 
oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:35:57 localhost nova_compute[229873]: 2026-02-23 09:35:57.201 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:35:57 localhost nova_compute[229873]: 2026-02-23 09:35:57.202 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:35:57 localhost nova_compute[229873]: 2026-02-23 09:35:57.202 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:35:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30555 DF PROTO=TCP SPT=43318 DPT=9102 SEQ=1663739101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012A29840000000001030307) Feb 23 04:35:57 localhost nova_compute[229873]: 2026-02-23 09:35:57.661 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" 
returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:35:57 localhost nova_compute[229873]: 2026-02-23 09:35:57.837 229877 WARNING nova.virt.libvirt.driver [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:35:57 localhost nova_compute[229873]: 2026-02-23 09:35:57.839 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=12905MB free_disk=41.83688735961914GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", 
"vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:35:57 localhost nova_compute[229873]: 2026-02-23 09:35:57.840 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:35:57 localhost nova_compute[229873]: 2026-02-23 09:35:57.840 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:35:57 localhost nova_compute[229873]: 2026-02-23 09:35:57.920 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:35:57 localhost nova_compute[229873]: 2026-02-23 09:35:57.920 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Final resource view: 
name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:35:57 localhost nova_compute[229873]: 2026-02-23 09:35:57.935 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:35:58 localhost nova_compute[229873]: 2026-02-23 09:35:58.371 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:35:58 localhost nova_compute[229873]: 2026-02-23 09:35:58.377 229877 DEBUG nova.compute.provider_tree [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:35:58 localhost nova_compute[229873]: 2026-02-23 09:35:58.396 229877 DEBUG nova.scheduler.client.report [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:35:58 localhost nova_compute[229873]: 2026-02-23 09:35:58.398 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:35:58 localhost nova_compute[229873]: 2026-02-23 09:35:58.399 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:35:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:35:58 localhost podman[263969]: 2026-02-23 09:35:58.977031112 +0000 UTC m=+0.055234874 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:35:58 localhost podman[263969]: 2026-02-23 09:35:58.983576053 +0000 UTC m=+0.061779835 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:35:58 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:36:00 localhost nova_compute[229873]: 2026-02-23 09:36:00.396 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:36:00 localhost nova_compute[229873]: 2026-02-23 09:36:00.397 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:36:00 localhost nova_compute[229873]: 2026-02-23 09:36:00.493 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:36:00 localhost nova_compute[229873]: 2026-02-23 09:36:00.493 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:36:00 localhost nova_compute[229873]: 2026-02-23 09:36:00.494 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:36:00 localhost nova_compute[229873]: 2026-02-23 09:36:00.510 229877 DEBUG nova.compute.manager [None 
req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:36:00 localhost nova_compute[229873]: 2026-02-23 09:36:00.510 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:36:00 localhost nova_compute[229873]: 2026-02-23 09:36:00.511 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:36:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:36:01 localhost systemd[1]: tmp-crun.89DqHv.mount: Deactivated successfully. Feb 23 04:36:01 localhost podman[263992]: 2026-02-23 09:36:01.006919593 +0000 UTC m=+0.081357664 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, name=ubi9/ubi-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Feb 23 04:36:01 localhost podman[263992]: 2026-02-23 09:36:01.023878886 +0000 UTC m=+0.098316937 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.tags=minimal rhel9, release=1770267347) Feb 23 04:36:01 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:36:01 localhost nova_compute[229873]: 2026-02-23 09:36:01.181 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:36:01 localhost nova_compute[229873]: 2026-02-23 09:36:01.182 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:36:01 localhost openstack_network_exporter[243519]: ERROR 09:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:36:01 localhost openstack_network_exporter[243519]: Feb 23 04:36:01 localhost openstack_network_exporter[243519]: ERROR 09:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:36:01 localhost openstack_network_exporter[243519]: Feb 23 04:36:02 localhost nova_compute[229873]: 2026-02-23 09:36:02.182 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:36:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:36:05 localhost podman[264012]: 2026-02-23 09:36:05.009384492 +0000 UTC m=+0.088268575 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 23 04:36:05 localhost podman[264012]: 2026-02-23 09:36:05.074877334 +0000 UTC m=+0.153761417 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, 
tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 23 04:36:05 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:36:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45170 DF PROTO=TCP SPT=37252 DPT=9102 SEQ=3122475201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012A63560000000001030307) Feb 23 04:36:12 localhost podman[241086]: time="2026-02-23T09:36:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:36:12 localhost podman[241086]: @ - - [23/Feb/2026:09:36:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147347 "" "Go-http-client/1.1" Feb 23 04:36:12 localhost podman[241086]: @ - - [23/Feb/2026:09:36:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16311 "" "Go-http-client/1.1" Feb 23 04:36:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45171 DF PROTO=TCP SPT=37252 DPT=9102 SEQ=3122475201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012A67430000000001030307) Feb 23 04:36:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30556 DF PROTO=TCP SPT=43318 DPT=9102 SEQ=1663739101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012A69830000000001030307) Feb 23 04:36:15 localhost sshd[264037]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:36:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45172 DF PROTO=TCP SPT=37252 DPT=9102 SEQ=3122475201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012A6F430000000001030307) 
Feb 23 04:36:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63058 DF PROTO=TCP SPT=56062 DPT=9102 SEQ=1369598627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012A73830000000001030307) Feb 23 04:36:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:36:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:36:17 localhost podman[264040]: 2026-02-23 09:36:17.011419533 +0000 UTC m=+0.083437115 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 23 04:36:17 localhost podman[264040]: 2026-02-23 09:36:17.020909788 +0000 UTC m=+0.092927350 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0) Feb 23 04:36:17 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:36:17 localhost systemd[1]: tmp-crun.DRr2nv.mount: Deactivated successfully. Feb 23 04:36:17 localhost podman[264039]: 2026-02-23 09:36:17.112933771 +0000 UTC m=+0.185942912 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible) Feb 23 04:36:17 localhost podman[264039]: 2026-02-23 09:36:17.146932938 +0000 UTC m=+0.219942129 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:36:17 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:36:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45173 DF PROTO=TCP SPT=37252 DPT=9102 SEQ=3122475201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012A7F040000000001030307) Feb 23 04:36:19 localhost sshd[264077]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:36:19 localhost systemd-logind[759]: New session 58 of user zuul. Feb 23 04:36:19 localhost systemd[1]: Started Session 58 of User zuul. Feb 23 04:36:19 localhost sshd[264081]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:36:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:36:20 localhost systemd[1]: tmp-crun.1xsRB0.mount: Deactivated successfully. 
Feb 23 04:36:20 localhost podman[264138]: 2026-02-23 09:36:20.350271566 +0000 UTC m=+0.087250355 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 04:36:20 localhost podman[264138]: 2026-02-23 09:36:20.366872898 +0000 UTC m=+0.103851657 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:36:20 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:36:20 localhost python3.9[264212]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:36:22 localhost python3.9[264324]: ansible-ansible.builtin.service_facts Invoked Feb 23 04:36:22 localhost network[264341]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 23 04:36:22 localhost network[264342]: 'network-scripts' will be removed from distribution in near future. Feb 23 04:36:22 localhost network[264343]: It is advised to switch to 'NetworkManager' instead for network management. Feb 23 04:36:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:36:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45174 DF PROTO=TCP SPT=37252 DPT=9102 SEQ=3122475201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012A9F830000000001030307) Feb 23 04:36:28 localhost python3.9[264575]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 23 04:36:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:36:29 localhost systemd[1]: tmp-crun.IqQiFw.mount: Deactivated successfully. 
Feb 23 04:36:29 localhost podman[264639]: 2026-02-23 09:36:29.71219962 +0000 UTC m=+0.081058465 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:36:29 localhost podman[264639]: 2026-02-23 09:36:29.722853789 +0000 UTC m=+0.091712634 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:36:29 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:36:29 localhost python3.9[264638]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 23 04:36:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:36:31 localhost openstack_network_exporter[243519]: ERROR 09:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:36:31 localhost openstack_network_exporter[243519]: Feb 23 04:36:31 localhost openstack_network_exporter[243519]: ERROR 09:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:36:31 localhost openstack_network_exporter[243519]: Feb 23 04:36:32 localhost systemd[1]: tmp-crun.X5JUne.mount: Deactivated successfully. 
Feb 23 04:36:32 localhost podman[264664]: 2026-02-23 09:36:32.025213145 +0000 UTC m=+0.100636434 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 23 04:36:32 localhost podman[264664]: 2026-02-23 09:36:32.037640057 +0000 UTC m=+0.113063306 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': 
'/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, architecture=x86_64, release=1770267347, name=ubi9/ubi-minimal, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=) Feb 23 04:36:32 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:36:33 localhost python3.9[264793]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:36:34 localhost python3.9[264903]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:36:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:36:35 localhost systemd[1]: tmp-crun.Rw2mo5.mount: Deactivated successfully. 
Feb 23 04:36:35 localhost podman[265015]: 2026-02-23 09:36:35.537999691 +0000 UTC m=+0.089744898 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller) Feb 23 04:36:35 localhost podman[265015]: 2026-02-23 09:36:35.587497499 +0000 UTC m=+0.139242696 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:36:35 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:36:35 localhost python3.9[265014]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:36:36 localhost python3.9[265152]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:36:38 localhost python3.9[265262]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:36:38 localhost python3.9[265374]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:36:40 localhost python3.9[265484]: ansible-ansible.builtin.service_facts Invoked Feb 23 04:36:40 localhost network[265501]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 23 04:36:40 localhost network[265502]: 'network-scripts' will be removed from distribution in near future. Feb 23 04:36:40 localhost network[265503]: It is advised to switch to 'NetworkManager' instead for network management. Feb 23 04:36:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:36:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32505 DF PROTO=TCP SPT=35886 DPT=9102 SEQ=57957806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012AD8860000000001030307) Feb 23 04:36:42 localhost podman[241086]: time="2026-02-23T09:36:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:36:42 localhost podman[241086]: @ - - [23/Feb/2026:09:36:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147347 "" "Go-http-client/1.1" Feb 23 04:36:42 localhost podman[241086]: @ - - [23/Feb/2026:09:36:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16311 "" "Go-http-client/1.1" Feb 23 04:36:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32506 DF PROTO=TCP SPT=35886 DPT=9102 SEQ=57957806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012ADC830000000001030307) Feb 23 04:36:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45175 DF PROTO=TCP SPT=37252 DPT=9102 SEQ=3122475201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012ADF830000000001030307) Feb 23 04:36:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32507 DF PROTO=TCP SPT=35886 DPT=9102 SEQ=57957806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012AE4840000000001030307) Feb 23 04:36:45 localhost python3.9[265735]: ansible-ansible.legacy.dnf Invoked with 
name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 23 04:36:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30557 DF PROTO=TCP SPT=43318 DPT=9102 SEQ=1663739101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012AE7830000000001030307) Feb 23 04:36:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:36:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:36:47 localhost systemd[1]: tmp-crun.BRryye.mount: Deactivated successfully. 
Feb 23 04:36:48 localhost podman[265739]: 2026-02-23 09:36:47.999214203 +0000 UTC m=+0.070275928 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS) Feb 23 04:36:48 localhost podman[265739]: 2026-02-23 09:36:48.033874205 +0000 UTC m=+0.104935960 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, 
org.label-schema.build-date=20260216) Feb 23 04:36:48 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:36:48 localhost podman[265738]: 2026-02-23 09:36:48.106078735 +0000 UTC m=+0.177309045 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, 
config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0) Feb 23 04:36:48 localhost podman[265738]: 2026-02-23 09:36:48.115680631 +0000 UTC m=+0.186910971 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20260216, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, 
io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent) Feb 23 04:36:48 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:36:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:36:48.291 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:36:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:36:48.292 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:36:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:36:48.292 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:36:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32508 DF PROTO=TCP SPT=35886 DPT=9102 SEQ=57957806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012AF4430000000001030307) Feb 23 04:36:50 localhost python3.9[265881]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None 
attributes=None Feb 23 04:36:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:36:50 localhost podman[265992]: 2026-02-23 09:36:50.794920883 +0000 UTC m=+0.083004036 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:36:50 localhost podman[265992]: 2026-02-23 09:36:50.807816398 +0000 UTC m=+0.095899581 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 
'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:36:50 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:36:50 localhost python3.9[265991]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Feb 23 04:36:52 localhost python3.9[266124]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:36:53 localhost python3.9[266181]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:36:54 localhost python3.9[266291]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:36:54 localhost python3.9[266401]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True 
stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:36:55 localhost nova_compute[229873]: 2026-02-23 09:36:55.181 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:36:55 localhost python3.9[266512]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.086 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.087 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.087 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.087 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:36:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:36:56 localhost python3.9[266623]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:36:57 localhost 
nova_compute[229873]: 2026-02-23 09:36:57.182 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:36:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32509 DF PROTO=TCP SPT=35886 DPT=9102 SEQ=57957806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012B15830000000001030307) Feb 23 04:36:58 localhost python3.9[266790]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:36:59 localhost python3.9[266933]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:36:59 localhost nova_compute[229873]: 2026-02-23 09:36:59.177 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:36:59 localhost nova_compute[229873]: 2026-02-23 09:36:59.180 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:36:59 
localhost nova_compute[229873]: 2026-02-23 09:36:59.180 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:36:59 localhost nova_compute[229873]: 2026-02-23 09:36:59.221 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:36:59 localhost nova_compute[229873]: 2026-02-23 09:36:59.221 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:36:59 localhost nova_compute[229873]: 2026-02-23 09:36:59.221 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:36:59 localhost nova_compute[229873]: 2026-02-23 09:36:59.221 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:36:59 localhost nova_compute[229873]: 2026-02-23 09:36:59.222 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] 
Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:36:59 localhost nova_compute[229873]: 2026-02-23 09:36:59.620 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:36:59 localhost nova_compute[229873]: 2026-02-23 09:36:59.822 229877 WARNING nova.virt.libvirt.driver [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:36:59 localhost nova_compute[229873]: 2026-02-23 09:36:59.824 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=12916MB free_disk=41.83688735961914GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, 
{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:36:59 localhost nova_compute[229873]: 2026-02-23 09:36:59.825 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:36:59 localhost nova_compute[229873]: 2026-02-23 09:36:59.825 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:36:59 localhost nova_compute[229873]: 2026-02-23 09:36:59.883 229877 DEBUG 
nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:36:59 localhost nova_compute[229873]: 2026-02-23 09:36:59.884 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:36:59 localhost nova_compute[229873]: 2026-02-23 09:36:59.914 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:36:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:37:00 localhost systemd[1]: tmp-crun.M4evJu.mount: Deactivated successfully. 
Feb 23 04:37:00 localhost podman[266974]: 2026-02-23 09:37:00.031499874 +0000 UTC m=+0.098822349 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:37:00 localhost podman[266974]: 2026-02-23 09:37:00.04383525 +0000 UTC m=+0.111157695 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:37:00 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:37:00 localhost nova_compute[229873]: 2026-02-23 09:37:00.363 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:37:00 localhost nova_compute[229873]: 2026-02-23 09:37:00.370 229877 DEBUG nova.compute.provider_tree [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:37:00 localhost nova_compute[229873]: 2026-02-23 09:37:00.385 229877 DEBUG nova.scheduler.client.report [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:37:00 localhost nova_compute[229873]: 2026-02-23 09:37:00.388 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:37:00 localhost nova_compute[229873]: 2026-02-23 09:37:00.389 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" 
"released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:37:00 localhost python3.9[267110]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:01 localhost python3.9[267220]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:01 localhost python3.9[267330]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:01 localhost openstack_network_exporter[243519]: ERROR 09:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:37:01 localhost openstack_network_exporter[243519]: Feb 23 04:37:01 localhost openstack_network_exporter[243519]: ERROR 09:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:37:01 localhost openstack_network_exporter[243519]: Feb 23 04:37:02 localhost systemd[1]: 
Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:37:02 localhost podman[267435]: 2026-02-23 09:37:02.240552929 +0000 UTC m=+0.079758569 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, release=1770267347, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Feb 23 04:37:02 localhost podman[267435]: 2026-02-23 09:37:02.250871079 +0000 UTC m=+0.090076729 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.7, release=1770267347, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 23 04:37:02 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 04:37:02 localhost nova_compute[229873]: 2026-02-23 09:37:02.391 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:37:02 localhost nova_compute[229873]: 2026-02-23 09:37:02.391 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:37:02 localhost nova_compute[229873]: 2026-02-23 09:37:02.392 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:37:02 localhost nova_compute[229873]: 2026-02-23 09:37:02.410 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:37:02 localhost nova_compute[229873]: 2026-02-23 09:37:02.411 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:37:02 localhost python3.9[267451]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:02 localhost sshd[267459]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:37:03 localhost nova_compute[229873]: 2026-02-23 09:37:03.181 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:37:03 localhost nova_compute[229873]: 2026-02-23 09:37:03.181 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:37:03 localhost python3.9[267570]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:37:04 localhost nova_compute[229873]: 2026-02-23 09:37:04.182 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:37:04 localhost python3.9[267682]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:37:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:37:06 localhost podman[267794]: 2026-02-23 09:37:06.019498955 +0000 UTC m=+0.081211458 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, container_name=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 04:37:06 localhost podman[267794]: 2026-02-23 09:37:06.059051628 +0000 UTC m=+0.120764101 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.43.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260216) Feb 23 04:37:06 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:37:06 localhost python3.9[267795]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:37:07 localhost python3.9[267929]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 23 04:37:07 localhost python3.9[268039]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Feb 23 04:37:08 localhost python3.9[268149]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:37:08 localhost python3.9[268206]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:09 localhost python3.9[268316]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Feb 23 04:37:10 localhost python3.9[268426]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 23 04:37:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15076 DF PROTO=TCP SPT=46464 DPT=9102 SEQ=2410757475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012B4DB60000000001030307) Feb 23 04:37:12 localhost podman[241086]: time="2026-02-23T09:37:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:37:12 localhost podman[241086]: @ - - [23/Feb/2026:09:37:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147347 "" "Go-http-client/1.1" Feb 23 04:37:12 localhost podman[241086]: @ - - [23/Feb/2026:09:37:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16316 "" "Go-http-client/1.1" Feb 23 04:37:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15077 DF PROTO=TCP SPT=46464 DPT=9102 SEQ=2410757475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012B51C40000000001030307) Feb 23 04:37:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32510 DF PROTO=TCP SPT=35886 DPT=9102 SEQ=57957806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012B55840000000001030307) Feb 23 04:37:14 localhost python3.9[268536]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 23 04:37:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15078 DF PROTO=TCP SPT=46464 DPT=9102 SEQ=2410757475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012B59C30000000001030307) Feb 23 04:37:15 localhost python3.9[268650]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45176 DF PROTO=TCP SPT=37252 DPT=9102 SEQ=3122475201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012B5D830000000001030307) Feb 23 04:37:16 localhost python3.9[268760]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 23 04:37:16 localhost systemd[1]: Reloading. Feb 23 04:37:17 localhost systemd-sysv-generator[268788]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:37:17 localhost systemd-rc-local-generator[268785]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:37:17 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:37:17 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:37:17 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:37:17 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:37:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:37:17 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:37:17 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:37:17 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:37:17 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:37:17 localhost python3.9[268904]: ansible-ansible.builtin.service_facts Invoked Feb 23 04:37:17 localhost network[268921]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 23 04:37:17 localhost network[268922]: 'network-scripts' will be removed from distribution in near future. Feb 23 04:37:17 localhost network[268923]: It is advised to switch to 'NetworkManager' instead for network management. 
Feb 23 04:37:18 localhost sshd[268929]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:37:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:37:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:37:18 localhost podman[268933]: 2026-02-23 09:37:18.741345361 +0000 UTC m=+0.085035353 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 23 04:37:18 localhost podman[268933]: 2026-02-23 09:37:18.773841273 +0000 UTC m=+0.117531265 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS) Feb 23 04:37:18 localhost podman[268934]: 2026-02-23 09:37:18.790173341 +0000 UTC m=+0.132787308 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 04:37:18 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:37:18 localhost podman[268934]: 2026-02-23 09:37:18.804688259 +0000 UTC m=+0.147302196 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute) Feb 23 04:37:18 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:37:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:37:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15079 DF PROTO=TCP SPT=46464 DPT=9102 SEQ=2410757475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012B69830000000001030307) Feb 23 04:37:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 04:37:20 localhost podman[269080]: 2026-02-23 09:37:20.929425786 +0000 UTC m=+0.080137242 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:37:20 localhost podman[269080]: 2026-02-23 09:37:20.942869709 +0000 UTC m=+0.093581135 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 23 04:37:20 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:37:24 localhost python3.9[269212]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:37:25 localhost python3.9[269323]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:37:26 localhost python3.9[269434]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:37:27 localhost python3.9[269545]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:37:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15080 DF PROTO=TCP SPT=46464 DPT=9102 SEQ=2410757475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012B89830000000001030307) Feb 23 04:37:28 localhost python3.9[269656]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:37:28 localhost python3.9[269767]: 
ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:37:29 localhost python3.9[269878]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:37:30 localhost python3.9[269989]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:37:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:37:30 localhost podman[269991]: 2026-02-23 09:37:30.336324627 +0000 UTC m=+0.071122905 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': 
'/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:37:30 localhost podman[269991]: 2026-02-23 09:37:30.375033533 +0000 UTC m=+0.109831831 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 
04:37:30 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:37:31 localhost python3.9[270123]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:32 localhost openstack_network_exporter[243519]: ERROR 09:37:32 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:37:32 localhost openstack_network_exporter[243519]: Feb 23 04:37:32 localhost openstack_network_exporter[243519]: ERROR 09:37:32 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:37:32 localhost openstack_network_exporter[243519]: Feb 23 04:37:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:37:32 localhost podman[270234]: 2026-02-23 09:37:32.576727306 +0000 UTC m=+0.079006835 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, name=ubi9/ubi-minimal, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, container_name=openstack_network_exporter, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64) Feb 23 04:37:32 localhost podman[270234]: 2026-02-23 09:37:32.616890759 +0000 UTC m=+0.119170228 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, release=1770267347, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, config_id=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 23 04:37:32 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:37:32 localhost python3.9[270233]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:33 localhost python3.9[270363]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:33 localhost python3.9[270473]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:35 localhost python3.9[270583]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:35 localhost python3.9[270693]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:37:37 localhost podman[270804]: 2026-02-23 09:37:37.008611012 +0000 UTC m=+0.086713069 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Feb 23 04:37:37 localhost podman[270804]: 2026-02-23 09:37:37.084034098 +0000 UTC m=+0.162136205 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, managed_by=edpm_ansible, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 04:37:37 localhost 
systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:37:37 localhost python3.9[270803]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:37 localhost python3.9[270938]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:38 localhost python3.9[271048]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:39 localhost python3.9[271158]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Feb 23 04:37:39 localhost python3.9[271268]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:40 localhost python3.9[271378]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:40 localhost python3.9[271488]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:41 localhost python3.9[271598]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:42 localhost python3.9[271708]: ansible-ansible.builtin.file Invoked with 
path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3219 DF PROTO=TCP SPT=55440 DPT=9102 SEQ=612961428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012BC2E60000000001030307) Feb 23 04:37:42 localhost python3.9[271818]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:37:42 localhost podman[241086]: time="2026-02-23T09:37:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:37:42 localhost podman[241086]: @ - - [23/Feb/2026:09:37:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147347 "" "Go-http-client/1.1" Feb 23 04:37:42 localhost podman[241086]: @ - - [23/Feb/2026:09:37:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16310 "" "Go-http-client/1.1" Feb 23 04:37:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3220 DF PROTO=TCP SPT=55440 DPT=9102 
SEQ=612961428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012BC7040000000001030307) Feb 23 04:37:43 localhost python3.9[271928]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:37:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15081 DF PROTO=TCP SPT=46464 DPT=9102 SEQ=2410757475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012BC9840000000001030307) Feb 23 04:37:44 localhost python3.9[272038]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Feb 23 04:37:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3221 DF PROTO=TCP SPT=55440 DPT=9102 SEQ=612961428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012BCF030000000001030307) Feb 23 04:37:46 localhost python3.9[272148]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 23 04:37:46 localhost systemd[1]: Reloading. 
Feb 23 04:37:46 localhost systemd-sysv-generator[272174]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:37:46 localhost systemd-rc-local-generator[272171]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:37:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32511 DF PROTO=TCP SPT=35886 DPT=9102 SEQ=57957806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012BD3830000000001030307) Feb 23 04:37:46 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:37:46 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:37:46 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:37:46 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:37:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:37:46 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:37:46 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:37:46 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:37:46 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:37:47 localhost python3.9[272293]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:37:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:37:48.292 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:37:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:37:48.293 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:37:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:37:48.293 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:37:48 localhost python3.9[272404]: ansible-ansible.legacy.command Invoked with 
cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:37:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:37:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:37:48 localhost podman[272517]: 2026-02-23 09:37:48.993450128 +0000 UTC m=+0.090690640 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:37:49 localhost podman[272516]: 2026-02-23 09:37:49.039409913 +0000 UTC m=+0.137989580 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2) Feb 23 04:37:49 localhost podman[272516]: 2026-02-23 09:37:49.049708372 +0000 UTC m=+0.148288049 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 23 04:37:49 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:37:49 localhost podman[272517]: 2026-02-23 09:37:49.060770696 +0000 UTC m=+0.158011208 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:37:49 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:37:49 localhost python3.9[272515]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:37:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3222 DF PROTO=TCP SPT=55440 DPT=9102 SEQ=612961428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012BDEC30000000001030307) Feb 23 04:37:49 localhost python3.9[272661]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:37:50 localhost python3.9[272772]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False 
expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:37:51 localhost python3.9[272883]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:37:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:37:51 localhost podman[272885]: 2026-02-23 09:37:51.132143865 +0000 UTC m=+0.054694224 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 04:37:51 localhost podman[272885]: 2026-02-23 09:37:51.138613448 +0000 UTC m=+0.061163807 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 04:37:51 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. 
Feb 23 04:37:51 localhost sshd[272979]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:37:51 localhost python3.9[273019]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:37:52 localhost python3.9[273130]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 23 04:37:54 localhost nova_compute[229873]: 2026-02-23 09:37:54.181 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:37:54 localhost nova_compute[229873]: 2026-02-23 09:37:54.181 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 23 04:37:54 localhost nova_compute[229873]: 2026-02-23 09:37:54.216 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 23 04:37:54 localhost python3.9[273241]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:37:55 localhost python3.9[273351]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:37:56 localhost nova_compute[229873]: 2026-02-23 09:37:56.216 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:37:56 localhost python3.9[273461]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:37:57 localhost python3.9[273571]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:37:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=3223 DF PROTO=TCP SPT=55440 DPT=9102 SEQ=612961428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012BFF830000000001030307) Feb 23 04:37:58 localhost python3.9[273681]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:37:58 localhost nova_compute[229873]: 2026-02-23 09:37:58.182 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:37:58 localhost python3.9[273791]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 23 04:37:59 localhost python3.9[273967]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 23 04:38:00 localhost nova_compute[229873]: 2026-02-23 09:38:00.177 229877 DEBUG oslo_service.periodic_task [None 
req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:38:00 localhost nova_compute[229873]: 2026-02-23 09:38:00.180 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:38:00 localhost python3.9[274078]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 23 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:38:00 localhost systemd[1]: tmp-crun.bDFpJ7.mount: Deactivated successfully. 
Feb 23 04:38:00 localhost podman[274189]: 2026-02-23 09:38:00.94694658 +0000 UTC m=+0.104761553 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:38:00 localhost podman[274189]: 2026-02-23 09:38:00.982880544 +0000 UTC m=+0.140695507 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, 
config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:38:00 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:38:01 localhost python3.9[274188]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 23 04:38:01 localhost nova_compute[229873]: 2026-02-23 09:38:01.181 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:38:01 localhost nova_compute[229873]: 2026-02-23 09:38:01.198 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:38:01 localhost nova_compute[229873]: 2026-02-23 09:38:01.199 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:38:01 localhost nova_compute[229873]: 2026-02-23 09:38:01.199 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:38:01 localhost nova_compute[229873]: 2026-02-23 09:38:01.200 229877 DEBUG 
nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:38:01 localhost nova_compute[229873]: 2026-02-23 09:38:01.200 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:38:01 localhost nova_compute[229873]: 2026-02-23 09:38:01.664 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:38:01 localhost nova_compute[229873]: 2026-02-23 09:38:01.865 229877 WARNING nova.virt.libvirt.driver [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:38:01 localhost nova_compute[229873]: 2026-02-23 09:38:01.866 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=12899MB free_disk=41.83688735961914GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:38:01 localhost nova_compute[229873]: 2026-02-23 09:38:01.866 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:38:01 localhost nova_compute[229873]: 2026-02-23 09:38:01.867 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:38:01 localhost openstack_network_exporter[243519]: ERROR 09:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:38:01 localhost openstack_network_exporter[243519]: Feb 23 04:38:01 localhost openstack_network_exporter[243519]: ERROR 09:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:38:01 localhost openstack_network_exporter[243519]: Feb 23 04:38:02 localhost nova_compute[229873]: 2026-02-23 09:38:02.052 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:38:02 localhost nova_compute[229873]: 2026-02-23 09:38:02.052 229877 DEBUG 
nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:38:02 localhost nova_compute[229873]: 2026-02-23 09:38:02.067 229877 DEBUG nova.scheduler.client.report [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Refreshing inventories for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 23 04:38:02 localhost nova_compute[229873]: 2026-02-23 09:38:02.148 229877 DEBUG nova.scheduler.client.report [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Updating ProviderTree inventory for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 23 04:38:02 localhost nova_compute[229873]: 2026-02-23 09:38:02.149 229877 DEBUG nova.compute.provider_tree [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Updating inventory in ProviderTree for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 
'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 23 04:38:02 localhost nova_compute[229873]: 2026-02-23 09:38:02.161 229877 DEBUG nova.scheduler.client.report [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Refreshing aggregate associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 23 04:38:02 localhost nova_compute[229873]: 2026-02-23 09:38:02.193 229877 DEBUG nova.scheduler.client.report [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Refreshing trait associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, traits: HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI2,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_
MODEL_BOCHS,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 23 04:38:02 localhost nova_compute[229873]: 2026-02-23 09:38:02.206 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:38:02 localhost nova_compute[229873]: 2026-02-23 09:38:02.713 229877 DEBUG oslo_concurrency.processutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:38:02 localhost nova_compute[229873]: 2026-02-23 09:38:02.718 229877 DEBUG nova.compute.provider_tree [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:38:02 localhost nova_compute[229873]: 2026-02-23 09:38:02.743 229877 DEBUG nova.scheduler.client.report [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 
'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:38:02 localhost nova_compute[229873]: 2026-02-23 09:38:02.746 229877 DEBUG nova.compute.resource_tracker [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:38:02 localhost nova_compute[229873]: 2026-02-23 09:38:02.746 229877 DEBUG oslo_concurrency.lockutils [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:38:02 localhost nova_compute[229873]: 2026-02-23 09:38:02.747 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:38:02 localhost nova_compute[229873]: 2026-02-23 09:38:02.747 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 23 04:38:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:38:03 localhost systemd[1]: tmp-crun.oiDMdR.mount: Deactivated successfully. 
Feb 23 04:38:03 localhost podman[274273]: 2026-02-23 09:38:03.015129933 +0000 UTC m=+0.089212331 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vcs-type=git, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, distribution-scope=public, release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc.) Feb 23 04:38:03 localhost podman[274273]: 2026-02-23 09:38:03.031862494 +0000 UTC m=+0.105944852 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-type=git, release=1770267347, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 04:38:03 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:38:03 localhost nova_compute[229873]: 2026-02-23 09:38:03.759 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:38:03 localhost nova_compute[229873]: 2026-02-23 09:38:03.831 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:38:03 localhost nova_compute[229873]: 2026-02-23 09:38:03.831 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:38:03 localhost nova_compute[229873]: 2026-02-23 09:38:03.832 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:38:03 localhost nova_compute[229873]: 2026-02-23 09:38:03.931 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Didn't find any instances for network 
info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:38:03 localhost nova_compute[229873]: 2026-02-23 09:38:03.932 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:38:03 localhost nova_compute[229873]: 2026-02-23 09:38:03.932 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:38:03 localhost nova_compute[229873]: 2026-02-23 09:38:03.933 229877 DEBUG nova.compute.manager [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:38:04 localhost nova_compute[229873]: 2026-02-23 09:38:04.181 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:38:06 localhost nova_compute[229873]: 2026-02-23 09:38:06.201 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:38:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:38:08 localhost systemd[1]: tmp-crun.kqguZQ.mount: Deactivated successfully. 
Feb 23 04:38:08 localhost podman[274313]: 2026-02-23 09:38:08.016672574 +0000 UTC m=+0.095256660 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260216) Feb 23 04:38:08 localhost podman[274313]: 2026-02-23 09:38:08.056703603 +0000 UTC m=+0.135287659 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, 
config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS) Feb 23 04:38:08 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:38:09 localhost python3.9[274429]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Feb 23 04:38:11 localhost sshd[274448]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:38:11 localhost systemd-logind[759]: New session 59 of user zuul. Feb 23 04:38:11 localhost systemd[1]: Started Session 59 of User zuul. Feb 23 04:38:11 localhost systemd[1]: session-59.scope: Deactivated successfully. Feb 23 04:38:11 localhost systemd-logind[759]: Session 59 logged out. Waiting for processes to exit. Feb 23 04:38:11 localhost systemd-logind[759]: Removed session 59. 
Feb 23 04:38:12 localhost python3.9[274559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:38:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45049 DF PROTO=TCP SPT=41872 DPT=9102 SEQ=2811167406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012C38160000000001030307) Feb 23 04:38:12 localhost python3.9[274614]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:38:12 localhost podman[241086]: time="2026-02-23T09:38:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:38:12 localhost podman[241086]: @ - - [23/Feb/2026:09:38:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147347 "" "Go-http-client/1.1" Feb 23 04:38:12 localhost podman[241086]: @ - - [23/Feb/2026:09:38:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16306 "" "Go-http-client/1.1" Feb 23 04:38:13 localhost python3.9[274722]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:38:13 localhost kernel: DROPPING: 
IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45050 DF PROTO=TCP SPT=41872 DPT=9102 SEQ=2811167406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012C3C040000000001030307) Feb 23 04:38:13 localhost python3.9[274808]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771839492.6715608-2357-173753346462362/.source _original_basename=ssh-config follow=False checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:38:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3224 DF PROTO=TCP SPT=55440 DPT=9102 SEQ=612961428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012C3F830000000001030307) Feb 23 04:38:14 localhost python3.9[274916]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:38:14 localhost python3.9[275002]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771839493.8054123-2357-26543978644302/.source.py _original_basename=nova_statedir_ownership.py follow=False checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None 
seuser=None serole=None selevel=None attributes=None Feb 23 04:38:15 localhost python3.9[275110]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:38:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45051 DF PROTO=TCP SPT=41872 DPT=9102 SEQ=2811167406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012C44030000000001030307) Feb 23 04:38:15 localhost python3.9[275196]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771839494.8238144-2357-121613451888163/.source _original_basename=run-on-host follow=False checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:38:15 localhost sshd[275214]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:38:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15082 DF PROTO=TCP SPT=46464 DPT=9102 SEQ=2410757475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012C47840000000001030307) Feb 23 04:38:16 localhost python3.9[275306]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:38:16 localhost python3.9[275392]: ansible-ansible.legacy.copy Invoked with 
dest=/var/lib/openstack/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771839496.0121756-2518-250517037334892/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=95a2c7dca6af5923d2d1d47aee71aa571417ed85 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:38:17 localhost python3.9[275502]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:38:18 localhost python3.9[275612]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:38:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45052 DF PROTO=TCP SPT=41872 DPT=9102 SEQ=2811167406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012C53C30000000001030307) Feb 23 04:38:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. 
Feb 23 04:38:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:38:19 localhost systemd[1]: tmp-crun.Zq508M.mount: Deactivated successfully. Feb 23 04:38:19 localhost podman[275723]: 2026-02-23 09:38:19.498236194 +0000 UTC m=+0.091937351 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS) Feb 23 04:38:19 localhost podman[275723]: 2026-02-23 09:38:19.53088947 +0000 UTC m=+0.124590617 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 23 04:38:19 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:38:19 localhost systemd[1]: tmp-crun.iFfHS7.mount: Deactivated successfully. Feb 23 04:38:19 localhost python3.9[275722]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:38:19 localhost podman[275724]: 2026-02-23 09:38:19.596112659 +0000 UTC m=+0.190599152 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Feb 23 04:38:19 localhost podman[275724]: 2026-02-23 09:38:19.609849132 +0000 UTC m=+0.204335605 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:38:19 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:38:20 localhost python3.9[275871]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:38:21 localhost python3.9[275979]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:38:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 04:38:21 localhost podman[275999]: 2026-02-23 09:38:21.989228782 +0000 UTC m=+0.068129337 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 23 04:38:22 localhost podman[275999]: 2026-02-23 09:38:22.00283826 +0000 UTC m=+0.081738815 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 04:38:22 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:38:22 localhost python3.9[276115]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:38:23 localhost python3.9[276225]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:38:24 localhost python3.9[276333]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute_init state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:38:26 localhost python3.9[276639]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute_init config_pattern=*.json debug=False Feb 23 04:38:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c 
MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45053 DF PROTO=TCP SPT=41872 DPT=9102 SEQ=2811167406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012C73830000000001030307) Feb 23 04:38:27 localhost python3.9[276749]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 23 04:38:28 localhost python3[276859]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute_init config_id=nova_compute_init config_overrides={} config_patterns=*.json containers=['nova_compute_init'] log_base_path=/var/log/containers/stdouts debug=False Feb 23 04:38:29 localhost python3[276859]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "72feed39d002da96e9458f5df3225bc8b72f1ae28f906a4ea01e253f86aab9e3",#012 "Digest": "sha256:60339e5e0cd7bfe18718bee79174c18ef91b932586fd96f01b9799d5d120385d",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:60339e5e0cd7bfe18718bee79174c18ef91b932586fd96f01b9799d5d120385d"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2026-02-23T06:27:42.035349623Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.43.0",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260216",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 
"tcib_build_tag": "8419493e1fd846703d277695e03fc5eb",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1216089983,#012 "VirtualSize": 1216089983,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/239567307c66a10c4dd721df6a9263fcc38501437d275d2b4907c616b635d111/diff:/var/lib/containers/storage/overlay/0455f1f13172510bfb03afa514ad1dc5f28a2039a4c0ae85e44e0cde63814ca4/diff:/var/lib/containers/storage/overlay/882df85a0cf43e46bc799aafd5ff81035654b304c2fef5dbd26c9dd0c2e9fcc3/diff:/var/lib/containers/storage/overlay/d9f14c75a7289cf010d2e5175c554193dba109f864fe39fc418f3bc5b90efe9d/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/7e5a6b3af0e35b266ef2f57ba1f524615772066427004655f3e99c4f9072865c/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/7e5a6b3af0e35b266ef2f57ba1f524615772066427004655f3e99c4f9072865c/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:d9f14c75a7289cf010d2e5175c554193dba109f864fe39fc418f3bc5b90efe9d",#012 "sha256:6eb5d45c6942983139aec78264b4b68bafe46465bb40e2bb4c09e78dad8ba6c0",#012 "sha256:9a59f9675e4fdfdb0eaa24dcce26bed374feef6430ea888b6f5ef1274a95bd90",#012 "sha256:5511acb0625eca242fd47549a8bafd7826358a029c48a9158ddd6fa2b7e0b86d",#012 "sha256:1f1e90f8b2058c74071fe0298f6d20f4d1edbde3bdd940d26fcd35c036f677a8"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.43.0",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260216",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "8419493e1fd846703d277695e03fc5eb",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": 
"application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2026-02-17T01:25:07.246646992Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:d064f128d9bf147a386d5c0e8c2e8a6f698c81fb4e2404e09afe5ef1e1d3b529 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-17T01:25:07.246739119Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20260216\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-17T01:25:12.132997501Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2026-02-23T06:08:39.081651802Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081666472Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081677733Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081688343Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081701553Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081710413Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.413481757Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": 
"2026-02-23T06:09:13.490649497Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Feb 23 04:38:29 localhost podman[276912]: 2026-02-23 09:38:29.200756339 +0000 UTC m=+0.063373040 container remove 3be7d315f599dc812d2d03af87c7fb706be7c60341f29f7efc5a148f2848a784 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, container_name=nova_compute_init, config_id=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260216, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 
'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}) Feb 23 04:38:29 localhost python3[276859]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute_init Feb 23 04:38:29 localhost podman[276925]: Feb 23 04:38:29 localhost podman[276925]: 2026-02-23 09:38:29.305980697 +0000 UTC m=+0.087428903 container create 0f75359bbedbb856360b80577e0836daf6bc3d37696c0c9e1d48bd5de62ba66a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_id=nova_compute_init, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '39bdf96219731fcdb12ece2c9e8f4c9cac67a793472c97da56d1693a5ee4a93a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, container_name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Feb 23 04:38:29 localhost podman[276925]: 2026-02-23 
09:38:29.263324651 +0000 UTC m=+0.044772877 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Feb 23 04:38:29 localhost python3[276859]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --env EDPM_CONFIG_HASH=39bdf96219731fcdb12ece2c9e8f4c9cac67a793472c97da56d1693a5ee4a93a --label config_id=nova_compute_init --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '39bdf96219731fcdb12ece2c9e8f4c9cac67a793472c97da56d1693a5ee4a93a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init Feb 23 04:38:30 localhost python3.9[277071]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True 
get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:38:31 localhost python3.9[277181]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 23 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:38:31 localhost openstack_network_exporter[243519]: ERROR 09:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:38:31 localhost openstack_network_exporter[243519]: Feb 23 04:38:31 localhost openstack_network_exporter[243519]: ERROR 09:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:38:31 localhost openstack_network_exporter[243519]: Feb 23 04:38:32 localhost podman[277199]: 2026-02-23 09:38:32.022822209 +0000 UTC m=+0.090115171 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 
'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:38:32 localhost podman[277199]: 2026-02-23 09:38:32.036886492 +0000 UTC m=+0.104179454 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:38:32 localhost systemd[1]: 
4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:38:32 localhost python3.9[277315]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:38:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:38:33 localhost systemd[1]: tmp-crun.oA1Vuu.mount: Deactivated successfully. Feb 23 04:38:33 localhost podman[277406]: 2026-02-23 09:38:33.218502086 +0000 UTC m=+0.094633290 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-type=git, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, architecture=x86_64) Feb 23 04:38:33 localhost podman[277406]: 2026-02-23 09:38:33.230560643 +0000 UTC m=+0.106691867 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, release=1770267347, architecture=x86_64, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 23 04:38:33 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:38:33 localhost python3.9[277405]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771839512.242634-2989-263667278085220/.source.yaml _original_basename=.92ksswqy follow=False checksum=f9aa9ce623bd0367523b1516d0fd40e0aad40b65 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:38:34 localhost python3.9[277537]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:38:35 localhost python3.9[277647]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 23 04:38:36 localhost python3.9[277757]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:38:36 localhost python3.9[277814]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/nova_compute.json _original_basename=.cegroeb2 recurse=False state=file path=/var/lib/kolla/config_files/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:38:37 localhost python3.9[277922]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:38:38 localhost sshd[278066]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:38:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:38:39 localhost podman[278087]: 2026-02-23 09:38:39.039088262 +0000 UTC m=+0.113085669 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.43.0) Feb 23 04:38:39 localhost podman[278087]: 2026-02-23 09:38:39.099216723 +0000 UTC m=+0.173214060 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0) Feb 23 04:38:39 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:38:40 localhost python3.9[278255]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute config_pattern=*.json debug=False Feb 23 04:38:41 localhost python3.9[278365]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 23 04:38:41 localhost sshd[278383]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:38:42 localhost python3[278477]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute config_id=nova_compute config_overrides={} config_patterns=*.json containers=['nova_compute'] log_base_path=/var/log/containers/stdouts debug=False Feb 23 04:38:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30780 DF PROTO=TCP SPT=38026 DPT=9102 SEQ=290784186 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012CAD460000000001030307) Feb 23 04:38:42 localhost python3[278477]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "72feed39d002da96e9458f5df3225bc8b72f1ae28f906a4ea01e253f86aab9e3",#012 "Digest": "sha256:60339e5e0cd7bfe18718bee79174c18ef91b932586fd96f01b9799d5d120385d",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:60339e5e0cd7bfe18718bee79174c18ef91b932586fd96f01b9799d5d120385d"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2026-02-23T06:27:42.035349623Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 
"kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.43.0",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260216",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "8419493e1fd846703d277695e03fc5eb",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1216089983,#012 "VirtualSize": 1216089983,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/239567307c66a10c4dd721df6a9263fcc38501437d275d2b4907c616b635d111/diff:/var/lib/containers/storage/overlay/0455f1f13172510bfb03afa514ad1dc5f28a2039a4c0ae85e44e0cde63814ca4/diff:/var/lib/containers/storage/overlay/882df85a0cf43e46bc799aafd5ff81035654b304c2fef5dbd26c9dd0c2e9fcc3/diff:/var/lib/containers/storage/overlay/d9f14c75a7289cf010d2e5175c554193dba109f864fe39fc418f3bc5b90efe9d/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/7e5a6b3af0e35b266ef2f57ba1f524615772066427004655f3e99c4f9072865c/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/7e5a6b3af0e35b266ef2f57ba1f524615772066427004655f3e99c4f9072865c/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:d9f14c75a7289cf010d2e5175c554193dba109f864fe39fc418f3bc5b90efe9d",#012 "sha256:6eb5d45c6942983139aec78264b4b68bafe46465bb40e2bb4c09e78dad8ba6c0",#012 "sha256:9a59f9675e4fdfdb0eaa24dcce26bed374feef6430ea888b6f5ef1274a95bd90",#012 "sha256:5511acb0625eca242fd47549a8bafd7826358a029c48a9158ddd6fa2b7e0b86d",#012 "sha256:1f1e90f8b2058c74071fe0298f6d20f4d1edbde3bdd940d26fcd35c036f677a8"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.43.0",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": 
"20260216",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "8419493e1fd846703d277695e03fc5eb",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2026-02-17T01:25:07.246646992Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:d064f128d9bf147a386d5c0e8c2e8a6f698c81fb4e2404e09afe5ef1e1d3b529 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-17T01:25:07.246739119Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20260216\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-17T01:25:12.132997501Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2026-02-23T06:08:39.081651802Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081666472Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081677733Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081688343Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081701553Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.081710413Z",#012 "created_by": "/bin/sh -c #(nop) 
USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:08:39.413481757Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-02-23T06:09:13.490649497Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Feb 23 04:38:42 localhost podman[241086]: time="2026-02-23T09:38:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:38:42 localhost podman[241086]: @ - - [23/Feb/2026:09:38:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147354 "" "Go-http-client/1.1" Feb 23 04:38:42 localhost podman[241086]: @ - - [23/Feb/2026:09:38:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16187 "" "Go-http-client/1.1" Feb 23 04:38:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30781 DF PROTO=TCP SPT=38026 DPT=9102 SEQ=290784186 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012CB1440000000001030307) Feb 23 04:38:43 localhost nova_compute[229873]: 2026-02-23 09:38:43.716 229877 DEBUG oslo_service.periodic_task [None req-e5cf2bda-8ef2-4803-bdd3-2f9e0e3cde4d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:38:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45054 DF PROTO=TCP SPT=41872 DPT=9102 SEQ=2811167406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012CB3830000000001030307) Feb 23 04:38:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30782 DF PROTO=TCP SPT=38026 DPT=9102 SEQ=290784186 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012CB9430000000001030307) Feb 23 04:38:45 localhost nova_compute[229873]: 2026-02-23 09:38:45.741 229877 WARNING amqp [-] Received method (60, 30) during closing channel 1. 
This method will be ignored#033[00m Feb 23 04:38:45 localhost nova_compute[229873]: 2026-02-23 09:38:45.743 229877 DEBUG oslo_concurrency.lockutils [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:38:45 localhost nova_compute[229873]: 2026-02-23 09:38:45.743 229877 DEBUG oslo_concurrency.lockutils [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:38:45 localhost nova_compute[229873]: 2026-02-23 09:38:45.744 229877 DEBUG oslo_concurrency.lockutils [None req-812d24d6-8162-4fab-95c3-67c0da27b714 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:38:46 localhost journal[228928]: End of file while reading data: Input/output error Feb 23 04:38:46 localhost systemd[1]: libpod-6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3.scope: Deactivated successfully. Feb 23 04:38:46 localhost systemd[1]: libpod-6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3.scope: Consumed 15.344s CPU time. 
Feb 23 04:38:46 localhost podman[278527]: 2026-02-23 09:38:46.115351333 +0000 UTC m=+3.698522099 container died 6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.license=GPLv2, config_id=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260216) Feb 23 04:38:46 localhost systemd[1]: 
var-lib-containers-storage-overlay\x2dcontainers-6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3-userdata-shm.mount: Deactivated successfully. Feb 23 04:38:46 localhost podman[278527]: 2026-02-23 09:38:46.277124354 +0000 UTC m=+3.860295150 container cleanup 6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
container_name=nova_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_id=nova_compute) Feb 23 04:38:46 localhost python3[278477]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop nova_compute Feb 23 04:38:46 localhost podman[278540]: 2026-02-23 09:38:46.289187802 +0000 UTC m=+0.158561257 container cleanup 6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', 
'/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute) Feb 23 04:38:46 localhost podman[278557]: 2026-02-23 09:38:46.403985205 +0000 UTC m=+0.087566427 container remove 6d2451cb4fe8c2539498df6ee33c5aa914de117b02387bc0fa2dd7e4fdc263e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=nova_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=nova_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-6b39f93504c76e9a3e663e0141bfc889f6a01f67720151225a4bd3a4b559dd52'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', 
'/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260216) Feb 23 04:38:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3225 DF PROTO=TCP SPT=55440 DPT=9102 SEQ=612961428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012CBD830000000001030307) Feb 23 04:38:46 localhost python3[278477]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute Feb 23 04:38:46 localhost podman[278559]: Error: no container with name or ID "nova_compute" found: no such container Feb 23 04:38:46 localhost systemd[1]: edpm_nova_compute.service: Control process exited, code=exited, status=125/n/a Feb 23 04:38:46 localhost podman[278588]: Error: no container with name or ID "nova_compute" found: no such container Feb 23 04:38:46 localhost systemd[1]: edpm_nova_compute.service: Control process exited, code=exited, status=125/n/a Feb 23 04:38:46 localhost systemd[1]: edpm_nova_compute.service: Failed with result 'exit-code'. 
Feb 23 04:38:46 localhost podman[278582]: Feb 23 04:38:46 localhost podman[278582]: 2026-02-23 09:38:46.52185681 +0000 UTC m=+0.099968936 container create bb61efde9b16b4d4e86e848c48713785c212b4b27e9b2e74b52cab54af525d3d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, config_id=nova_compute, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-39bdf96219731fcdb12ece2c9e8f4c9cac67a793472c97da56d1693a5ee4a93a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.license=GPLv2, tcib_managed=true) Feb 23 04:38:46 
localhost podman[278582]: 2026-02-23 09:38:46.47604943 +0000 UTC m=+0.054161616 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Feb 23 04:38:46 localhost python3[278477]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-39bdf96219731fcdb12ece2c9e8f4c9cac67a793472c97da56d1693a5ee4a93a --label config_id=nova_compute --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-39bdf96219731fcdb12ece2c9e8f4c9cac67a793472c97da56d1693a5ee4a93a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network 
host --pid host --privileged=True --user nova --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start Feb 23 04:38:46 localhost systemd[1]: edpm_nova_compute.service: Scheduled restart job, restart counter is at 1. Feb 23 04:38:46 localhost systemd[1]: Started libpod-conmon-bb61efde9b16b4d4e86e848c48713785c212b4b27e9b2e74b52cab54af525d3d.scope. Feb 23 04:38:46 localhost systemd[1]: Stopped nova_compute container. Feb 23 04:38:46 localhost systemd[1]: Starting nova_compute container... Feb 23 04:38:46 localhost systemd[1]: Started libcrun container. 
Feb 23 04:38:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5e6a64682fef67a5b95faa2c737e0d834749a84d92d4cd119cef0a77417e3c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Feb 23 04:38:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5e6a64682fef67a5b95faa2c737e0d834749a84d92d4cd119cef0a77417e3c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Feb 23 04:38:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5e6a64682fef67a5b95faa2c737e0d834749a84d92d4cd119cef0a77417e3c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 04:38:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5e6a64682fef67a5b95faa2c737e0d834749a84d92d4cd119cef0a77417e3c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Feb 23 04:38:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5e6a64682fef67a5b95faa2c737e0d834749a84d92d4cd119cef0a77417e3c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 23 04:38:46 localhost podman[278606]: 2026-02-23 09:38:46.688628506 +0000 UTC m=+0.150389667 container init bb61efde9b16b4d4e86e848c48713785c212b4b27e9b2e74b52cab54af525d3d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-39bdf96219731fcdb12ece2c9e8f4c9cac67a793472c97da56d1693a5ee4a93a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute) Feb 23 04:38:46 localhost podman[278606]: 2026-02-23 09:38:46.699251636 +0000 UTC m=+0.161012807 container start bb61efde9b16b4d4e86e848c48713785c212b4b27e9b2e74b52cab54af525d3d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-39bdf96219731fcdb12ece2c9e8f4c9cac67a793472c97da56d1693a5ee4a93a'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS) Feb 23 04:38:46 localhost nova_compute[278622]: + sudo -E kolla_set_configs Feb 23 04:38:46 localhost python3[278477]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start nova_compute Feb 23 04:38:46 localhost systemd[1]: Started nova_compute container. 
Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Validating config file Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Copying service configuration files Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Deleting /etc/nova/nova.conf Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /etc/nova/nova.conf Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Copying /var/lib/kolla/config_files/src/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Feb 23 04:38:46 localhost nova_compute[278622]: 
INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Deleting /etc/ceph Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Creating directory /etc/ceph Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /etc/ceph Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Deleting /usr/sbin/iscsiadm Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Copying 
/var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Writing out command to execute Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 23 04:38:46 localhost nova_compute[278622]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 23 04:38:46 localhost nova_compute[278622]: ++ cat /run_command Feb 23 04:38:46 localhost nova_compute[278622]: + CMD=nova-compute Feb 23 04:38:46 localhost nova_compute[278622]: + ARGS= Feb 23 04:38:46 localhost nova_compute[278622]: + sudo kolla_copy_cacerts Feb 23 04:38:46 localhost nova_compute[278622]: + [[ ! -n '' ]] Feb 23 04:38:46 localhost nova_compute[278622]: + . kolla_extend_start Feb 23 04:38:46 localhost nova_compute[278622]: + echo 'Running command: '\''nova-compute'\''' Feb 23 04:38:46 localhost nova_compute[278622]: Running command: 'nova-compute' Feb 23 04:38:46 localhost nova_compute[278622]: + umask 0022 Feb 23 04:38:46 localhost nova_compute[278622]: + exec nova-compute Feb 23 04:38:47 localhost systemd[1]: var-lib-containers-storage-overlay-15ca6780693012a594e87959f4dd1a217af451f5d8176199e7296056d94fc104-merged.mount: Deactivated successfully. 
Feb 23 04:38:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:38:48.292 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:38:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:38:48.293 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:38:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:38:48.294 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:38:48 localhost python3.9[278780]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:38:48 localhost nova_compute[278622]: 2026-02-23 09:38:48.460 278638 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 23 04:38:48 localhost nova_compute[278622]: 2026-02-23 09:38:48.460 278638 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 23 04:38:48 localhost nova_compute[278622]: 2026-02-23 09:38:48.460 278638 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 23 04:38:48 localhost nova_compute[278622]: 2026-02-23 09:38:48.461 278638 INFO os_vif [-] Loaded VIF 
plugins: linux_bridge, noop, ovs#033[00m Feb 23 04:38:48 localhost nova_compute[278622]: 2026-02-23 09:38:48.578 278638 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:38:48 localhost nova_compute[278622]: 2026-02-23 09:38:48.599 278638 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:38:48 localhost nova_compute[278622]: 2026-02-23 09:38:48.600 278638 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Feb 23 04:38:48 localhost nova_compute[278622]: 2026-02-23 09:38:48.994 278638 INFO nova.virt.driver [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.147 278638 INFO nova.compute.provider_config [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] No provider configs found in /etc/nova/provider_config/. 
If files are present, ensure the Nova process has access.#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.157 278638 DEBUG oslo_concurrency.lockutils [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.157 278638 DEBUG oslo_concurrency.lockutils [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.157 278638 DEBUG oslo_concurrency.lockutils [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.158 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.158 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.158 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.158 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] command line args: [] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.158 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.159 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.159 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.159 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.159 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.159 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.159 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] 
block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.159 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.160 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.160 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.160 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.160 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.160 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.160 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] config_file = ['/etc/nova/nova.conf', 
'/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.160 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.161 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] console_host = np0005626465.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.161 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.161 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.161 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.161 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.161 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] default_access_ip_network_name = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.161 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.162 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.162 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.162 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.162 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 
04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.162 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.162 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.163 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.163 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.163 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.163 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.163 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.163 278638 DEBUG 
oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.164 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] host = np0005626465.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.164 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.164 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.164 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.164 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.164 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 
09:38:49.164 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.165 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.165 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.165 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.165 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.165 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.165 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.166 
278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.166 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.166 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.166 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.166 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.166 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.166 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.167 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] 
log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.167 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.167 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.167 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.167 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.167 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.167 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.168 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.168 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.168 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.168 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.168 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.168 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.168 278638 DEBUG oslo_service.service [None 
req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.169 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.169 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.169 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.169 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.169 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.169 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.169 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] migrate_max_retries = -1 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.170 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.170 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] my_block_storage_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.170 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] my_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.170 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.170 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.170 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.170 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] osapi_compute_listen_port = 8774 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.171 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.171 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.171 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.171 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.171 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.171 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.171 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 
localhost nova_compute[278622]: 2026-02-23 09:38:49.172 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.172 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.172 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.172 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.172 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.172 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.172 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.173 278638 DEBUG 
oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.173 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.173 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.173 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.173 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.173 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.173 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.174 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] reserved_host_memory_mb = 512 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.174 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.174 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.174 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.174 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.174 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.174 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.174 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] run_external_periodic_tasks = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.175 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.175 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.175 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.175 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.175 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.175 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.175 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] shelved_offload_time = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.176 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.176 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.176 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.176 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.176 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.176 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.176 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost 
nova_compute[278622]: 2026-02-23 09:38:49.177 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.177 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.177 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.177 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.177 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.177 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.177 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.178 278638 DEBUG oslo_service.service [None 
req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.178 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.178 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.178 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.178 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.178 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.179 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.179 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vif_plugging_timeout = 300 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.179 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.179 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.179 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.179 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.179 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.180 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.180 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.180 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.180 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.180 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.180 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.180 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.181 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.181 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] 
api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.181 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.181 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.181 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.181 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.181 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.182 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.182 278638 DEBUG 
oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.182 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.182 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.182 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.182 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.182 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.183 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.183 278638 DEBUG oslo_service.service [None 
req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.183 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.183 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.183 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.183 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.183 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.184 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.184 278638 DEBUG oslo_service.service 
[None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.184 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.184 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.184 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.184 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.184 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.185 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.185 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - 
- - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.185 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.185 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.185 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.185 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.186 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.186 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.186 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.memcache_pool_flush_on_reconnect = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.186 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.186 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.186 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.186 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.187 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.187 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.187 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.proxies = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.187 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.187 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.187 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.187 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.188 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.188 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.188 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 
23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.188 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.188 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.188 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.188 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.189 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.189 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.189 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.189 
278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.189 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.189 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.190 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.190 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.190 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.190 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.190 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] 
cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.190 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cinder.os_region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.190 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.191 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.191 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.191 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.191 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.191 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] compute.image_type_exclude_list = [] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.191 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.191 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.192 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.192 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.192 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.192 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.192 278638 DEBUG oslo_service.service [None 
req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.192 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.192 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.193 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.193 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.193 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.193 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.193 278638 DEBUG oslo_service.service [None 
req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.193 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.193 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.194 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.194 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.194 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.194 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.194 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.keyfile = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.194 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.194 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.194 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.195 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.195 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.195 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.195 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.status_code_retries = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.195 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.195 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.195 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.196 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.196 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.196 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.196 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.connection_debug = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.196 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.196 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.196 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.197 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.197 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.197 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.197 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.db_retry_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.197 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.197 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.197 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.198 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.198 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.198 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.198 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.pool_timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.198 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.198 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.198 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.199 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.199 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.199 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.199 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.connection_parameters = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.199 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.199 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.200 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.200 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.200 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.200 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.200 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.max_overflow = 50 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.201 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.201 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.201 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.201 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.201 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.201 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.201 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.retry_interval = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.202 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.202 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.202 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.202 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.202 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.202 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.202 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.api_servers = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.203 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.203 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.203 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.203 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.203 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.203 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.204 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 
04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.204 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.204 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.204 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.204 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.204 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.204 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.205 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.205 
278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.205 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.205 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.205 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.205 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.205 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.206 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.206 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] 
glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.206 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.206 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.206 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.206 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.206 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.207 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.207 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] glance.version = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.207 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.207 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.207 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.207 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.207 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.208 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.208 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.instances_path_share = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.208 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.208 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.208 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.208 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.208 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.209 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.209 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.209 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.209 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.209 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.209 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.209 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.210 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.210 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.210 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.210 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.210 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.211 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.211 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.211 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.211 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] 
ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.211 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.211 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.211 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.212 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.212 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.212 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.212 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.connect_retries = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.212 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.212 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.212 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.213 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.213 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.213 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.213 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 
localhost nova_compute[278622]: 2026-02-23 09:38:49.213 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.213 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.213 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.214 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.214 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.214 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.214 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.214 278638 
DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.214 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.214 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.215 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.215 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.215 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.215 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.215 278638 DEBUG oslo_service.service [None 
req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.215 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost python3.9[278896]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.215 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.216 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.barbican_region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.216 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.216 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.216 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.216 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.216 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.217 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.217 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.217 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.217 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.217 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.217 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.217 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.218 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.218 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.218 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.218 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost 
nova_compute[278622]: 2026-02-23 09:38:49.218 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.218 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.218 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.219 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.219 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.219 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.219 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost 
nova_compute[278622]: 2026-02-23 09:38:49.219 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.219 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.220 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.220 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.220 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.220 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.220 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.220 278638 DEBUG oslo_service.service [None 
req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.220 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.221 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.221 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.221 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.221 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.221 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.221 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.cafile = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.222 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.222 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.222 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.222 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.222 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.222 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.223 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.keyfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.223 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.223 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.223 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.223 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.223 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.223 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.224 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.224 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.224 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.224 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.224 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.224 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.225 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.225 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 
2026-02-23 09:38:49.225 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.225 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.225 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.225 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.225 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.226 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.226 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 
2026-02-23 09:38:49.226 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.226 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.226 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.226 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.227 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.227 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.227 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.227 278638 DEBUG oslo_service.service [None 
req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.227 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.227 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.227 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.228 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.228 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.228 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.228 278638 DEBUG 
oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.228 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.228 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.228 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.229 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.229 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.229 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.229 278638 DEBUG oslo_service.service [None 
req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.229 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.230 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.230 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.230 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.230 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.230 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.230 278638 DEBUG 
oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.230 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.231 278638 WARNING oslo_config.cfg [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Feb 23 04:38:49 localhost nova_compute[278622]: live_migration_uri is deprecated for removal in favor of two other options that Feb 23 04:38:49 localhost nova_compute[278622]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Feb 23 04:38:49 localhost nova_compute[278622]: and ``live_migration_inbound_addr`` respectively. Feb 23 04:38:49 localhost nova_compute[278622]: ). 
Its value may be silently ignored in the future.#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.231 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.231 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.231 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.231 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.231 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.232 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.232 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] 
libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.232 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.232 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.232 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.232 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.232 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.233 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.233 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.quobyte_client_cfg = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.233 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.233 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.233 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.233 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.234 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.rbd_secret_uuid = f1fea371-cb69-578d-a3d0-b5c472a84b46 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.234 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.234 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.234 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.234 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.234 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.234 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.235 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.235 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.235 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.235 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.235 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.235 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.236 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.236 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.236 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.236 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.236 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.236 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.236 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.237 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.237 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.237 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.237 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.237 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.237 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.237 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.238 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.238 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.238 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.238 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.238 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.238 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.239 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.239 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.239 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.239 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.239 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.239 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.239 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.240 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.240 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.240 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.240 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.240 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.240 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.240 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.240 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.241 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.241 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.241 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.241 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.241 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.241 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.241 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.242 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.242 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.242 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.242 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.242 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.242 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.242 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.243 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.243 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.243 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.243 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.243 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.243 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.243 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.244 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.244 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.244 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.244 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.244 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.244 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.244 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.245 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.245 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.245 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.245 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.245 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.245 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.245 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.246 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.246 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.246 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.246 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.246 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.246 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.246 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.247 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.247 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.247 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.247 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.247 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.247 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.247 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.248 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.248 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.248 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.248 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.248 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.248 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.248 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.249 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.249 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.249 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.249 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.249 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.249 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.249 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.250 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.250 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.250 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.250 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.250 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.250 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.250 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.251 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.251 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.251 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.251 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.251 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.251 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.252 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.252 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.252 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.252 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.252 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.252 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.252 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.253 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.253 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.253 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.253 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.253 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.253 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.253 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.254 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.254 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.254 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.254 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.254 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.254 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.254 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.255 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.255 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.255 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] 
filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.255 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.255 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.255 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.255 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.256 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.256 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.256 278638 DEBUG 
oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.256 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.256 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.256 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.256 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.257 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.257 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 
2026-02-23 09:38:49.257 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.257 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.257 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.257 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.258 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.258 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.258 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost 
nova_compute[278622]: 2026-02-23 09:38:49.258 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.258 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.258 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.258 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.259 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.259 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.259 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.259 
278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.259 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.259 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.259 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.260 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.260 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.260 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.260 278638 DEBUG oslo_service.service 
[None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.260 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.260 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.260 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.261 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.261 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.261 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.261 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a 
- - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.261 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.261 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.261 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.262 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.262 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.262 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.262 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] 
vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.262 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.262 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.262 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.263 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.263 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.263 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.263 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a 
- - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.263 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.263 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.263 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.264 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.264 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.264 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.264 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.host_ip = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.264 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.264 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.264 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.264 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.265 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.265 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.265 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 
04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.265 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.265 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.265 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.266 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.266 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.266 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.266 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 
2026-02-23 09:38:49.266 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.266 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.266 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.267 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.267 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.267 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.267 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 
09:38:49.267 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.267 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.268 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.268 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.268 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.268 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.268 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.268 278638 DEBUG 
oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.269 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.269 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.269 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.269 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.269 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.269 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 
localhost nova_compute[278622]: 2026-02-23 09:38:49.270 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.270 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.270 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.270 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.270 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.270 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.271 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.271 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.271 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.271 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.271 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.271 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.272 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.272 278638 DEBUG oslo_service.service [None 
req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.272 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.272 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.272 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.273 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.273 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.273 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.273 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] 
wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.273 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.274 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.274 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.274 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.274 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.275 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.275 278638 DEBUG oslo_service.service [None 
req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.275 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.275 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.275 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.276 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.276 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.276 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.276 278638 DEBUG 
oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.277 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.277 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.277 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.277 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.277 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.278 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost 
nova_compute[278622]: 2026-02-23 09:38:49.278 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.278 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.278 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.278 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.279 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.279 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.279 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.279 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.280 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.280 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.280 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.280 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.280 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.281 278638 DEBUG oslo_service.service [None 
req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.281 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.281 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.281 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.281 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.282 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.282 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost 
nova_compute[278622]: 2026-02-23 09:38:49.282 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.282 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.283 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.283 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.283 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.283 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.283 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.284 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.284 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.284 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.284 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.284 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.285 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.285 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_notifications.retry = -1 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.285 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.285 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.286 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.286 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.286 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.286 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.286 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] 
oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.287 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.287 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.287 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.287 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.287 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.288 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.288 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.domain_name = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.288 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.288 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.288 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.289 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.289 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.289 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.289 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.password = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.290 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.290 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.290 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.290 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.291 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.291 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.291 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.service_type = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.291 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.291 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.292 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.292 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.292 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.292 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.292 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.293 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.293 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.293 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.293 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.293 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.294 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.294 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.294 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.294 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.295 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.295 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.295 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.295 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.295 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.296 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.296 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.296 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.296 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.296 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.297 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.297 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.297 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.297 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.298 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.298 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.298 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.298 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.298 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.299 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.299 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.299 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.299 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.299 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.300 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.300 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.300 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.300 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.300 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.301 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.301 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.301 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.301 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.302 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.302 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.302 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.302 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.303 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.303 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.303 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.304 278638 DEBUG oslo_service.service [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.305 278638 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260220085704.5cfeecb.el9)
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.332 278638 INFO nova.virt.node [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Determined node identity 9df77b74-d7d6-46a8-93cb-cadec85557a4 from /var/lib/nova/compute_id
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.332 278638 DEBUG nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.333 278638 DEBUG nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.333 278638 DEBUG nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.333 278638 DEBUG nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Connecting to libvirt:
qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.343 278638 DEBUG nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.346 278638 DEBUG nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.347 278638 INFO nova.virt.libvirt.driver [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Connection event '1' reason 'None'
Feb 23 04:38:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30783 DF PROTO=TCP SPT=38026 DPT=9102 SEQ=290784186 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012CC9030000000001030307)
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.352 278638 INFO nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Libvirt host capabilities
[capabilities XML elided: tag markup was lost in capture; the surviving element text gives host UUID 8bb105a9-4892-4676-ace9-e931084902e3, arch x86_64, CPU model EPYC-Rome-v4 (vendor AMD), migration transports tcp and rdma, memory 16116612 KiB (4029153 pages), security models selinux (doi 0, baselabels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (doi 0, +107:+107), and hvm guest support for both wordsize-32 (i686) and wordsize-64 (x86_64) via /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (canonical pc) and pc-q35-rhel7.6.0 through pc-q35-rhel9.8.0 (canonical q35)]
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.359 278638 DEBUG nova.virt.libvirt.volume.mount [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.364 278638 DEBUG nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.368 278638 DEBUG nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
[domain capabilities XML follows with tag markup lost in capture; the surviving element text gives emulator /usr/libexec/qemu-kvm, domain type kvm, machine pc-i440fx-rhel7.6.0, arch i686]
[domain capabilities XML continues with tag markup lost in capture; the surviving element text gives OS loader /usr/share/OVMF/OVMF_CODE.secboot.fd (types rom and pflash, readonly yes/no, secure no, on/off toggles), host-model CPU EPYC-Rome (vendor AMD), and a custom CPU model list including 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1 through Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, and Cascadelake-Server-v1 through Cascadelake-Server-v4; the model list continues]
23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Cascadelake-Server-v5 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: ClearwaterForest Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: ClearwaterForest-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Conroe Feb 23 04:38:49 localhost nova_compute[278622]: Conroe-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Cooperlake Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Cooperlake-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Cooperlake-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Denverton Feb 23 04:38:49 
localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Denverton-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Denverton-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Denverton-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Dhyana Feb 23 04:38:49 localhost nova_compute[278622]: Dhyana-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Dhyana-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Genoa Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 
04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Genoa-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 
localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Genoa-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-IBPB Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Milan Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 
04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Milan-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Milan-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Milan-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v4 Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v5 Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Turin Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 
localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Turin-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v1 Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v2 Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v4 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v5 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: GraniteRapids Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 
23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: GraniteRapids-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: GraniteRapids-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
Feb 23 04:38:49 localhost nova_compute[278622]: GraniteRapids-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-noTSX
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-noTSX-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-v4
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-noTSX
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v4
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v5
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v6
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v7
Feb 23 04:38:49 localhost nova_compute[278622]: IvyBridge
Feb 23 04:38:49 localhost nova_compute[278622]: IvyBridge-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: IvyBridge-v1
Feb 23 04:38:49 localhost nova_compute[278622]: IvyBridge-v2
Feb 23 04:38:49 localhost nova_compute[278622]: KnightsMill
Feb 23 04:38:49 localhost nova_compute[278622]: KnightsMill-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Nehalem
Feb 23 04:38:49 localhost nova_compute[278622]: Nehalem-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: Nehalem-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Nehalem-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G1
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G1-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G2
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G2-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G3
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G3-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G4
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G4-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G5
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G5-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Penryn
Feb 23 04:38:49 localhost nova_compute[278622]: Penryn-v1
Feb 23 04:38:49 localhost nova_compute[278622]: SandyBridge
Feb 23 04:38:49 localhost nova_compute[278622]: SandyBridge-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: SandyBridge-v1
Feb 23 04:38:49 localhost nova_compute[278622]: SandyBridge-v2
Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids
Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids-v1
Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids-v2
Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids-v3
Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids-v4
Feb 23 04:38:49 localhost nova_compute[278622]: SierraForest
Feb 23 04:38:49 localhost nova_compute[278622]: SierraForest-v1
Feb 23 04:38:49 localhost nova_compute[278622]: SierraForest-v2
Feb 23 04:38:49 localhost nova_compute[278622]: SierraForest-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client-noTSX-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client-v4
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Server
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Server-IBRS
localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Server-noTSX-IBRS Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Server-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Server-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Server-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Server-v4 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Server-v5 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: 
Snowridge Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Snowridge-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Snowridge-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Snowridge-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Snowridge-v4 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Westmere Feb 23 04:38:49 localhost nova_compute[278622]: Westmere-IBRS Feb 23 04:38:49 localhost nova_compute[278622]: Westmere-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Westmere-v2 Feb 23 04:38:49 localhost nova_compute[278622]: athlon Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: athlon-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: core2duo Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: core2duo-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: coreduo Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: coreduo-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: kvm32 Feb 23 04:38:49 localhost nova_compute[278622]: kvm32-v1 Feb 23 04:38:49 localhost nova_compute[278622]: kvm64 Feb 23 04:38:49 localhost nova_compute[278622]: kvm64-v1 Feb 23 04:38:49 localhost nova_compute[278622]: n270 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: n270-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: pentium Feb 23 04:38:49 localhost nova_compute[278622]: pentium-v1 Feb 23 04:38:49 localhost nova_compute[278622]: pentium2 Feb 23 04:38:49 localhost nova_compute[278622]: pentium2-v1 Feb 23 04:38:49 localhost nova_compute[278622]: pentium3 Feb 23 04:38:49 localhost nova_compute[278622]: pentium3-v1 Feb 23 04:38:49 localhost nova_compute[278622]: phenom Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: phenom-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: qemu32 Feb 23 04:38:49 localhost nova_compute[278622]: qemu32-v1 Feb 23 04:38:49 localhost nova_compute[278622]: qemu64 Feb 23 04:38:49 localhost nova_compute[278622]: qemu64-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: file Feb 23 04:38:49 localhost nova_compute[278622]: anonymous Feb 23 04:38:49 localhost nova_compute[278622]: memfd Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: disk Feb 23 04:38:49 localhost nova_compute[278622]: cdrom Feb 23 04:38:49 localhost nova_compute[278622]: floppy Feb 23 04:38:49 localhost nova_compute[278622]: lun Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: ide Feb 23 04:38:49 localhost nova_compute[278622]: fdc Feb 23 04:38:49 localhost nova_compute[278622]: scsi Feb 23 04:38:49 localhost nova_compute[278622]: virtio Feb 23 04:38:49 localhost nova_compute[278622]: usb Feb 23 04:38:49 localhost nova_compute[278622]: sata Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: virtio Feb 23 04:38:49 localhost nova_compute[278622]: virtio-transitional Feb 23 04:38:49 localhost nova_compute[278622]: virtio-non-transitional Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: vnc Feb 23 04:38:49 localhost nova_compute[278622]: egl-headless Feb 23 04:38:49 localhost nova_compute[278622]: dbus Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: subsystem Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: default Feb 23 04:38:49 localhost nova_compute[278622]: mandatory Feb 23 04:38:49 localhost nova_compute[278622]: requisite Feb 23 04:38:49 localhost nova_compute[278622]: optional Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: usb Feb 23 04:38:49 localhost nova_compute[278622]: pci Feb 23 04:38:49 localhost nova_compute[278622]: scsi Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: virtio Feb 23 04:38:49 localhost nova_compute[278622]: virtio-transitional Feb 23 04:38:49 localhost nova_compute[278622]: virtio-non-transitional Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: random Feb 23 04:38:49 localhost nova_compute[278622]: egd Feb 23 04:38:49 localhost nova_compute[278622]: builtin Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: path Feb 23 04:38:49 localhost nova_compute[278622]: handle Feb 23 04:38:49 localhost nova_compute[278622]: virtiofs Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: tpm-tis Feb 23 04:38:49 localhost nova_compute[278622]: tpm-crb Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: emulator Feb 23 04:38:49 localhost nova_compute[278622]: external Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: 2.0 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: usb Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: pty Feb 23 04:38:49 localhost nova_compute[278622]: unix Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: qemu Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: builtin Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: default Feb 23 04:38:49 localhost nova_compute[278622]: passt Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: isa Feb 23 04:38:49 localhost nova_compute[278622]: hyperv Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: null Feb 23 04:38:49 localhost nova_compute[278622]: vc Feb 23 04:38:49 localhost nova_compute[278622]: pty Feb 23 04:38:49 localhost nova_compute[278622]: dev Feb 23 04:38:49 localhost nova_compute[278622]: file Feb 23 04:38:49 localhost nova_compute[278622]: pipe Feb 23 04:38:49 localhost nova_compute[278622]: stdio Feb 23 04:38:49 localhost nova_compute[278622]: udp Feb 23 04:38:49 localhost nova_compute[278622]: tcp Feb 23 04:38:49 localhost nova_compute[278622]: unix Feb 23 04:38:49 localhost nova_compute[278622]: qemu-vdagent Feb 23 04:38:49 localhost nova_compute[278622]: dbus Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: relaxed Feb 23 04:38:49 localhost nova_compute[278622]: vapic Feb 23 04:38:49 localhost nova_compute[278622]: spinlocks Feb 23 04:38:49 localhost nova_compute[278622]: vpindex Feb 23 04:38:49 localhost nova_compute[278622]: runtime Feb 23 04:38:49 localhost nova_compute[278622]: synic Feb 23 04:38:49 localhost nova_compute[278622]: stimer Feb 23 04:38:49 localhost 
nova_compute[278622]: reset Feb 23 04:38:49 localhost nova_compute[278622]: vendor_id Feb 23 04:38:49 localhost nova_compute[278622]: frequencies Feb 23 04:38:49 localhost nova_compute[278622]: reenlightenment Feb 23 04:38:49 localhost nova_compute[278622]: tlbflush Feb 23 04:38:49 localhost nova_compute[278622]: ipi Feb 23 04:38:49 localhost nova_compute[278622]: avic Feb 23 04:38:49 localhost nova_compute[278622]: emsr_bitmap Feb 23 04:38:49 localhost nova_compute[278622]: xmm_input Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: 4095 Feb 23 04:38:49 localhost nova_compute[278622]: on Feb 23 04:38:49 localhost nova_compute[278622]: off Feb 23 04:38:49 localhost nova_compute[278622]: off Feb 23 04:38:49 localhost nova_compute[278622]: Linux KVM Hv Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.384 278638 DEBUG nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: /usr/libexec/qemu-kvm Feb 23 04:38:49 localhost nova_compute[278622]: kvm Feb 23 04:38:49 localhost nova_compute[278622]: pc-q35-rhel9.8.0 Feb 23 04:38:49 localhost nova_compute[278622]: i686 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: /usr/share/OVMF/OVMF_CODE.secboot.fd Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: rom Feb 23 04:38:49 localhost nova_compute[278622]: pflash Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: yes Feb 23 04:38:49 localhost nova_compute[278622]: no Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: no Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: on Feb 23 04:38:49 localhost nova_compute[278622]: off Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: on Feb 23 04:38:49 localhost nova_compute[278622]: off Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome Feb 23 04:38:49 localhost nova_compute[278622]: AMD Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 
localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: 486 Feb 23 04:38:49 localhost nova_compute[278622]: 486-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Broadwell Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Broadwell-IBRS Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Broadwell-noTSX Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Broadwell-noTSX-IBRS Feb 23 04:38:49 
localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Broadwell-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Broadwell-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Broadwell-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Broadwell-v4 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Cascadelake-Server Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 
Feb 23 04:38:49 localhost nova_compute[278622]: Cascadelake-Server-noTSX
Feb 23 04:38:49 localhost nova_compute[278622]: Cascadelake-Server-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Cascadelake-Server-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Cascadelake-Server-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Cascadelake-Server-v4
Feb 23 04:38:49 localhost nova_compute[278622]: Cascadelake-Server-v5
Feb 23 04:38:49 localhost nova_compute[278622]: ClearwaterForest
Feb 23 04:38:49 localhost nova_compute[278622]: ClearwaterForest-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Conroe
Feb 23 04:38:49 localhost nova_compute[278622]: Conroe-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Cooperlake
Feb 23 04:38:49 localhost nova_compute[278622]: Cooperlake-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Cooperlake-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Denverton
Feb 23 04:38:49 localhost nova_compute[278622]: Denverton-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Denverton-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Denverton-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Dhyana
Feb 23 04:38:49 localhost nova_compute[278622]: Dhyana-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Dhyana-v2
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Genoa
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Genoa-v1
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Genoa-v2
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-IBPB
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Milan
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Milan-v1
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Milan-v2
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Milan-v3
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v1
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v2
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v3
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v4
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v5
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Turin
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Turin-v1
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v1
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v2
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v3
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v4
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v5
Feb 23 04:38:49 localhost nova_compute[278622]: GraniteRapids
Feb 23 04:38:49 localhost nova_compute[278622]: GraniteRapids-v1
Feb 23 04:38:49 localhost nova_compute[278622]: GraniteRapids-v2
Feb 23 04:38:49 localhost nova_compute[278622]: GraniteRapids-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-noTSX
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-noTSX-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-v4
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-noTSX
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v4 Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v5 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v6 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v7 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 
localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: IvyBridge Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: IvyBridge-IBRS Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: IvyBridge-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: IvyBridge-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: KnightsMill Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: KnightsMill-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Nehalem Feb 23 04:38:49 localhost nova_compute[278622]: Nehalem-IBRS Feb 23 04:38:49 localhost nova_compute[278622]: Nehalem-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Nehalem-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G1 Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G1-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G2 Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G2-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G3 Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G3-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G4 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G4-v1 Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G5 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G5-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Penryn Feb 23 04:38:49 localhost nova_compute[278622]: Penryn-v1 Feb 23 04:38:49 localhost nova_compute[278622]: SandyBridge Feb 23 04:38:49 localhost nova_compute[278622]: SandyBridge-IBRS Feb 23 04:38:49 localhost nova_compute[278622]: SandyBridge-v1 Feb 23 04:38:49 localhost nova_compute[278622]: SandyBridge-v2 Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids-v4 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: 
SierraForest Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: SierraForest-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: 
Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: SierraForest-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 
Feb 23 04:38:49 localhost nova_compute[278622]: [libvirt domain capabilities XML dump continues; element markup was lost in log capture, so only the text values survive. Recoverable values, in order:]
Feb 23 04:38:49 localhost nova_compute[278622]:   CPU models (cont.): SierraForest-v3, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Feb 23 04:38:49 localhost nova_compute[278622]:   memory backing source types: file, anonymous, memfd
Feb 23 04:38:49 localhost nova_compute[278622]:   disk devices: disk, cdrom, floppy, lun; buses: fdc, scsi, virtio, usb, sata; models: virtio, virtio-transitional, virtio-non-transitional
Feb 23 04:38:49 localhost nova_compute[278622]:   graphics types: vnc, egl-headless, dbus
Feb 23 04:38:49 localhost nova_compute[278622]:   hostdev mode: subsystem; startup policies: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi; models: virtio, virtio-transitional, virtio-non-transitional
Feb 23 04:38:49 localhost nova_compute[278622]:   rng backends: random, egd, builtin
Feb 23 04:38:49 localhost nova_compute[278622]:   filesystem driver types: path, handle, virtiofs
Feb 23 04:38:49 localhost nova_compute[278622]:   tpm models: tpm-tis, tpm-crb; backends: emulator, external; backend version: 2.0
Feb 23 04:38:49 localhost nova_compute[278622]:   redirdev bus: usb; channel types: pty, unix; crypto values: qemu, builtin; interface backends: default, passt; panic models: isa, hyperv
Feb 23 04:38:49 localhost nova_compute[278622]:   character device types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Feb 23 04:38:49 localhost nova_compute[278622]:   hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; further values: 4095, on, off, off, Linux KVM Hv
Feb 23 04:38:49 localhost nova_compute[278622]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.419 278638 DEBUG nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.423 278638 DEBUG nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 23 04:38:49 localhost nova_compute[278622]: [domain capabilities XML dump for machine_type=pc follows; markup again lost in capture. Recoverable values:]
Feb 23 04:38:49 localhost nova_compute[278622]:   emulator path: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-i440fx-rhel7.6.0; arch: x86_64
Feb 23 04:38:49 localhost nova_compute[278622]:   os loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: no
Feb 23 04:38:49 localhost nova_compute[278622]:   additional enum values: on, off; on, off
Feb 23 04:38:49 localhost nova_compute[278622]:   host CPU model: EPYC-Rome; vendor: AMD
Feb 23 04:38:49 localhost nova_compute[278622]:   CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake (list continues)
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Cooperlake-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Cooperlake-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Denverton Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Denverton-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Denverton-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Denverton-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Dhyana Feb 23 04:38:49 localhost nova_compute[278622]: Dhyana-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Dhyana-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Genoa Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 
04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Genoa-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 
localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Genoa-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-IBPB Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Milan Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 
04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Milan-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Milan-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Milan-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v4 Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v5 Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Turin Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 
localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Turin-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v1 Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v2 Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v4 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v5 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: GraniteRapids Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 
23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: GraniteRapids-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: GraniteRapids-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: GraniteRapids-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Haswell Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: 
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-noTSX
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-noTSX-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-v4
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-noTSX
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v4
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v5
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v6
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v7
Feb 23 04:38:49 localhost nova_compute[278622]: IvyBridge
Feb 23 04:38:49 localhost nova_compute[278622]: IvyBridge-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: IvyBridge-v1
Feb 23 04:38:49 localhost nova_compute[278622]: IvyBridge-v2
Feb 23 04:38:49 localhost nova_compute[278622]: KnightsMill
Feb 23 04:38:49 localhost nova_compute[278622]: KnightsMill-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Nehalem
Feb 23 04:38:49 localhost nova_compute[278622]: Nehalem-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: Nehalem-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Nehalem-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G1
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G1-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G2
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G2-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G3
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G3-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G4
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G4-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G5
Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G5-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Penryn
Feb 23 04:38:49 localhost nova_compute[278622]: Penryn-v1
Feb 23 04:38:49 localhost nova_compute[278622]: SandyBridge
Feb 23 04:38:49 localhost nova_compute[278622]: SandyBridge-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: SandyBridge-v1
Feb 23 04:38:49 localhost nova_compute[278622]: SandyBridge-v2
Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids
Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids-v1
Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids-v2
Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids-v3
Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids-v4
Feb 23 04:38:49 localhost nova_compute[278622]: SierraForest
Feb 23 04:38:49 localhost nova_compute[278622]: SierraForest-v1
Feb 23 04:38:49 localhost nova_compute[278622]: SierraForest-v2
Feb 23 04:38:49 localhost nova_compute[278622]: SierraForest-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client-noTSX-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client-v4
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Server
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Server-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Server-noTSX-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Server-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Server-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Server-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Server-v4
Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Server-v5
23 04:38:49 localhost nova_compute[278622]: Snowridge Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Snowridge-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Snowridge-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Snowridge-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Snowridge-v4 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Westmere Feb 23 04:38:49 localhost nova_compute[278622]: Westmere-IBRS Feb 23 04:38:49 localhost nova_compute[278622]: Westmere-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Westmere-v2 Feb 23 04:38:49 localhost nova_compute[278622]: athlon Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: athlon-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: core2duo Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: core2duo-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: coreduo Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: coreduo-v1 Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: kvm32 Feb 23 04:38:49 localhost nova_compute[278622]: kvm32-v1 Feb 23 04:38:49 localhost nova_compute[278622]: kvm64 Feb 23 04:38:49 localhost nova_compute[278622]: kvm64-v1 Feb 23 04:38:49 localhost nova_compute[278622]: n270 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: n270-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: pentium Feb 23 04:38:49 localhost nova_compute[278622]: pentium-v1 Feb 23 04:38:49 localhost nova_compute[278622]: pentium2 Feb 23 04:38:49 localhost nova_compute[278622]: pentium2-v1 Feb 23 04:38:49 localhost nova_compute[278622]: pentium3 Feb 23 04:38:49 localhost nova_compute[278622]: pentium3-v1 Feb 23 04:38:49 localhost nova_compute[278622]: phenom Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: phenom-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: qemu32 Feb 23 04:38:49 localhost nova_compute[278622]: qemu32-v1 Feb 23 04:38:49 localhost nova_compute[278622]: qemu64 Feb 23 04:38:49 localhost nova_compute[278622]: qemu64-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: file Feb 23 04:38:49 localhost nova_compute[278622]: anonymous Feb 23 04:38:49 localhost nova_compute[278622]: memfd Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: disk Feb 23 04:38:49 localhost nova_compute[278622]: cdrom Feb 23 04:38:49 localhost nova_compute[278622]: floppy Feb 23 04:38:49 localhost nova_compute[278622]: lun Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: ide Feb 23 04:38:49 localhost nova_compute[278622]: fdc Feb 23 04:38:49 localhost nova_compute[278622]: scsi Feb 23 04:38:49 localhost nova_compute[278622]: virtio Feb 23 04:38:49 localhost nova_compute[278622]: usb Feb 23 04:38:49 localhost nova_compute[278622]: sata Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: virtio Feb 23 04:38:49 localhost nova_compute[278622]: virtio-transitional Feb 23 04:38:49 localhost nova_compute[278622]: virtio-non-transitional Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: vnc Feb 23 04:38:49 localhost nova_compute[278622]: egl-headless Feb 23 04:38:49 localhost nova_compute[278622]: dbus Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: 
subsystem Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: default Feb 23 04:38:49 localhost nova_compute[278622]: mandatory Feb 23 04:38:49 localhost nova_compute[278622]: requisite Feb 23 04:38:49 localhost nova_compute[278622]: optional Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: usb Feb 23 04:38:49 localhost nova_compute[278622]: pci Feb 23 04:38:49 localhost nova_compute[278622]: scsi Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: virtio Feb 23 04:38:49 localhost nova_compute[278622]: virtio-transitional Feb 23 04:38:49 localhost nova_compute[278622]: virtio-non-transitional Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: random Feb 23 04:38:49 localhost nova_compute[278622]: egd Feb 23 04:38:49 localhost nova_compute[278622]: builtin Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: path Feb 23 04:38:49 localhost nova_compute[278622]: handle Feb 23 04:38:49 localhost nova_compute[278622]: virtiofs Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: tpm-tis Feb 23 04:38:49 localhost nova_compute[278622]: tpm-crb Feb 23 
04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: emulator Feb 23 04:38:49 localhost nova_compute[278622]: external Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: 2.0 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: usb Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: pty Feb 23 04:38:49 localhost nova_compute[278622]: unix Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: qemu Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: builtin Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: default Feb 23 04:38:49 localhost nova_compute[278622]: passt Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: isa Feb 23 04:38:49 localhost nova_compute[278622]: hyperv Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 
04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: null Feb 23 04:38:49 localhost nova_compute[278622]: vc Feb 23 04:38:49 localhost nova_compute[278622]: pty Feb 23 04:38:49 localhost nova_compute[278622]: dev Feb 23 04:38:49 localhost nova_compute[278622]: file Feb 23 04:38:49 localhost nova_compute[278622]: pipe Feb 23 04:38:49 localhost nova_compute[278622]: stdio Feb 23 04:38:49 localhost nova_compute[278622]: udp Feb 23 04:38:49 localhost nova_compute[278622]: tcp Feb 23 04:38:49 localhost nova_compute[278622]: unix Feb 23 04:38:49 localhost nova_compute[278622]: qemu-vdagent Feb 23 04:38:49 localhost nova_compute[278622]: dbus Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: relaxed Feb 23 04:38:49 localhost nova_compute[278622]: vapic Feb 23 04:38:49 localhost nova_compute[278622]: spinlocks Feb 23 04:38:49 localhost nova_compute[278622]: vpindex Feb 23 04:38:49 localhost nova_compute[278622]: runtime Feb 23 04:38:49 localhost nova_compute[278622]: synic Feb 23 04:38:49 localhost nova_compute[278622]: stimer Feb 23 04:38:49 
localhost nova_compute[278622]: reset Feb 23 04:38:49 localhost nova_compute[278622]: vendor_id Feb 23 04:38:49 localhost nova_compute[278622]: frequencies Feb 23 04:38:49 localhost nova_compute[278622]: reenlightenment Feb 23 04:38:49 localhost nova_compute[278622]: tlbflush Feb 23 04:38:49 localhost nova_compute[278622]: ipi Feb 23 04:38:49 localhost nova_compute[278622]: avic Feb 23 04:38:49 localhost nova_compute[278622]: emsr_bitmap Feb 23 04:38:49 localhost nova_compute[278622]: xmm_input Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: 4095 Feb 23 04:38:49 localhost nova_compute[278622]: on Feb 23 04:38:49 localhost nova_compute[278622]: off Feb 23 04:38:49 localhost nova_compute[278622]: off Feb 23 04:38:49 localhost nova_compute[278622]: Linux KVM Hv Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.488 278638 DEBUG nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: /usr/libexec/qemu-kvm Feb 23 04:38:49 localhost nova_compute[278622]: kvm Feb 23 04:38:49 localhost nova_compute[278622]: pc-q35-rhel9.8.0 Feb 23 04:38:49 localhost nova_compute[278622]: x86_64 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 
localhost nova_compute[278622]: efi Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Feb 23 04:38:49 localhost nova_compute[278622]: /usr/share/edk2/ovmf/OVMF_CODE.fd Feb 23 04:38:49 localhost nova_compute[278622]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Feb 23 04:38:49 localhost nova_compute[278622]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: rom Feb 23 04:38:49 localhost nova_compute[278622]: pflash Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: yes Feb 23 04:38:49 localhost nova_compute[278622]: no Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: yes Feb 23 04:38:49 localhost nova_compute[278622]: no Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: on Feb 23 04:38:49 localhost nova_compute[278622]: off Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: on Feb 23 04:38:49 localhost nova_compute[278622]: off Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome Feb 23 04:38:49 localhost nova_compute[278622]: AMD Feb 23 04:38:49 
localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: 486 Feb 23 04:38:49 localhost nova_compute[278622]: 486-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Broadwell Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Broadwell-IBRS Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 
04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Broadwell-noTSX Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Broadwell-noTSX-IBRS Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Broadwell-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Broadwell-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Broadwell-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Broadwell-v4 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Cascadelake-Server Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Cascadelake-Server-noTSX Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Cascadelake-Server-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Cascadelake-Server-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Cascadelake-Server-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Cascadelake-Server-v4 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
Feb 23 04:38:49 localhost nova_compute[278622]: Cascadelake-Server-v5
Feb 23 04:38:49 localhost nova_compute[278622]: ClearwaterForest
Feb 23 04:38:49 localhost nova_compute[278622]: ClearwaterForest-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Conroe
Feb 23 04:38:49 localhost nova_compute[278622]: Conroe-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Cooperlake
Feb 23 04:38:49 localhost nova_compute[278622]: Cooperlake-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Cooperlake-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Denverton
Feb 23 04:38:49 localhost nova_compute[278622]: Denverton-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Denverton-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Denverton-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Dhyana
Feb 23 04:38:49 localhost nova_compute[278622]: Dhyana-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Dhyana-v2
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Genoa
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Genoa-v1
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Genoa-v2
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-IBPB
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Milan
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Milan-v1
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Milan-v2
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Milan-v3
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v1
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v2
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v3
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v4
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Rome-v5
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Turin
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-Turin-v1
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v1
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v2
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v3
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v4
Feb 23 04:38:49 localhost nova_compute[278622]: EPYC-v5
Feb 23 04:38:49 localhost nova_compute[278622]: GraniteRapids
Feb 23 04:38:49 localhost nova_compute[278622]: GraniteRapids-v1
Feb 23 04:38:49 localhost nova_compute[278622]: GraniteRapids-v2
Feb 23 04:38:49 localhost nova_compute[278622]: GraniteRapids-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-noTSX
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-noTSX-IBRS
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Haswell-v4
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-noTSX
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v1
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v2
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v3
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v4
Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v5
04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v6 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 
localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Icelake-Server-v7 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: IvyBridge Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: IvyBridge-IBRS Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: IvyBridge-v1 Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: IvyBridge-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: KnightsMill Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: KnightsMill-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Nehalem Feb 23 04:38:49 localhost nova_compute[278622]: Nehalem-IBRS Feb 23 04:38:49 localhost nova_compute[278622]: Nehalem-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Nehalem-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G1 Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G1-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G2 Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G2-v1 Feb 23 04:38:49 
localhost nova_compute[278622]: Opteron_G3 Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G3-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G4 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G4-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G5 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Opteron_G5-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Penryn Feb 23 04:38:49 localhost nova_compute[278622]: Penryn-v1 Feb 23 04:38:49 localhost nova_compute[278622]: SandyBridge Feb 23 04:38:49 localhost nova_compute[278622]: SandyBridge-IBRS Feb 23 04:38:49 localhost nova_compute[278622]: SandyBridge-v1 Feb 23 04:38:49 localhost nova_compute[278622]: SandyBridge-v2 Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 
04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: SapphireRapids-v4 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 
localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: SierraForest Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: SierraForest-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 
localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: SierraForest-v2 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: SierraForest-v3 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client-IBRS Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client-noTSX-IBRS Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Skylake-Client-v1 Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost nova_compute[278622]: Feb 23 04:38:49 localhost 
Feb 23 04:38:49 localhost nova_compute[278622]: [libvirt <domainCapabilities> XML dump; markup lost in extraction. Recoverable content, grouped by the apparent capability sections:
    CPU models: Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
    memory backing source types: file, anonymous, memfd
    disk devices: disk, cdrom, floppy, lun; disk buses: fdc, scsi, virtio, usb, sata; disk models: virtio, virtio-transitional, virtio-non-transitional
    graphics types: vnc, egl-headless, dbus
    hostdev mode: subsystem; startupPolicy: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi
    rng models: virtio, virtio-transitional, virtio-non-transitional; rng backends: random, egd, builtin
    filesystem driver types: path, handle, virtiofs
    TPM models: tpm-tis, tpm-crb; TPM backends: emulator, external; backend version: 2.0
    redirdev bus: usb; channel types: pty, unix
    crypto backends: qemu, builtin; interface backends: default, passt
    panic models: isa, hyperv
    console/serial types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
    Hyper-V enlightenments: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; other recovered values: 4095, on, off, off, vendor_id string "Linux KVM Hv"]
Feb 23 04:38:49 localhost nova_compute[278622]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.548 278638 DEBUG nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.548 278638 DEBUG nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Checking secure boot support for host arch (x86_64)
supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.552 278638 DEBUG nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.552 278638 INFO nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Secure Boot support detected#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.554 278638 INFO nova.virt.libvirt.driver [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.555 278638 INFO nova.virt.libvirt.driver [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.564 278638 DEBUG nova.virt.libvirt.driver [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.584 278638 INFO nova.virt.node [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Determined node identity 9df77b74-d7d6-46a8-93cb-cadec85557a4 from /var/lib/nova/compute_id#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.604 278638 DEBUG nova.compute.manager [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Verified node 9df77b74-d7d6-46a8-93cb-cadec85557a4 
matches my host np0005626465.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Feb 23 04:38:49 localhost python3.9[278972]: ansible-stat Invoked with path=/etc/systemd/system/edpm_nova_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.639 278638 INFO nova.compute.manager [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.703 278638 DEBUG oslo_concurrency.lockutils [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.703 278638 DEBUG oslo_concurrency.lockutils [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.704 278638 DEBUG oslo_concurrency.lockutils [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.704 278638 DEBUG nova.compute.resource_tracker [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Auditing locally available compute resources for 
np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:38:49 localhost nova_compute[278622]: 2026-02-23 09:38:49.704 278638 DEBUG oslo_concurrency.processutils [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:38:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:38:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:38:50 localhost podman[278993]: 2026-02-23 09:38:50.018058948 +0000 UTC m=+0.090711622 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:38:50 localhost podman[278993]: 2026-02-23 09:38:50.026748234 +0000 UTC m=+0.099400898 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS) Feb 23 04:38:50 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:38:50 localhost podman[278994]: 2026-02-23 09:38:50.114225687 +0000 UTC m=+0.187342456 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute) Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.133 278638 DEBUG oslo_concurrency.processutils [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:38:50 localhost podman[278994]: 2026-02-23 09:38:50.151948571 +0000 UTC m=+0.225065320 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:38:50 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.280 278638 WARNING nova.virt.libvirt.driver [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.281 278638 DEBUG nova.compute.resource_tracker [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=12891MB free_disk=41.83688735961914GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.281 278638 DEBUG oslo_concurrency.lockutils [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.281 278638 DEBUG oslo_concurrency.lockutils [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.361 278638 DEBUG nova.compute.resource_tracker [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.361 278638 DEBUG nova.compute.resource_tracker [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.402 278638 DEBUG 
nova.scheduler.client.report [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Refreshing inventories for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.460 278638 DEBUG nova.scheduler.client.report [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Updating ProviderTree inventory for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.460 278638 DEBUG nova.compute.provider_tree [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Updating inventory in ProviderTree for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.477 278638 DEBUG nova.scheduler.client.report [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Refreshing aggregate associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, 
aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.507 278638 DEBUG nova.scheduler.client.report [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Refreshing trait associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,COMPUTE_NODE,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AVX,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.525 278638 DEBUG 
oslo_concurrency.processutils [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:38:50 localhost python3.9[279138]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771839530.1588347-3313-69904604897290/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.971 278638 DEBUG oslo_concurrency.processutils [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.977 278638 DEBUG nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Feb 23 04:38:50 localhost nova_compute[278622]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.977 278638 INFO nova.virt.libvirt.host [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] kernel doesn't support AMD SEV#033[00m Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.978 278638 DEBUG nova.compute.provider_tree [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:38:50 localhost nova_compute[278622]: 2026-02-23 09:38:50.979 278638 DEBUG nova.virt.libvirt.driver [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Feb 23 04:38:51 localhost nova_compute[278622]: 2026-02-23 09:38:51.002 278638 DEBUG nova.scheduler.client.report [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:38:51 localhost nova_compute[278622]: 2026-02-23 09:38:51.036 278638 DEBUG nova.compute.resource_tracker [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:38:51 localhost nova_compute[278622]: 2026-02-23 09:38:51.037 278638 DEBUG oslo_concurrency.lockutils [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:38:51 localhost nova_compute[278622]: 2026-02-23 09:38:51.037 278638 DEBUG nova.service [None 
req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Feb 23 04:38:51 localhost nova_compute[278622]: 2026-02-23 09:38:51.065 278638 DEBUG nova.service [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Feb 23 04:38:51 localhost nova_compute[278622]: 2026-02-23 09:38:51.066 278638 DEBUG nova.servicegroup.drivers.db [None req-c205be3f-3124-4e9f-b2fe-289fafbbd759 - - - - - -] DB_Driver: join new ServiceGroup member np0005626465.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Feb 23 04:38:51 localhost python3.9[279215]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:38:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 04:38:53 localhost podman[279235]: 2026-02-23 09:38:53.014484553 +0000 UTC m=+0.089270492 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:38:53 localhost podman[279235]: 2026-02-23 09:38:53.026725267 +0000 UTC m=+0.101511236 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:38:53 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:38:53 localhost python3.9[279348]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 23 04:38:55 localhost python3.9[279458]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 23 04:38:55 localhost python3.9[279548]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771839534.3755677-3436-214224264867280/.source.yaml _original_basename=.t9m3e4t6 follow=False checksum=4185f12b535f7417c8eab31aeeb8094a78600762 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.087 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.087 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.087 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found 
this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.088 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost 
ceilometer_agent_compute[236424]: 2026-02-23 09:38:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:38:56 localhost python3.9[279656]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:38:57 localhost python3.9[279764]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:38:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30784 DF PROTO=TCP SPT=38026 DPT=9102 SEQ=290784186 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012CE9830000000001030307) Feb 23 04:38:58 localhost python3.9[279872]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 23 04:38:59 localhost python3.9[279982]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None 
cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None 
uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Feb 23 04:38:59 localhost systemd-journald[48305]: Field hash table of /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal has a fill level at 120.4 (401 of 333 items), suggesting rotation. Feb 23 04:38:59 localhost systemd-journald[48305]: /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 23 04:38:59 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 23 04:38:59 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 23 04:39:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 04:39:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7200.1 total, 600.0 interval
Cumulative writes: 5036 writes, 22K keys, 5036 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
Cumulative WAL: 5036 writes, 634 syncs, 7.94 writes per sync, written: 0.02 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 23 04:39:01 localhost python3.9[280115]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 23 04:39:01 localhost systemd[1]: Stopping nova_compute container... 
Feb 23 04:39:01 localhost openstack_network_exporter[243519]: ERROR 09:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:39:01 localhost openstack_network_exporter[243519]: Feb 23 04:39:01 localhost openstack_network_exporter[243519]: ERROR 09:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:39:01 localhost openstack_network_exporter[243519]: Feb 23 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:39:03 localhost podman[280132]: 2026-02-23 09:39:03.003616378 +0000 UTC m=+0.081850093 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:39:03 localhost podman[280132]: 2026-02-23 09:39:03.009908672 +0000 UTC m=+0.088142327 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:39:03 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:39:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:39:03 localhost systemd[1]: tmp-crun.C0bcTw.mount: Deactivated successfully. Feb 23 04:39:03 localhost podman[280173]: 2026-02-23 09:39:03.6905798 +0000 UTC m=+0.086688803 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vcs-type=git, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 23 04:39:03 localhost podman[280173]: 2026-02-23 09:39:03.703997963 +0000 UTC m=+0.100106956 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, release=1770267347, com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.7, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, vcs-type=git, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 04:39:03 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:39:04 localhost nova_compute[278622]: 2026-02-23 09:39:04.281 278638 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored#033[00m Feb 23 04:39:04 localhost nova_compute[278622]: 2026-02-23 09:39:04.283 278638 DEBUG oslo_concurrency.lockutils [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:39:04 localhost nova_compute[278622]: 2026-02-23 09:39:04.283 278638 DEBUG oslo_concurrency.lockutils [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:39:04 localhost nova_compute[278622]: 2026-02-23 09:39:04.283 278638 DEBUG oslo_concurrency.lockutils [None req-58197c75-6ddd-4e1e-abc1-b87b05fa4c7a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:39:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 04:39:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 5650 writes, 24K keys, 5650 commit groups, 1.0 writes per commit group, 
ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5650 writes, 811 syncs, 6.97 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 04:39:04 localhost journal[228928]: End of file while reading data: Input/output error Feb 23 04:39:04 localhost systemd[1]: libpod-bb61efde9b16b4d4e86e848c48713785c212b4b27e9b2e74b52cab54af525d3d.scope: Deactivated successfully. Feb 23 04:39:04 localhost systemd[1]: libpod-bb61efde9b16b4d4e86e848c48713785c212b4b27e9b2e74b52cab54af525d3d.scope: Consumed 3.698s CPU time. Feb 23 04:39:04 localhost podman[280119]: 2026-02-23 09:39:04.687802534 +0000 UTC m=+3.292618778 container died bb61efde9b16b4d4e86e848c48713785c212b4b27e9b2e74b52cab54af525d3d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-39bdf96219731fcdb12ece2c9e8f4c9cac67a793472c97da56d1693a5ee4a93a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, container_name=nova_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:39:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb61efde9b16b4d4e86e848c48713785c212b4b27e9b2e74b52cab54af525d3d-userdata-shm.mount: Deactivated successfully. Feb 23 04:39:04 localhost systemd[1]: var-lib-containers-storage-overlay-ab5e6a64682fef67a5b95faa2c737e0d834749a84d92d4cd119cef0a77417e3c-merged.mount: Deactivated successfully. 
Feb 23 04:39:04 localhost podman[280119]: 2026-02-23 09:39:04.760711191 +0000 UTC m=+3.365527415 container cleanup bb61efde9b16b4d4e86e848c48713785c212b4b27e9b2e74b52cab54af525d3d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-39bdf96219731fcdb12ece2c9e8f4c9cac67a793472c97da56d1693a5ee4a93a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:39:04 localhost podman[280119]: nova_compute Feb 
23 04:39:04 localhost podman[280273]: 2026-02-23 09:39:04.789712055 +0000 UTC m=+0.085247108 container cleanup bb61efde9b16b4d4e86e848c48713785c212b4b27e9b2e74b52cab54af525d3d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-39bdf96219731fcdb12ece2c9e8f4c9cac67a793472c97da56d1693a5ee4a93a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute, container_name=nova_compute, managed_by=edpm_ansible) Feb 23 04:39:04 localhost systemd[1]: 
libpod-conmon-bb61efde9b16b4d4e86e848c48713785c212b4b27e9b2e74b52cab54af525d3d.scope: Deactivated successfully. Feb 23 04:39:04 localhost podman[280306]: error opening file `/run/crun/bb61efde9b16b4d4e86e848c48713785c212b4b27e9b2e74b52cab54af525d3d/status`: No such file or directory Feb 23 04:39:04 localhost podman[280294]: 2026-02-23 09:39:04.862006374 +0000 UTC m=+0.042693548 container cleanup bb61efde9b16b4d4e86e848c48713785c212b4b27e9b2e74b52cab54af525d3d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-39bdf96219731fcdb12ece2c9e8f4c9cac67a793472c97da56d1693a5ee4a93a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', 
'/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 04:39:04 localhost podman[280294]: nova_compute Feb 23 04:39:04 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Feb 23 04:39:04 localhost systemd[1]: Stopped nova_compute container. Feb 23 04:39:04 localhost systemd[1]: Starting nova_compute container... Feb 23 04:39:04 localhost systemd[1]: Started libcrun container. Feb 23 04:39:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5e6a64682fef67a5b95faa2c737e0d834749a84d92d4cd119cef0a77417e3c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Feb 23 04:39:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5e6a64682fef67a5b95faa2c737e0d834749a84d92d4cd119cef0a77417e3c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Feb 23 04:39:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5e6a64682fef67a5b95faa2c737e0d834749a84d92d4cd119cef0a77417e3c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 23 04:39:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5e6a64682fef67a5b95faa2c737e0d834749a84d92d4cd119cef0a77417e3c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Feb 23 04:39:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5e6a64682fef67a5b95faa2c737e0d834749a84d92d4cd119cef0a77417e3c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 23 04:39:04 localhost podman[280308]: 2026-02-23 09:39:04.984145907 +0000 UTC m=+0.093647146 container init bb61efde9b16b4d4e86e848c48713785c212b4b27e9b2e74b52cab54af525d3d 
(image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-39bdf96219731fcdb12ece2c9e8f4c9cac67a793472c97da56d1693a5ee4a93a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 23 04:39:04 localhost podman[280308]: 2026-02-23 09:39:04.989177413 +0000 UTC m=+0.098678652 container start bb61efde9b16b4d4e86e848c48713785c212b4b27e9b2e74b52cab54af525d3d 
(image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-9420ebc64adcbe31526e40e2b0b78b2e1b7d41ddf41dd4e3243f19db7c97ac09-39bdf96219731fcdb12ece2c9e8f4c9cac67a793472c97da56d1693a5ee4a93a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=nova_compute, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 23 04:39:04 localhost podman[280308]: nova_compute Feb 23 04:39:04 localhost nova_compute[280321]: + sudo -E kolla_set_configs Feb 23 04:39:04 localhost systemd[1]: Started nova_compute container. 
Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Validating config file Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Copying service configuration files Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Deleting /etc/nova/nova.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Setting permission for /etc/nova/nova.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Copying /var/lib/kolla/config_files/src/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Setting permission for 
/etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Deleting /etc/ceph Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Creating directory /etc/ceph Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Setting permission for /etc/ceph Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Feb 23 04:39:05 localhost 
nova_compute[280321]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Deleting /usr/sbin/iscsiadm Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Writing out command to execute Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 23 04:39:05 localhost nova_compute[280321]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 23 04:39:05 localhost nova_compute[280321]: ++ cat /run_command Feb 23 04:39:05 localhost nova_compute[280321]: + CMD=nova-compute Feb 23 04:39:05 localhost nova_compute[280321]: + ARGS= Feb 23 04:39:05 localhost nova_compute[280321]: + sudo kolla_copy_cacerts Feb 23 04:39:05 localhost nova_compute[280321]: + [[ ! -n '' ]] Feb 23 04:39:05 localhost nova_compute[280321]: + . 
kolla_extend_start Feb 23 04:39:05 localhost nova_compute[280321]: + echo 'Running command: '\''nova-compute'\''' Feb 23 04:39:05 localhost nova_compute[280321]: Running command: 'nova-compute' Feb 23 04:39:05 localhost nova_compute[280321]: + umask 0022 Feb 23 04:39:05 localhost nova_compute[280321]: + exec nova-compute Feb 23 04:39:06 localhost nova_compute[280321]: 2026-02-23 09:39:06.746 280325 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 23 04:39:06 localhost nova_compute[280321]: 2026-02-23 09:39:06.746 280325 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 23 04:39:06 localhost nova_compute[280321]: 2026-02-23 09:39:06.746 280325 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 23 04:39:06 localhost nova_compute[280321]: 2026-02-23 09:39:06.746 280325 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Feb 23 04:39:06 localhost nova_compute[280321]: 2026-02-23 09:39:06.862 280325 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:39:06 localhost nova_compute[280321]: 2026-02-23 09:39:06.883 280325 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:39:06 localhost nova_compute[280321]: 2026-02-23 09:39:06.884 280325 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. 
execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.341 280325 INFO nova.virt.driver [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.454 280325 INFO nova.compute.provider_config [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.461 280325 DEBUG oslo_concurrency.lockutils [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.461 280325 DEBUG oslo_concurrency.lockutils [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.461 280325 DEBUG oslo_concurrency.lockutils [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.461 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.462 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ******************************************************************************** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.462 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.462 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.462 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.462 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.462 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.463 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.463 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] backdoor_port = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.463 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.463 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.463 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.463 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.463 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.463 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.464 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.464 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.464 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.464 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.464 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] console_host = np0005626465.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.464 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.464 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.465 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] daemon = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.465 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.465 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.465 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.465 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.465 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 
localhost nova_compute[280321]: 2026-02-23 09:39:07.465 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.466 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.466 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.466 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.466 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.466 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.466 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.467 280325 DEBUG 
oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.467 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.467 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.467 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] host = np0005626465.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.467 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.467 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.467 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.468 280325 DEBUG oslo_service.service [None 
req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.468 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.468 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.468 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.468 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.468 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.468 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.469 280325 DEBUG 
oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.469 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.469 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.469 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.469 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.469 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.470 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.470 280325 DEBUG oslo_service.service [None 
req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.470 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.470 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.470 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.470 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.471 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.471 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.471 280325 DEBUG 
oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.471 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.471 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.471 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.471 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.471 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.472 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] 
max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.472 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.472 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.472 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.472 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.472 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.472 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.473 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] metadata_listen_port = 8775 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.473 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.473 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.473 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.473 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] my_block_storage_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.473 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] my_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.473 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.474 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.474 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.474 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.474 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.474 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.474 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.474 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.475 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 
04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.475 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.475 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.475 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.475 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.475 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.475 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.475 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.476 280325 
DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.476 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.476 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.476 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.476 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.476 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.477 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.477 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] reserved_host_cpus = 0 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.477 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.477 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.477 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.477 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.477 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.477 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.478 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.478 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.478 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.478 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.478 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.478 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.478 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.479 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] service_down_time = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.479 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.479 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.479 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.479 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.479 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.479 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.479 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost 
nova_compute[280321]: 2026-02-23 09:39:07.480 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.480 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.480 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.480 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.480 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.480 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.480 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.481 280325 DEBUG oslo_service.service [None 
req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.481 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.481 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.481 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.481 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.481 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.481 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.481 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vcpu_pin_set = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.482 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.482 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.482 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.482 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.482 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.482 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.482 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 
04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.483 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.483 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.483 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.483 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.483 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.483 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.483 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.auth_strategy = keystone 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.484 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.484 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.484 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.484 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.484 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.484 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.485 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] 
api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.485 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.485 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.485 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.485 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.485 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.485 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.486 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] 
api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.486 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.486 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.486 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.486 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.486 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.486 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.487 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] 
api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.487 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.487 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.487 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.487 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.487 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.487 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.488 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.enable_retry_client = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.488 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.488 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.488 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.488 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.488 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.488 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.489 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.memcache_password = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.489 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.489 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.489 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.489 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.489 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.490 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.490 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.memcache_socket_timeout = 1.0 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.490 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.490 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.490 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.490 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.490 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.491 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.491 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.491 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.491 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.491 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.491 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.491 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.491 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.492 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.492 280325 DEBUG 
oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.492 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.492 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.492 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.492 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.492 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.493 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.493 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 
- - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.493 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.493 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.493 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cinder.os_region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.493 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.493 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.494 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.494 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] compute.cpu_dedicated_set = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.494 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.494 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.494 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.494 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.494 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.494 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.495 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] compute.provider_config_location = 
/etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.495 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.495 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.495 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.495 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.495 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.495 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.496 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] 
console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.496 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.496 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.496 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.496 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.496 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.496 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.497 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.endpoint_override = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.497 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.497 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.497 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.497 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.497 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.497 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.497 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost 
nova_compute[280321]: 2026-02-23 09:39:07.498 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.498 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.498 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.498 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.498 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.499 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.499 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.499 280325 
DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.499 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.499 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.499 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.499 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.500 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.500 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.500 280325 DEBUG oslo_service.service [None 
req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.500 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.500 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.500 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.500 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.500 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.501 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.501 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] 
database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.501 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.501 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.501 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.501 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.502 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.502 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.502 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.connection_debug = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.502 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.502 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.503 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.503 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.503 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.503 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.503 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.db_retry_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.503 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.504 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.504 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.504 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.504 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.504 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.505 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.pool_timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.505 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.505 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.505 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.505 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.505 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.506 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.506 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ephemeral_storage_encryption.key_size = 512 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.506 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.506 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.506 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.506 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.507 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.507 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.507 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 
04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.507 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.507 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.507 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.507 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.508 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.508 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.508 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 
2026-02-23 09:39:07.508 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.508 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.508 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.508 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.509 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.509 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.509 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.509 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - 
- - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.509 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.509 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.509 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.510 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.510 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.510 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.510 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.verify_glance_signatures = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.510 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.510 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.510 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.510 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.511 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.511 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.511 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.enable_remotefx = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.511 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.511 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.511 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.511 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.512 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.512 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.512 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.512 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.512 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.512 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.512 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.513 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.513 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.513 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] mks.enabled = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.513 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.513 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.513 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.514 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.514 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.514 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.514 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] 
image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.514 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.514 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.514 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.515 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.515 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.515 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.515 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.collect_timing = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.515 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.515 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.515 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.515 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.516 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.516 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.516 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 
localhost nova_compute[280321]: 2026-02-23 09:39:07.516 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.516 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.516 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.516 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.517 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.517 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.517 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.517 280325 DEBUG 
oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.517 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.517 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.517 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.518 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.518 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.518 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.518 280325 DEBUG oslo_service.service [None 
req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.518 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.518 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.518 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.519 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican.barbican_region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.519 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.519 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.519 280325 DEBUG oslo_service.service [None 
req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.519 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.519 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.519 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.520 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.520 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.520 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.520 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] 
barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.520 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.520 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.520 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.520 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.521 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.521 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.521 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican_service_user.collect_timing = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.521 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.521 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.521 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.521 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.522 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.522 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.522 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vault.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.522 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.522 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.522 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.522 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.523 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.523 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.523 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost 
nova_compute[280321]: 2026-02-23 09:39:07.523 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.523 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.523 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.523 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.523 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.524 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.524 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.524 280325 DEBUG oslo_service.service [None 
req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.524 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.524 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.524 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.524 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.525 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.525 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.525 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] 
keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.525 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.525 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.525 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.525 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.526 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.526 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.526 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.status_code_retry_delay = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.526 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.526 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.526 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.526 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.527 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.527 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.527 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.527 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.527 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.527 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.527 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.528 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.528 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.528 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.528 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.528 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.528 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.528 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.529 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.529 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.529 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost 
nova_compute[280321]: 2026-02-23 09:39:07.529 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.529 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.529 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.529 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.530 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.530 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.530 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost 
nova_compute[280321]: 2026-02-23 09:39:07.530 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.530 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.530 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.530 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.531 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.531 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.531 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 
2026-02-23 09:39:07.531 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.531 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.531 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.531 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.532 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.532 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.532 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.532 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.532 280325 WARNING oslo_config.cfg [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 23 04:39:07 localhost nova_compute[280321]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 23 04:39:07 localhost nova_compute[280321]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 23 04:39:07 localhost nova_compute[280321]: and ``live_migration_inbound_addr`` respectively.
Feb 23 04:39:07 localhost nova_compute[280321]: ). Its value may be silently ignored in the future.
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.532 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.533 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.533 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.533 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.533 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.533 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.533 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.533 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.534 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.534 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.534 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.534 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.534 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.534 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.535 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.535 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.536 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.536 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.536 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.rbd_secret_uuid = f1fea371-cb69-578d-a3d0-b5c472a84b46 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.536 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.536 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.536 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.536 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.537 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.537 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.537 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.537 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.537 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.537 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.537 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.538 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.538 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.538 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.538 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.538 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.538 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.539 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.539 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.539 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.539 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.539 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.539 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.539 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.539 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.540 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.540 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.540 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.540 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.540 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.540 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.541 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.541 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.541 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.541 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.541 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.541 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.541 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.542 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.542 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.542 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.542 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.542 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.542 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.542 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.543 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.543 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.543 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.543 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.543 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.543 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.543 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.544 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.544 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.544 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.544 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.544 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.544 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.544 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.545 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.545 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.545 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.545 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.545 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.545 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.546 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.546 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.546 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.546 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.546 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.546 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.546 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.547 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.547 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.547 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.547 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.547 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.547 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.547 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.548 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.548 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.548 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.548 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.548 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.548 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.548 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.548 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.549 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.549 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.549 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.549 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.549 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.549 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.549 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.550 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.550 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.550 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.550 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.550 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.550 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.550 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.551 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.551 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.551 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.551 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.551 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.551 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.551 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.552 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.552 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.552 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.552 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] quota.injected_files = 5 log_opt_values
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.552 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.552 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.552 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.553 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.553 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.553 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.553 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost 
nova_compute[280321]: 2026-02-23 09:39:07.553 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.553 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.554 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.554 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.554 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.554 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.554 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 
04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.554 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.554 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.555 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.555 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.555 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.555 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.555 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.555 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.555 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.556 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.556 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.556 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.556 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.556 280325 
DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.556 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.556 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.557 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.557 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.557 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.557 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.max_instances_per_host = 
50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.557 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.557 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.557 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.558 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.558 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.558 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.558 280325 DEBUG oslo_service.service [None 
req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.558 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.558 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.558 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.559 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.559 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.559 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 
2026-02-23 09:39:07.559 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.559 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.559 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.559 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.560 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.560 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.560 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 
2026-02-23 09:39:07.560 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.560 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.560 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.560 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.561 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.561 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.561 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.561 280325 DEBUG oslo_service.service 
[None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.561 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.561 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.561 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.562 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.562 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.562 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.562 280325 DEBUG oslo_service.service [None 
req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.563 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.563 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.563 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.563 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.563 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.563 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.563 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - 
- -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.564 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.564 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.564 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.564 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.564 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.564 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.564 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vendordata_dynamic_auth.auth_type = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.564 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.565 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.565 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.565 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.565 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.565 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.565 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] 
vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.565 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.566 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.566 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.566 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.566 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.566 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.566 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.datastore_regex = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.567 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.567 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.567 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.567 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.567 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.567 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.567 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost 
nova_compute[280321]: 2026-02-23 09:39:07.568 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.568 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.568 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.568 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.568 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.568 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.568 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 
09:39:07.568 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.569 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.569 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.569 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.569 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.569 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.570 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.570 
280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.570 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.570 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.570 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.570 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.571 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.571 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.571 280325 DEBUG oslo_service.service [None 
req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.571 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.571 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.571 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.571 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.571 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.572 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost 
nova_compute[280321]: 2026-02-23 09:39:07.572 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.572 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.572 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.572 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.572 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.572 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.573 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.573 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.573 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.573 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.573 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.573 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.573 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.574 280325 DEBUG oslo_service.service [None 
req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.574 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.574 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.574 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.574 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.574 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.574 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.574 280325 DEBUG oslo_service.service [None 
req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.575 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.575 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.575 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.575 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.575 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.575 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.575 280325 DEBUG oslo_service.service [None 
req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.576 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.576 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.576 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.576 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.576 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.577 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.577 280325 DEBUG oslo_service.service [None 
req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.577 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.577 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.577 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.577 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.577 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.578 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 
localhost nova_compute[280321]: 2026-02-23 09:39:07.578 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.578 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.578 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.578 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.578 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.578 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.578 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 
04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.579 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.579 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.579 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.579 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.579 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.579 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.579 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] 
oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.580 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.580 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.580 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.580 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.580 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.580 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.580 
280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.581 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.581 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.581 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.581 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.581 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.581 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.581 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.581 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.582 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.582 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.582 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.582 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.582 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_notifications.driver = ['noop'] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.582 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.583 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.583 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.583 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.583 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.583 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.583 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.583 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.583 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.584 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.584 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.584 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.584 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.584 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.584 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.584 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.584 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.585 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.585 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.585 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.585 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
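The dump above shows oslo_config's startup behavior: every registered option is logged at DEBUG with its effective value, except that options registered as secret (such as `oslo_limit.password` and the `transport_url`, which can embed credentials) are printed as `****`. A minimal, hypothetical sketch of that masking behavior in plain Python (this is not nova's or oslo_config's actual code; `SECRET_SUFFIXES` is an illustrative heuristic):

```python
# Sketch of secret-option masking as seen in the log: secret values are
# rendered as "****" while ordinary values are printed verbatim.
SECRET_SUFFIXES = ("password", "transport_url")  # illustrative, not oslo's list

def dump_opts(opts: dict) -> list:
    """Render 'group.option = value' lines, masking secret options."""
    lines = []
    for name, value in sorted(opts.items()):
        shown = "****" if name.endswith(SECRET_SUFFIXES) else value
        lines.append(f"{name} = {shown}")
    return lines

opts = {
    "oslo_limit.auth_url": "http://keystone-internal.openstack.svc:5000",
    "oslo_limit.password": "s3cret",
    "oslo_messaging_notifications.transport_url": "rabbit://user:pw@host/",
}
for line in dump_opts(opts):
    print(line)
```

In the real library, masking is driven by the option's `secret=True` registration flag rather than a name heuristic.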
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.585 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.585 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.585 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.586 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.586 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.586 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.586 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.586 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.586 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.586 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.587 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.587 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.587 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.587 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.587 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.587 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.587 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.587 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.588 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.588 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.588 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.588 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.588 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.588 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.588 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.589 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.589 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.589 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.589 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.589 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.589 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.589 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.590 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.590 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.590 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.590 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.590 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.590 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.590 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.591 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.591 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.591 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.591 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.591 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.591 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.591 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.592 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.592 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.592 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.592 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.592 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.592 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.592 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.593 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.593 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.593 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.593 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.593 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.593 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.593 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.594 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.594 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.594 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.594 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.594 280325 DEBUG oslo_service.service [None req-bda137bd-2b29-4286-a8e1-6cb99ccd52c7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.595 280325 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260220085704.5cfeecb.el9)
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.613 280325 INFO nova.virt.node [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Determined node identity 9df77b74-d7d6-46a8-93cb-cadec85557a4 from /var/lib/nova/compute_id
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.613 280325 DEBUG nova.virt.libvirt.host [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.614 280325 DEBUG nova.virt.libvirt.host [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.614 280325 DEBUG nova.virt.libvirt.host [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.614 280325 DEBUG nova.virt.libvirt.host [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.625 280325 DEBUG nova.virt.libvirt.host [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.627 280325 DEBUG nova.virt.libvirt.host [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.628 280325 INFO nova.virt.libvirt.driver [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Connection event '1' reason 'None'
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.632 280325 INFO nova.virt.libvirt.host [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Libvirt host capabilities
Feb 23 04:39:07 localhost nova_compute[280321]: [capabilities XML elided: the markup was stripped in this capture. Recoverable values: host UUID 8bb105a9-4892-4676-ace9-e931084902e3; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory 16116612 / 4029153 pages; security models selinux (DOI 0, labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (DOI 0, +107:+107); hvm guest support for 32-bit and 64-bit words via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (pc), pc-q35-rhel9.8.0 (q35), pc-q35-rhel9.6.0, pc-q35-rhel9.4.0, pc-q35-rhel9.2.0, pc-q35-rhel9.0.0, pc-q35-rhel8.6.0, pc-q35-rhel8.5.0, pc-q35-rhel8.4.0, pc-q35-rhel8.3.0, pc-q35-rhel8.2.0, pc-q35-rhel8.1.0, pc-q35-rhel8.0.0, pc-q35-rhel7.6.0]
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.637 280325 DEBUG nova.virt.libvirt.host [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.640 280325 DEBUG nova.virt.libvirt.host [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 23 04:39:07 localhost nova_compute[280321]: [domain capabilities XML elided: the markup was stripped in this capture. Recoverable values: emulator /usr/libexec/qemu-kvm; domain type kvm; machine pc-i440fx-rhel7.6.0; arch i686; loader /usr/share/OVMF/OVMF_CODE.secboot.fd with types rom and pflash, readonly yes/no, secure no; features on/off; host CPU model EPYC-Rome, vendor AMD; guest CPU model list beginning 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1 (log truncated here)]
04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Broadwell-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Broadwell-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Broadwell-v4 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-noTSX Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-v4 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-v5 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: ClearwaterForest Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: ClearwaterForest-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Conroe Feb 23 04:39:07 localhost nova_compute[280321]: Conroe-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Cooperlake Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cooperlake-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 
localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cooperlake-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Denverton Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Denverton-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 
04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Denverton-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Denverton-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Dhyana Feb 23 04:39:07 localhost nova_compute[280321]: Dhyana-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Dhyana-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Genoa Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 
localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Genoa-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Genoa-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 
localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-IBPB Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Milan Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Milan-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 
04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Milan-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Milan-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: 
EPYC-Rome-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v4 Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v5 Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Turin Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 
23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Turin-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v1
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v2
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v3
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v4
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v5
Feb 23 04:39:07 localhost nova_compute[280321]: GraniteRapids
Feb 23 04:39:07 localhost nova_compute[280321]: GraniteRapids-v1
Feb 23 04:39:07 localhost nova_compute[280321]: GraniteRapids-v2
Feb 23 04:39:07 localhost nova_compute[280321]: GraniteRapids-v3
Feb 23 04:39:07 localhost nova_compute[280321]: Haswell
Feb 23 04:39:07 localhost nova_compute[280321]: Haswell-IBRS
Feb 23 04:39:07 localhost nova_compute[280321]: Haswell-noTSX
Feb 23 04:39:07 localhost nova_compute[280321]: Haswell-noTSX-IBRS
Feb 23 04:39:07 localhost nova_compute[280321]: Haswell-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Haswell-v2
Feb 23 04:39:07 localhost nova_compute[280321]: Haswell-v3
Feb 23 04:39:07 localhost nova_compute[280321]: Haswell-v4
Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server
Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server-noTSX
Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server-v2
Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server-v3
Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server-v4
Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server-v5
Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server-v6
Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server-v7
Feb 23 04:39:07 localhost nova_compute[280321]: IvyBridge
Feb 23 04:39:07 localhost nova_compute[280321]: IvyBridge-IBRS
Feb 23 04:39:07 localhost nova_compute[280321]: IvyBridge-v1
Feb 23 04:39:07 localhost nova_compute[280321]: IvyBridge-v2
Feb 23 04:39:07 localhost nova_compute[280321]: KnightsMill
Feb 23 04:39:07 localhost nova_compute[280321]: KnightsMill-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Nehalem
Feb 23 04:39:07 localhost nova_compute[280321]: Nehalem-IBRS
Feb 23 04:39:07 localhost nova_compute[280321]: Nehalem-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Nehalem-v2
Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G1
Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G1-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G2
Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G2-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G3
Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G3-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G4
Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G4-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G5
Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G5-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Penryn
Feb 23 04:39:07 localhost nova_compute[280321]: Penryn-v1
Feb 23 04:39:07 localhost nova_compute[280321]: SandyBridge
Feb 23 04:39:07 localhost nova_compute[280321]: SandyBridge-IBRS
Feb 23 04:39:07 localhost nova_compute[280321]: SandyBridge-v1
Feb 23 04:39:07 localhost nova_compute[280321]: SandyBridge-v2
Feb 23 04:39:07 localhost nova_compute[280321]: SapphireRapids
Feb 23 04:39:07 localhost nova_compute[280321]: SapphireRapids-v1
Feb 23 04:39:07 localhost nova_compute[280321]: SapphireRapids-v2
Feb 23 04:39:07 localhost nova_compute[280321]: SapphireRapids-v3
Feb 23 04:39:07 localhost nova_compute[280321]: SapphireRapids-v4
Feb 23 04:39:07 localhost nova_compute[280321]: SierraForest
Feb 23 04:39:07 localhost nova_compute[280321]: SierraForest-v1
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: SierraForest-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: SierraForest-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Client Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Client-IBRS Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Client-noTSX-IBRS Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Client-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Client-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Client-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 
23 04:39:07 localhost nova_compute[280321]: Skylake-Client-v4 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Server Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Server-IBRS Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Server-noTSX-IBRS Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 
23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Server-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Server-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Server-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Server-v4 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Server-v5 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Snowridge Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 
04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Snowridge-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Snowridge-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Snowridge-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Snowridge-v4 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Westmere Feb 23 04:39:07 localhost nova_compute[280321]: Westmere-IBRS Feb 23 04:39:07 localhost nova_compute[280321]: Westmere-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Westmere-v2 Feb 23 04:39:07 localhost nova_compute[280321]: athlon Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: athlon-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: core2duo Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: core2duo-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: coreduo Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: coreduo-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: kvm32 Feb 23 04:39:07 localhost nova_compute[280321]: kvm32-v1 Feb 23 04:39:07 localhost nova_compute[280321]: kvm64 Feb 23 04:39:07 localhost nova_compute[280321]: kvm64-v1 Feb 23 04:39:07 localhost nova_compute[280321]: n270 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 
04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: n270-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: pentium Feb 23 04:39:07 localhost nova_compute[280321]: pentium-v1 Feb 23 04:39:07 localhost nova_compute[280321]: pentium2 Feb 23 04:39:07 localhost nova_compute[280321]: pentium2-v1 Feb 23 04:39:07 localhost nova_compute[280321]: pentium3 Feb 23 04:39:07 localhost nova_compute[280321]: pentium3-v1 Feb 23 04:39:07 localhost nova_compute[280321]: phenom Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: phenom-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: qemu32 Feb 23 04:39:07 localhost nova_compute[280321]: qemu32-v1 Feb 23 04:39:07 localhost nova_compute[280321]: qemu64 Feb 23 04:39:07 localhost nova_compute[280321]: qemu64-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: file Feb 23 04:39:07 localhost nova_compute[280321]: anonymous Feb 23 04:39:07 localhost nova_compute[280321]: memfd Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: disk Feb 23 04:39:07 localhost nova_compute[280321]: cdrom Feb 23 04:39:07 localhost nova_compute[280321]: floppy Feb 23 04:39:07 localhost nova_compute[280321]: lun Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: ide Feb 23 04:39:07 localhost nova_compute[280321]: fdc Feb 23 04:39:07 localhost nova_compute[280321]: scsi Feb 23 04:39:07 localhost nova_compute[280321]: virtio Feb 23 04:39:07 localhost nova_compute[280321]: usb Feb 23 04:39:07 localhost nova_compute[280321]: sata Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: virtio Feb 23 04:39:07 localhost nova_compute[280321]: virtio-transitional Feb 23 04:39:07 localhost nova_compute[280321]: virtio-non-transitional Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: vnc Feb 23 04:39:07 localhost nova_compute[280321]: egl-headless Feb 23 04:39:07 localhost nova_compute[280321]: dbus Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: subsystem Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: default Feb 23 04:39:07 localhost nova_compute[280321]: mandatory Feb 23 04:39:07 localhost nova_compute[280321]: requisite Feb 23 04:39:07 localhost nova_compute[280321]: optional Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: usb Feb 23 04:39:07 localhost nova_compute[280321]: pci Feb 23 04:39:07 localhost nova_compute[280321]: scsi Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: virtio Feb 23 04:39:07 localhost nova_compute[280321]: virtio-transitional Feb 23 04:39:07 localhost nova_compute[280321]: virtio-non-transitional Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: random Feb 23 04:39:07 localhost nova_compute[280321]: egd Feb 23 04:39:07 localhost nova_compute[280321]: builtin Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: path Feb 23 04:39:07 localhost nova_compute[280321]: handle Feb 23 04:39:07 localhost nova_compute[280321]: virtiofs Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: tpm-tis Feb 23 04:39:07 localhost nova_compute[280321]: tpm-crb Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: emulator Feb 23 04:39:07 localhost nova_compute[280321]: external Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: 2.0 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 
04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: usb Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: pty Feb 23 04:39:07 localhost nova_compute[280321]: unix Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: qemu Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: builtin Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: default Feb 23 04:39:07 localhost nova_compute[280321]: passt Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: isa Feb 23 04:39:07 localhost nova_compute[280321]: hyperv Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: null Feb 23 04:39:07 localhost nova_compute[280321]: vc Feb 23 04:39:07 localhost nova_compute[280321]: pty Feb 23 04:39:07 localhost nova_compute[280321]: dev Feb 23 04:39:07 localhost nova_compute[280321]: file Feb 23 04:39:07 localhost nova_compute[280321]: pipe Feb 23 
04:39:07 localhost nova_compute[280321]: stdio Feb 23 04:39:07 localhost nova_compute[280321]: udp Feb 23 04:39:07 localhost nova_compute[280321]: tcp Feb 23 04:39:07 localhost nova_compute[280321]: unix Feb 23 04:39:07 localhost nova_compute[280321]: qemu-vdagent Feb 23 04:39:07 localhost nova_compute[280321]: dbus Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: relaxed Feb 23 04:39:07 localhost nova_compute[280321]: vapic Feb 23 04:39:07 localhost nova_compute[280321]: spinlocks Feb 23 04:39:07 localhost nova_compute[280321]: vpindex Feb 23 04:39:07 localhost nova_compute[280321]: runtime Feb 23 04:39:07 localhost nova_compute[280321]: synic Feb 23 04:39:07 localhost nova_compute[280321]: stimer Feb 23 04:39:07 localhost nova_compute[280321]: reset Feb 23 04:39:07 localhost nova_compute[280321]: vendor_id Feb 23 04:39:07 localhost nova_compute[280321]: frequencies Feb 23 04:39:07 localhost nova_compute[280321]: reenlightenment Feb 23 04:39:07 localhost nova_compute[280321]: tlbflush Feb 23 04:39:07 localhost nova_compute[280321]: ipi Feb 23 04:39:07 localhost nova_compute[280321]: avic Feb 23 04:39:07 localhost nova_compute[280321]: emsr_bitmap Feb 23 04:39:07 
Feb 23 04:39:07 localhost nova_compute[280321]: [recovered values, markup stripped:] xmm_input | 4095 on off off "Linux KVM Hv"
Feb 23 04:39:07 localhost nova_compute[280321]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.645 280325 DEBUG nova.virt.libvirt.host [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 23 04:39:07 localhost nova_compute[280321]: [domain capabilities XML for i686/q35, markup again stripped; recovered values, grouping inferred:] path: /usr/libexec/qemu-kvm | domain: kvm | machine: pc-q35-rhel9.8.0 | arch: i686 | loader value: /usr/share/OVMF/OVMF_CODE.secboot.fd | loader type: rom pflash | readonly: yes no | secure: no | on off | on off | host CPU model: EPYC-Rome | vendor: AMD
Feb 23 04:39:07 localhost nova_compute[280321]: [recovered CPU <model> entries (list continues):] 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4 Cascadelake-Server Cascadelake-Server-noTSX
localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 
23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-v4 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-v5 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 
localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: ClearwaterForest Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: ClearwaterForest-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Conroe Feb 23 04:39:07 localhost nova_compute[280321]: Conroe-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Cooperlake Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cooperlake-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 
localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cooperlake-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Denverton Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Denverton-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Denverton-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Denverton-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Dhyana Feb 23 04:39:07 localhost nova_compute[280321]: Dhyana-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Dhyana-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Genoa Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: 
Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Genoa-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Genoa-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 
23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-IBPB Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Milan Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Milan-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: 
Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Milan-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Milan-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: EPYC-Rome-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v4 Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v5 Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Turin Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Turin-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v1 Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v2 Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v4 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v5 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: GraniteRapids Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 
Feb 23 04:39:07 localhost nova_compute[280321]: [multi-line libvirt capabilities XML payload garbled in log capture; only the CPU model names survive. Recoverable models, in logged order: GraniteRapids-v1, GraniteRapids-v2, GraniteRapids-v3, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SapphireRapids-v4, SierraForest, SierraForest-v1, SierraForest-v2, SierraForest-v3]
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Client Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Skylake-Client-IBRS Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Client-noTSX-IBRS Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Client-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Client-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Client-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Client-v4 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Server Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Server-IBRS Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Server-noTSX-IBRS Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Server-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Server-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Server-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 
23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Server-v4 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Skylake-Server-v5 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Snowridge Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Snowridge-v1 Feb 23 04:39:07 
localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Snowridge-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Snowridge-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Snowridge-v4 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Westmere 
Feb 23 04:39:07 localhost nova_compute[280321]: Westmere-IBRS Feb 23 04:39:07 localhost nova_compute[280321]: Westmere-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Westmere-v2 Feb 23 04:39:07 localhost nova_compute[280321]: athlon Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: athlon-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: core2duo Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: core2duo-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: coreduo Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: coreduo-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: kvm32 Feb 23 04:39:07 localhost nova_compute[280321]: kvm32-v1 Feb 23 04:39:07 localhost nova_compute[280321]: kvm64 Feb 23 04:39:07 localhost nova_compute[280321]: kvm64-v1 Feb 23 04:39:07 localhost nova_compute[280321]: n270 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: n270-v1 Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: pentium Feb 23 04:39:07 localhost nova_compute[280321]: pentium-v1 Feb 23 04:39:07 localhost nova_compute[280321]: pentium2 Feb 23 04:39:07 localhost nova_compute[280321]: pentium2-v1 Feb 23 04:39:07 localhost nova_compute[280321]: pentium3 Feb 23 04:39:07 localhost nova_compute[280321]: pentium3-v1 Feb 23 04:39:07 localhost nova_compute[280321]: phenom Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: phenom-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: qemu32 Feb 23 04:39:07 localhost nova_compute[280321]: qemu32-v1 Feb 23 04:39:07 localhost nova_compute[280321]: qemu64 Feb 23 04:39:07 localhost nova_compute[280321]: qemu64-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: file Feb 23 04:39:07 localhost nova_compute[280321]: anonymous Feb 23 04:39:07 localhost nova_compute[280321]: memfd Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: disk Feb 23 04:39:07 localhost nova_compute[280321]: cdrom Feb 23 04:39:07 localhost nova_compute[280321]: floppy Feb 23 04:39:07 localhost nova_compute[280321]: 
lun Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: fdc Feb 23 04:39:07 localhost nova_compute[280321]: scsi Feb 23 04:39:07 localhost nova_compute[280321]: virtio Feb 23 04:39:07 localhost nova_compute[280321]: usb Feb 23 04:39:07 localhost nova_compute[280321]: sata Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: virtio Feb 23 04:39:07 localhost nova_compute[280321]: virtio-transitional Feb 23 04:39:07 localhost nova_compute[280321]: virtio-non-transitional Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: vnc Feb 23 04:39:07 localhost nova_compute[280321]: egl-headless Feb 23 04:39:07 localhost nova_compute[280321]: dbus Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: subsystem Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: default Feb 23 04:39:07 localhost nova_compute[280321]: mandatory Feb 23 04:39:07 localhost nova_compute[280321]: requisite Feb 23 04:39:07 localhost nova_compute[280321]: optional Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: usb Feb 23 04:39:07 localhost nova_compute[280321]: pci Feb 23 04:39:07 localhost nova_compute[280321]: scsi Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 
04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: virtio Feb 23 04:39:07 localhost nova_compute[280321]: virtio-transitional Feb 23 04:39:07 localhost nova_compute[280321]: virtio-non-transitional Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: random Feb 23 04:39:07 localhost nova_compute[280321]: egd Feb 23 04:39:07 localhost nova_compute[280321]: builtin Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: path Feb 23 04:39:07 localhost nova_compute[280321]: handle Feb 23 04:39:07 localhost nova_compute[280321]: virtiofs Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: tpm-tis Feb 23 04:39:07 localhost nova_compute[280321]: tpm-crb Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: emulator Feb 23 04:39:07 localhost nova_compute[280321]: external Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: 2.0 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: usb Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: 
Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: pty Feb 23 04:39:07 localhost nova_compute[280321]: unix Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: qemu Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: builtin Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: default Feb 23 04:39:07 localhost nova_compute[280321]: passt Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: isa Feb 23 04:39:07 localhost nova_compute[280321]: hyperv Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: null Feb 23 04:39:07 localhost nova_compute[280321]: vc Feb 23 04:39:07 localhost nova_compute[280321]: pty Feb 23 04:39:07 localhost nova_compute[280321]: dev Feb 23 04:39:07 localhost nova_compute[280321]: file Feb 23 04:39:07 localhost nova_compute[280321]: pipe Feb 23 04:39:07 localhost nova_compute[280321]: stdio Feb 23 04:39:07 localhost nova_compute[280321]: udp Feb 23 04:39:07 localhost nova_compute[280321]: tcp Feb 23 04:39:07 localhost nova_compute[280321]: unix Feb 23 04:39:07 localhost 
nova_compute[280321]: qemu-vdagent Feb 23 04:39:07 localhost nova_compute[280321]: dbus Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: relaxed Feb 23 04:39:07 localhost nova_compute[280321]: vapic Feb 23 04:39:07 localhost nova_compute[280321]: spinlocks Feb 23 04:39:07 localhost nova_compute[280321]: vpindex Feb 23 04:39:07 localhost nova_compute[280321]: runtime Feb 23 04:39:07 localhost nova_compute[280321]: synic Feb 23 04:39:07 localhost nova_compute[280321]: stimer Feb 23 04:39:07 localhost nova_compute[280321]: reset Feb 23 04:39:07 localhost nova_compute[280321]: vendor_id Feb 23 04:39:07 localhost nova_compute[280321]: frequencies Feb 23 04:39:07 localhost nova_compute[280321]: reenlightenment Feb 23 04:39:07 localhost nova_compute[280321]: tlbflush Feb 23 04:39:07 localhost nova_compute[280321]: ipi Feb 23 04:39:07 localhost nova_compute[280321]: avic Feb 23 04:39:07 localhost nova_compute[280321]: emsr_bitmap Feb 23 04:39:07 localhost nova_compute[280321]: xmm_input Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: 4095 Feb 23 04:39:07 localhost 
nova_compute[280321]: on Feb 23 04:39:07 localhost nova_compute[280321]: off Feb 23 04:39:07 localhost nova_compute[280321]: off Feb 23 04:39:07 localhost nova_compute[280321]: Linux KVM Hv Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.718 280325 DEBUG nova.virt.libvirt.host [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.721 280325 DEBUG nova.virt.libvirt.volume.mount [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.725 280325 DEBUG nova.virt.libvirt.host [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: /usr/libexec/qemu-kvm Feb 23 04:39:07 localhost nova_compute[280321]: kvm Feb 23 04:39:07 localhost nova_compute[280321]: pc-i440fx-rhel7.6.0 Feb 23 04:39:07 localhost nova_compute[280321]: x86_64 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 
04:39:07 localhost nova_compute[280321]: /usr/share/OVMF/OVMF_CODE.secboot.fd Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: rom Feb 23 04:39:07 localhost nova_compute[280321]: pflash Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: yes Feb 23 04:39:07 localhost nova_compute[280321]: no Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: no Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: on Feb 23 04:39:07 localhost nova_compute[280321]: off Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: on Feb 23 04:39:07 localhost nova_compute[280321]: off Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome Feb 23 04:39:07 localhost nova_compute[280321]: AMD Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: 
Feb 23 04:39:07 localhost nova_compute[280321]: 486
Feb 23 04:39:07 localhost nova_compute[280321]: 486-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Broadwell
Feb 23 04:39:07 localhost nova_compute[280321]: Broadwell-IBRS
Feb 23 04:39:07 localhost nova_compute[280321]: Broadwell-noTSX
Feb 23 04:39:07 localhost nova_compute[280321]: Broadwell-noTSX-IBRS
Feb 23 04:39:07 localhost nova_compute[280321]: Broadwell-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Broadwell-v2
Feb 23 04:39:07 localhost nova_compute[280321]: Broadwell-v3
Feb 23 04:39:07 localhost nova_compute[280321]: Broadwell-v4
Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server
Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-noTSX
Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-v2
Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-v3
Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-v4
Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-v5
Feb 23 04:39:07 localhost nova_compute[280321]: ClearwaterForest
Feb 23 04:39:07 localhost nova_compute[280321]: ClearwaterForest-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Conroe
Feb 23 04:39:07 localhost nova_compute[280321]: Conroe-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Cooperlake
Feb 23 04:39:07 localhost nova_compute[280321]: Cooperlake-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Cooperlake-v2
Feb 23 04:39:07 localhost nova_compute[280321]: Denverton
Feb 23 04:39:07 localhost nova_compute[280321]: Denverton-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Denverton-v2
Feb 23 04:39:07 localhost nova_compute[280321]: Denverton-v3
Feb 23 04:39:07 localhost nova_compute[280321]: Dhyana
Feb 23 04:39:07 localhost nova_compute[280321]: Dhyana-v1
Feb 23 04:39:07 localhost nova_compute[280321]: Dhyana-v2
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Genoa
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Genoa-v1
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Genoa-v2
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-IBPB
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Milan
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Milan-v1
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Milan-v2
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Milan-v3
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v1
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v2
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v3
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v4
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v5
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Turin
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Turin-v1
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v1
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v2
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v3
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v4
Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v5
Feb 23 04:39:07 localhost nova_compute[280321]: GraniteRapids
Feb 23 04:39:07 localhost nova_compute[280321]: GraniteRapids-v1
Feb 23 04:39:07 localhost nova_compute[280321]: GraniteRapids-v2
Feb 23 04:39:07 localhost nova_compute[280321]: GraniteRapids-v3
Feb 23 04:39:07 localhost nova_compute[280321]: Haswell
Feb 23 04:39:07 localhost
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Haswell-IBRS Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Haswell-noTSX Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Haswell-noTSX-IBRS Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Haswell-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Haswell-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Haswell-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Haswell-v4 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server-noTSX Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 
04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: 
Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server-v4 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server-v5 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server-v6 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Icelake-Server-v7 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 
localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: IvyBridge Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: IvyBridge-IBRS Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: IvyBridge-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: IvyBridge-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: KnightsMill Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: KnightsMill-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Nehalem Feb 23 04:39:07 localhost nova_compute[280321]: Nehalem-IBRS Feb 23 04:39:07 localhost nova_compute[280321]: Nehalem-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Nehalem-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G1 Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G1-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G2 Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G2-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G3 Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G3-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G4 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G4-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G5 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Opteron_G5-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Penryn Feb 23 04:39:07 localhost nova_compute[280321]: Penryn-v1 Feb 23 04:39:07 localhost nova_compute[280321]: SandyBridge Feb 23 04:39:07 localhost nova_compute[280321]: SandyBridge-IBRS Feb 23 04:39:07 localhost nova_compute[280321]: SandyBridge-v1 Feb 23 04:39:07 localhost nova_compute[280321]: SandyBridge-v2 Feb 23 04:39:07 localhost nova_compute[280321]: SapphireRapids Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: SapphireRapids-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: SapphireRapids-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: SapphireRapids-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: SapphireRapids-v4 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
Feb 23 04:39:07 localhost nova_compute[280321]: [multi-line libvirt domainCapabilities XML; XML tags lost and per-line syslog prefixes collapsed — recoverable values follow]
Feb 23 04:39:07 localhost nova_compute[280321]: CPU models: SierraForest, SierraForest-v1, SierraForest-v2, SierraForest-v3, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Feb 23 04:39:07 localhost nova_compute[280321]: memory backing: file, anonymous, memfd
Feb 23 04:39:07 localhost nova_compute[280321]: disk devices: disk, cdrom, floppy, lun; buses: ide, fdc, scsi, virtio, usb, sata; virtio models: virtio, virtio-transitional, virtio-non-transitional
Feb 23 04:39:07 localhost nova_compute[280321]: graphics: vnc, egl-headless, dbus
Feb 23 04:39:07 localhost nova_compute[280321]: hostdev mode: subsystem; startup policy: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi; virtio models: virtio, virtio-transitional, virtio-non-transitional
Feb 23 04:39:07 localhost nova_compute[280321]: rng backends: random, egd, builtin
Feb 23 04:39:07 localhost nova_compute[280321]: filesystem drivers: path, handle, virtiofs
Feb 23 04:39:07 localhost nova_compute[280321]: tpm models: tpm-tis, tpm-crb; backends: emulator, external; backend version: 2.0
Feb 23 04:39:07 localhost nova_compute[280321]: redirdev bus: usb; channel types: pty, unix; crypto backends: qemu, builtin; interface backends: default, passt; panic models: isa, hyperv
Feb 23 04:39:07 localhost nova_compute[280321]: console/serial types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Feb 23 04:39:07 localhost nova_compute[280321]: hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; spinlocks retries: 4095; toggles: on, off, off; vendor_id: Linux KVM Hv
Feb 23 04:39:07 localhost nova_compute[280321]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb 23 04:39:07 localhost nova_compute[280321]: 2026-02-23 09:39:07.791 280325 DEBUG nova.virt.libvirt.host [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 23 04:39:07 localhost nova_compute[280321]: [second domainCapabilities XML, prefixes collapsed] path: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-q35-rhel9.8.0; arch: x86_64; os firmware: efi
Feb 23 04:39:07 localhost nova_compute[280321]: firmware images: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd, /usr/share/edk2/ovmf/OVMF_CODE.fd, /usr/share/edk2/ovmf/OVMF.amdsev.fd, /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: yes, no; toggles: on, off; on, off
Feb 23 04:39:07 localhost nova_compute[280321]: host CPU model: EPYC-Rome; vendor: AMD
Feb 23 04:39:07 localhost nova_compute[280321]: CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4
localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cascadelake-Server-v5 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: ClearwaterForest Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 
04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: ClearwaterForest-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 
localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Conroe Feb 23 04:39:07 localhost nova_compute[280321]: Conroe-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Cooperlake Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 
localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cooperlake-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Cooperlake-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 
localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Denverton Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Denverton-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Denverton-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Denverton-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Dhyana Feb 23 04:39:07 localhost nova_compute[280321]: Dhyana-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Dhyana-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Genoa Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 
23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Genoa-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 
localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Genoa-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-IBPB Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Milan Feb 23 
04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Milan-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Milan-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Milan-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v2 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v4 Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Rome-v5 Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Turin Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 
localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-Turin-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v1 Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v2 Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v3 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v4 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: EPYC-v5 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: GraniteRapids Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 
23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: GraniteRapids-v1 Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost nova_compute[280321]: Feb 23 04:39:07 localhost 
nova_compute[280321]: [libvirt domain-capabilities CPU model dump; XML markup lost in extraction — recoverable model names: GraniteRapids-v2, GraniteRapids-v3, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2] Feb 23
04:42:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15482 DF PROTO=TCP SPT=33196 DPT=9102 SEQ=3602662931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A012FFD430000000001030307) Feb 23 04:42:19 localhost rsyslogd[758]: imjournal: 1718 messages lost due to rate-limiting (20000 allowed within 600 seconds) Feb 23 04:42:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:42:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:42:24 localhost podman[283813]: 2026-02-23 09:42:24.970105212 +0000 UTC m=+0.047885690 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:42:25 localhost podman[283814]: 2026-02-23 09:42:25.029547464 +0000 UTC m=+0.105345452 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 
'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible) Feb 23 04:42:25 localhost podman[283814]: 2026-02-23 09:42:25.040795006 +0000 UTC m=+0.116593044 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:42:25 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:42:25 localhost podman[283813]: 2026-02-23 09:42:25.055144345 +0000 UTC m=+0.132924833 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 23 04:42:25 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:42:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:8b:ea:9c MACDST=fa:16:3e:11:f4:23 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15483 DF PROTO=TCP SPT=33196 DPT=9102 SEQ=3602662931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A01301D840000000001030307) Feb 23 04:42:29 localhost sshd[283884]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:42:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:42:29 localhost systemd[1]: Created slice User Slice of UID 1003. Feb 23 04:42:29 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Feb 23 04:42:29 localhost systemd-logind[759]: New session 61 of user tripleo-admin. Feb 23 04:42:29 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. 
Feb 23 04:42:29 localhost systemd[1]: Starting User Manager for UID 1003... Feb 23 04:42:29 localhost podman[283886]: 2026-02-23 09:42:29.332073044 +0000 UTC m=+0.103574638 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 04:42:29 localhost podman[283886]: 2026-02-23 09:42:29.341244154 +0000 UTC m=+0.112745778 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': 
['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 04:42:29 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:42:29 localhost systemd[283904]: Queued start job for default target Main User Target. Feb 23 04:42:29 localhost systemd[283904]: Created slice User Application Slice. Feb 23 04:42:29 localhost systemd[283904]: Started Mark boot as successful after the user session has run 2 minutes. Feb 23 04:42:29 localhost systemd[283904]: Started Daily Cleanup of User's Temporary Directories. Feb 23 04:42:29 localhost systemd[283904]: Reached target Paths. Feb 23 04:42:29 localhost systemd[283904]: Reached target Timers. Feb 23 04:42:29 localhost systemd[283904]: Starting D-Bus User Message Bus Socket... Feb 23 04:42:29 localhost systemd[283904]: Starting Create User's Volatile Files and Directories... Feb 23 04:42:29 localhost systemd[283904]: Listening on D-Bus User Message Bus Socket. Feb 23 04:42:29 localhost systemd[283904]: Reached target Sockets. Feb 23 04:42:29 localhost systemd[283904]: Finished Create User's Volatile Files and Directories. Feb 23 04:42:29 localhost systemd[283904]: Reached target Basic System. Feb 23 04:42:29 localhost systemd[283904]: Reached target Main User Target. Feb 23 04:42:29 localhost systemd[283904]: Startup finished in 154ms. Feb 23 04:42:29 localhost systemd[1]: Started User Manager for UID 1003. Feb 23 04:42:29 localhost systemd[1]: Started Session 61 of User tripleo-admin. 
Feb 23 04:42:29 localhost sshd[283927]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:42:30 localhost python3[284056]: ansible-ansible.builtin.blockinfile Invoked with marker_begin=BEGIN ceph firewall rules marker_end=END ceph firewall rules path=/etc/nftables/edpm-rules.nft mode=0644 block=# 100 ceph_alertmanager (9093)#012add rule inet filter EDPM_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"#012# 100 ceph_dashboard (8443)#012add rule inet filter EDPM_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"#012# 100 ceph_grafana (3100)#012add rule inet filter EDPM_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"#012# 100 ceph_prometheus (9092)#012add rule inet filter EDPM_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"#012# 100 ceph_rgw (8080)#012add rule inet filter EDPM_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"#012# 110 ceph_mon (6789, 3300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"#012# 112 ceph_mds (6800-7300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"#012# 113 ceph_mgr (6800-7300, 8444)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"#012# 120 ceph_nfs (2049, 12049)#012add rule inet filter EDPM_INPUT tcp dport { 2049,12049 } ct state new counter accept comment "120 ceph_nfs"#012# 123 ceph_dashboard (9090, 9094, 9283)#012add rule inet filter EDPM_INPUT tcp dport { 9090,9094,9283 } ct state new counter accept comment "123 ceph_dashboard"#012 insertbefore=^# Lock down INPUT chains state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False unsafe_writes=False insertafter=None validate=None owner=None group=None seuser=None serole=None selevel=None 
setype=None attributes=None Feb 23 04:42:30 localhost systemd-journald[48305]: Field hash table of /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal has a fill level at 80.5 (268 of 333 items), suggesting rotation. Feb 23 04:42:30 localhost systemd-journald[48305]: /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 23 04:42:30 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 23 04:42:30 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 23 04:42:31 localhost python3[284201]: ansible-ansible.builtin.systemd Invoked with name=nftables state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 23 04:42:31 localhost systemd[1]: Stopping Netfilter Tables... Feb 23 04:42:31 localhost systemd[1]: nftables.service: Deactivated successfully. Feb 23 04:42:31 localhost systemd[1]: Stopped Netfilter Tables. Feb 23 04:42:31 localhost systemd[1]: Starting Netfilter Tables... Feb 23 04:42:31 localhost systemd[1]: Finished Netfilter Tables. Feb 23 04:42:31 localhost openstack_network_exporter[243519]: ERROR 09:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:42:31 localhost openstack_network_exporter[243519]: Feb 23 04:42:31 localhost openstack_network_exporter[243519]: ERROR 09:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:42:31 localhost openstack_network_exporter[243519]: Feb 23 04:42:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:42:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:42:38 localhost podman[284280]: 2026-02-23 09:42:38.353731042 +0000 UTC m=+0.066229720 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1770267347, version=9.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git) Feb 23 04:42:38 localhost podman[284279]: 2026-02-23 09:42:38.418226988 +0000 UTC m=+0.130190400 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', 
'--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 04:42:38 localhost podman[284279]: 2026-02-23 09:42:38.429879333 +0000 UTC m=+0.141842805 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 04:42:38 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:42:38 localhost podman[284280]: 2026-02-23 09:42:38.44259954 +0000 UTC m=+0.155098258 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, config_id=openstack_network_exporter, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, name=ubi9/ubi-minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Feb 23 04:42:38 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 04:42:42 localhost podman[241086]: time="2026-02-23T09:42:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:42:42 localhost podman[241086]: @ - - [23/Feb/2026:09:42:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147347 "" "Go-http-client/1.1" Feb 23 04:42:42 localhost podman[241086]: @ - - [23/Feb/2026:09:42:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16322 "" "Go-http-client/1.1" Feb 23 04:42:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:42:44 localhost podman[284434]: 2026-02-23 09:42:44.986535797 +0000 UTC m=+0.062364171 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:42:45 localhost podman[284434]: 2026-02-23 09:42:45.092503867 +0000 UTC m=+0.168332202 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Feb 23 04:42:45 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:42:46 localhost podman[284540]: Feb 23 04:42:46 localhost podman[284540]: 2026-02-23 09:42:46.218928071 +0000 UTC m=+0.077182644 container create e3aa27afa3c9eba0ab8b9e1e5d4cf276283023e946fb6e29593e4897233b67aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_lamport, RELEASE=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.expose-services=, GIT_BRANCH=main, GIT_CLEAN=True, architecture=x86_64, release=1770267347, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7) Feb 23 04:42:46 localhost systemd[1]: Started libpod-conmon-e3aa27afa3c9eba0ab8b9e1e5d4cf276283023e946fb6e29593e4897233b67aa.scope. Feb 23 04:42:46 localhost systemd[1]: Started libcrun container. 
Feb 23 04:42:46 localhost podman[284540]: 2026-02-23 09:42:46.285650144 +0000 UTC m=+0.143904717 container init e3aa27afa3c9eba0ab8b9e1e5d4cf276283023e946fb6e29593e4897233b67aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_lamport, architecture=x86_64, release=1770267347, RELEASE=main, version=7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_BRANCH=main) Feb 23 04:42:46 localhost podman[284540]: 2026-02-23 09:42:46.192044652 +0000 UTC m=+0.050299255 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:42:46 localhost podman[284540]: 2026-02-23 09:42:46.298984281 +0000 UTC m=+0.157238854 container start e3aa27afa3c9eba0ab8b9e1e5d4cf276283023e946fb6e29593e4897233b67aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_lamport, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, RELEASE=main, architecture=x86_64, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:42:46 localhost podman[284540]: 2026-02-23 09:42:46.299361672 +0000 UTC m=+0.157616255 container attach e3aa27afa3c9eba0ab8b9e1e5d4cf276283023e946fb6e29593e4897233b67aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_lamport, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, release=1770267347, architecture=x86_64, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 23 04:42:46 localhost admiring_lamport[284556]: 167 167 Feb 23 04:42:46 localhost systemd[1]: libpod-e3aa27afa3c9eba0ab8b9e1e5d4cf276283023e946fb6e29593e4897233b67aa.scope: Deactivated successfully. Feb 23 04:42:46 localhost podman[284540]: 2026-02-23 09:42:46.302160628 +0000 UTC m=+0.160415211 container died e3aa27afa3c9eba0ab8b9e1e5d4cf276283023e946fb6e29593e4897233b67aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_lamport, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 23 04:42:46 localhost podman[284561]: 2026-02-23 09:42:46.400847745 +0000 UTC m=+0.089059235 container remove e3aa27afa3c9eba0ab8b9e1e5d4cf276283023e946fb6e29593e4897233b67aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_lamport, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, name=rhceph, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, RELEASE=main, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 23 04:42:46 localhost systemd[1]: libpod-conmon-e3aa27afa3c9eba0ab8b9e1e5d4cf276283023e946fb6e29593e4897233b67aa.scope: Deactivated successfully. Feb 23 04:42:46 localhost systemd[1]: Reloading. Feb 23 04:42:46 localhost systemd-rc-local-generator[284601]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:42:46 localhost systemd-sysv-generator[284606]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:42:46 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:42:46 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:42:46 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:42:46 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:42:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:42:46 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:42:46 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:42:46 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:42:46 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:42:46 localhost systemd[1]: var-lib-containers-storage-overlay-29e305fb7e22046708fe65aa9279ef11fb93ee7073b5f38336c78c009f925f6e-merged.mount: Deactivated successfully. Feb 23 04:42:46 localhost systemd[1]: Reloading. Feb 23 04:42:46 localhost systemd-rc-local-generator[284648]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:42:46 localhost systemd-sysv-generator[284651]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:42:46 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:42:46 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:42:46 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:42:46 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:42:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:42:47 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:42:47 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:42:47 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:42:47 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:42:47 localhost systemd[1]: Starting Ceph mds.mds.np0005626465.drvnoy for f1fea371-cb69-578d-a3d0-b5c472a84b46... 
Feb 23 04:42:47 localhost podman[284707]: Feb 23 04:42:47 localhost podman[284707]: 2026-02-23 09:42:47.579271784 +0000 UTC m=+0.088467038 container create 4d0f1a071d297e1cf3f34d9911150e8b7b42c3cfe2af58cacbea620112214aa4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mds-mds-np0005626465-drvnoy, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, GIT_BRANCH=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True, distribution-scope=public, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=) Feb 23 04:42:47 localhost systemd[1]: tmp-crun.2IX2AI.mount: Deactivated successfully. 
Feb 23 04:42:47 localhost podman[284707]: 2026-02-23 09:42:47.543773352 +0000 UTC m=+0.052968636 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:42:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ec88bbc5bb6b03ae40d603d95efabae42eb1639c4c881b538721241250d2a68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 23 04:42:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ec88bbc5bb6b03ae40d603d95efabae42eb1639c4c881b538721241250d2a68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 23 04:42:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ec88bbc5bb6b03ae40d603d95efabae42eb1639c4c881b538721241250d2a68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 23 04:42:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ec88bbc5bb6b03ae40d603d95efabae42eb1639c4c881b538721241250d2a68/merged/var/lib/ceph/mds/ceph-mds.np0005626465.drvnoy supports timestamps until 2038 (0x7fffffff) Feb 23 04:42:47 localhost podman[284707]: 2026-02-23 09:42:47.655977811 +0000 UTC m=+0.165173075 container init 4d0f1a071d297e1cf3f34d9911150e8b7b42c3cfe2af58cacbea620112214aa4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mds-mds-np0005626465-drvnoy, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, GIT_BRANCH=main, name=rhceph, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:42:47 localhost podman[284707]: 2026-02-23 09:42:47.669490694 +0000 UTC m=+0.178685948 container start 4d0f1a071d297e1cf3f34d9911150e8b7b42c3cfe2af58cacbea620112214aa4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mds-mds-np0005626465-drvnoy, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 04:42:47 localhost bash[284707]: 4d0f1a071d297e1cf3f34d9911150e8b7b42c3cfe2af58cacbea620112214aa4 Feb 23 04:42:47 localhost systemd[1]: Started Ceph mds.mds.np0005626465.drvnoy for f1fea371-cb69-578d-a3d0-b5c472a84b46. Feb 23 04:42:47 localhost ceph-mds[284726]: set uid:gid to 167:167 (ceph:ceph) Feb 23 04:42:47 localhost ceph-mds[284726]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-mds, pid 2 Feb 23 04:42:47 localhost ceph-mds[284726]: main not setting numa affinity Feb 23 04:42:47 localhost ceph-mds[284726]: pidfile_write: ignore empty --pid-file Feb 23 04:42:47 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mds-mds-np0005626465-drvnoy[284722]: starting mds.mds.np0005626465.drvnoy at Feb 23 04:42:47 localhost ceph-mds[284726]: mds.mds.np0005626465.drvnoy Updating MDS map to version 8 from mon.2 Feb 23 04:42:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:42:48.298 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:42:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:42:48.298 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:42:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:42:48.299 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:42:48 
localhost ceph-mds[284726]: mds.mds.np0005626465.drvnoy Updating MDS map to version 9 from mon.2 Feb 23 04:42:48 localhost ceph-mds[284726]: mds.mds.np0005626465.drvnoy Monitors have assigned me to become a standby. Feb 23 04:42:51 localhost podman[284872]: 2026-02-23 09:42:51.502797352 +0000 UTC m=+0.085929579 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.openshift.expose-services=, distribution-scope=public, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.42.2, architecture=x86_64, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc.) 
Feb 23 04:42:51 localhost podman[284872]: 2026-02-23 09:42:51.688275036 +0000 UTC m=+0.271407263 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.openshift.expose-services=, version=7) Feb 23 04:42:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:42:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:42:55 localhost systemd[1]: session-60.scope: Deactivated successfully. Feb 23 04:42:55 localhost systemd-logind[759]: Session 60 logged out. Waiting for processes to exit. Feb 23 04:42:55 localhost systemd-logind[759]: Removed session 60. 
Feb 23 04:42:56 localhost systemd[1]: tmp-crun.Tc1aZt.mount: Deactivated successfully. Feb 23 04:42:56 localhost podman[284991]: 2026-02-23 09:42:56.046392 +0000 UTC m=+0.109874090 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Feb 23 04:42:56 localhost podman[284992]: 2026-02-23 09:42:56.088223535 +0000 UTC m=+0.149524888 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
managed_by=edpm_ansible, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 
04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 
2026-02-23 09:42:56.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.098 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:42:56.098 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:42:56 localhost podman[284992]: 2026-02-23 09:42:56.127809961 +0000 UTC m=+0.189111314 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck 
compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_managed=true) Feb 23 04:42:56 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. 
Feb 23 04:42:56 localhost podman[284991]: 2026-02-23 09:42:56.178404683 +0000 UTC m=+0.241886753 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260216, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 23 04:42:56 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:42:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:43:00 localhost podman[285028]: 2026-02-23 09:43:00.008139584 +0000 UTC m=+0.083308980 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:43:00 localhost podman[285028]: 2026-02-23 09:43:00.022947636 +0000 UTC m=+0.098117012 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:43:00 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:43:01 localhost openstack_network_exporter[243519]: ERROR 09:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:43:01 localhost openstack_network_exporter[243519]: Feb 23 04:43:01 localhost openstack_network_exporter[243519]: ERROR 09:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:43:01 localhost openstack_network_exporter[243519]: Feb 23 04:43:07 localhost nova_compute[280321]: 2026-02-23 09:43:07.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:43:07 localhost nova_compute[280321]: 2026-02-23 09:43:07.893 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:43:08 localhost nova_compute[280321]: 2026-02-23 09:43:08.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:43:08 localhost nova_compute[280321]: 2026-02-23 09:43:08.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:43:08 localhost nova_compute[280321]: 2026-02-23 09:43:08.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:43:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:43:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:43:08 localhost nova_compute[280321]: 2026-02-23 09:43:08.917 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:43:08 localhost nova_compute[280321]: 2026-02-23 09:43:08.917 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:43:08 localhost nova_compute[280321]: 2026-02-23 09:43:08.918 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:43:08 localhost nova_compute[280321]: 2026-02-23 09:43:08.918 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:43:08 localhost nova_compute[280321]: 2026-02-23 09:43:08.936 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:43:08 localhost nova_compute[280321]: 2026-02-23 09:43:08.937 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:43:08 localhost nova_compute[280321]: 2026-02-23 09:43:08.937 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] 
Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:43:08 localhost nova_compute[280321]: 2026-02-23 09:43:08.937 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:43:08 localhost nova_compute[280321]: 2026-02-23 09:43:08.938 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:43:09 localhost systemd[1]: tmp-crun.KSosFy.mount: Deactivated successfully. 
Feb 23 04:43:09 localhost podman[285054]: 2026-02-23 09:43:09.063675252 +0000 UTC m=+0.133548452 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, name=ubi9/ubi-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release=1770267347, architecture=x86_64, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 23 04:43:09 localhost podman[285053]: 2026-02-23 09:43:09.031527283 +0000 UTC m=+0.105406504 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:43:09 localhost podman[285054]: 2026-02-23 09:43:09.106908329 +0000 UTC m=+0.176781529 container exec_died 
d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, version=9.7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, io.openshift.expose-services=, name=ubi9/ubi-minimal, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64) Feb 23 04:43:09 localhost podman[285053]: 2026-02-23 09:43:09.115996027 +0000 UTC m=+0.189875248 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:43:09 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:43:09 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:43:09 localhost nova_compute[280321]: 2026-02-23 09:43:09.427 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:43:09 localhost nova_compute[280321]: 2026-02-23 09:43:09.633 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:43:09 localhost nova_compute[280321]: 2026-02-23 09:43:09.635 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=12866MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:43:09 localhost nova_compute[280321]: 2026-02-23 09:43:09.635 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:43:09 localhost nova_compute[280321]: 2026-02-23 09:43:09.636 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:43:09 localhost nova_compute[280321]: 2026-02-23 09:43:09.715 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:43:09 localhost nova_compute[280321]: 2026-02-23 09:43:09.716 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:43:09 localhost nova_compute[280321]: 2026-02-23 09:43:09.739 280325 DEBUG 
oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:43:10 localhost nova_compute[280321]: 2026-02-23 09:43:10.169 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:43:10 localhost nova_compute[280321]: 2026-02-23 09:43:10.178 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:43:10 localhost nova_compute[280321]: 2026-02-23 09:43:10.198 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:43:10 localhost nova_compute[280321]: 2026-02-23 09:43:10.200 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:43:10 localhost nova_compute[280321]: 2026-02-23 09:43:10.201 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:43:12 localhost nova_compute[280321]: 2026-02-23 09:43:12.175 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:43:12 localhost nova_compute[280321]: 2026-02-23 09:43:12.176 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:43:12 localhost nova_compute[280321]: 2026-02-23 09:43:12.198 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:43:12 localhost nova_compute[280321]: 2026-02-23 09:43:12.199 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:43:12 localhost nova_compute[280321]: 2026-02-23 09:43:12.199 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:43:12 localhost podman[241086]: time="2026-02-23T09:43:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:43:12 localhost podman[241086]: @ - - [23/Feb/2026:09:43:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149553 "" "Go-http-client/1.1" Feb 23 04:43:12 localhost podman[241086]: @ - - [23/Feb/2026:09:43:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16805 "" "Go-http-client/1.1" Feb 23 04:43:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:43:16 localhost systemd[1]: tmp-crun.iAmf2z.mount: Deactivated successfully. Feb 23 04:43:16 localhost podman[285139]: 2026-02-23 09:43:16.015235129 +0000 UTC m=+0.090020319 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:43:16 localhost podman[285139]: 2026-02-23 09:43:16.131864937 +0000 UTC m=+0.206650087 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:43:16 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:43:19 localhost sshd[285233]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:43:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:43:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:43:27 localhost podman[285253]: 2026-02-23 09:43:27.040807594 +0000 UTC m=+0.104117867 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260216) Feb 23 04:43:27 localhost podman[285254]: 2026-02-23 09:43:27.076757281 +0000 UTC m=+0.138282359 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 23 04:43:27 localhost podman[285254]: 2026-02-23 09:43:27.091733088 +0000 UTC m=+0.153258186 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:43:27 localhost podman[285253]: 2026-02-23 09:43:27.100553998 +0000 UTC m=+0.163864291 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0) Feb 23 04:43:27 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:43:27 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:43:30 localhost systemd[1]: session-61.scope: Deactivated successfully. Feb 23 04:43:30 localhost systemd[1]: session-61.scope: Consumed 1.279s CPU time. Feb 23 04:43:30 localhost systemd-logind[759]: Session 61 logged out. Waiting for processes to exit. Feb 23 04:43:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:43:30 localhost systemd-logind[759]: Removed session 61. 
Feb 23 04:43:30 localhost podman[285290]: 2026-02-23 09:43:30.574566519 +0000 UTC m=+0.072194554 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:43:30 localhost podman[285290]: 2026-02-23 09:43:30.588793453 +0000 UTC m=+0.086421518 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:43:30 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:43:31 localhost openstack_network_exporter[243519]: ERROR 09:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:43:31 localhost openstack_network_exporter[243519]: Feb 23 04:43:31 localhost openstack_network_exporter[243519]: ERROR 09:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:43:31 localhost openstack_network_exporter[243519]: Feb 23 04:43:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:43:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:43:40 localhost systemd[1]: tmp-crun.COkbDY.mount: Deactivated successfully. 
Feb 23 04:43:40 localhost podman[285313]: 2026-02-23 09:43:40.021641897 +0000 UTC m=+0.089611265 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:43:40 localhost podman[285313]: 2026-02-23 09:43:40.055363145 +0000 UTC m=+0.123332463 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:43:40 localhost systemd[1]: tmp-crun.Kq0cK0.mount: Deactivated successfully. Feb 23 04:43:40 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:43:40 localhost podman[285314]: 2026-02-23 09:43:40.071826208 +0000 UTC m=+0.138278520 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, release=1770267347, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, architecture=x86_64, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 23 04:43:40 localhost podman[285314]: 2026-02-23 09:43:40.085882117 +0000 UTC m=+0.152334479 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.7, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, name=ubi9/ubi-minimal) Feb 23 04:43:40 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:43:40 localhost systemd[1]: Stopping User Manager for UID 1003... Feb 23 04:43:40 localhost systemd[283904]: Activating special unit Exit the Session... Feb 23 04:43:40 localhost systemd[283904]: Stopped target Main User Target. Feb 23 04:43:40 localhost systemd[283904]: Stopped target Basic System. Feb 23 04:43:40 localhost systemd[283904]: Stopped target Paths. Feb 23 04:43:40 localhost systemd[283904]: Stopped target Sockets. Feb 23 04:43:40 localhost systemd[283904]: Stopped target Timers. Feb 23 04:43:40 localhost systemd[283904]: Stopped Mark boot as successful after the user session has run 2 minutes. Feb 23 04:43:40 localhost systemd[283904]: Stopped Daily Cleanup of User's Temporary Directories. Feb 23 04:43:40 localhost systemd[283904]: Closed D-Bus User Message Bus Socket. Feb 23 04:43:40 localhost systemd[283904]: Stopped Create User's Volatile Files and Directories. Feb 23 04:43:40 localhost systemd[283904]: Removed slice User Application Slice. Feb 23 04:43:40 localhost systemd[283904]: Reached target Shutdown. Feb 23 04:43:40 localhost systemd[283904]: Finished Exit the Session. Feb 23 04:43:40 localhost systemd[283904]: Reached target Exit the Session. Feb 23 04:43:40 localhost systemd[1]: user@1003.service: Deactivated successfully. Feb 23 04:43:40 localhost systemd[1]: Stopped User Manager for UID 1003. Feb 23 04:43:40 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... 
Feb 23 04:43:40 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Feb 23 04:43:40 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Feb 23 04:43:40 localhost systemd[1]: Removed slice User Slice of UID 1003. Feb 23 04:43:40 localhost systemd[1]: user-1003.slice: Consumed 1.656s CPU time. Feb 23 04:43:41 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Feb 23 04:43:42 localhost podman[241086]: time="2026-02-23T09:43:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:43:42 localhost podman[241086]: @ - - [23/Feb/2026:09:43:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149553 "" "Go-http-client/1.1" Feb 23 04:43:42 localhost podman[241086]: @ - - [23/Feb/2026:09:43:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16809 "" "Go-http-client/1.1" Feb 23 04:43:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:43:47 localhost podman[285411]: 2026-02-23 09:43:47.017808947 +0000 UTC m=+0.092933007 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0) Feb 23 04:43:47 localhost podman[285411]: 2026-02-23 09:43:47.089058641 +0000 UTC m=+0.164182731 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:43:47 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:43:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:43:48.299 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:43:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:43:48.300 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:43:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:43:48.300 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:43:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:43:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 04:43:58 localhost podman[285436]: 2026-02-23 09:43:58.011416978 +0000 UTC m=+0.083362245 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent) Feb 23 04:43:58 localhost 
podman[285437]: 2026-02-23 09:43:58.074732069 +0000 UTC m=+0.141001233 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216) 
Feb 23 04:43:58 localhost podman[285437]: 2026-02-23 09:43:58.084517649 +0000 UTC m=+0.150786833 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 
23 04:43:58 localhost podman[285436]: 2026-02-23 09:43:58.09603249 +0000 UTC m=+0.167977797 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:43:58 localhost systemd[1]: 
dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:43:58 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:44:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:44:01 localhost podman[285472]: 2026-02-23 09:44:01.008605351 +0000 UTC m=+0.086407467 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:44:01 localhost podman[285472]: 2026-02-23 09:44:01.019886205 +0000 UTC m=+0.097688341 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:44:01 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:44:01 localhost openstack_network_exporter[243519]: ERROR 09:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:44:01 localhost openstack_network_exporter[243519]: Feb 23 04:44:01 localhost openstack_network_exporter[243519]: ERROR 09:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:44:01 localhost openstack_network_exporter[243519]: Feb 23 04:44:06 localhost nova_compute[280321]: 2026-02-23 09:44:06.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:44:06 localhost nova_compute[280321]: 2026-02-23 09:44:06.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 23 04:44:06 localhost nova_compute[280321]: 2026-02-23 09:44:06.904 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] There are 0 instances to clean _run_pending_deletes 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 23 04:44:06 localhost nova_compute[280321]: 2026-02-23 09:44:06.905 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:44:06 localhost nova_compute[280321]: 2026-02-23 09:44:06.905 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 23 04:44:06 localhost nova_compute[280321]: 2026-02-23 09:44:06.915 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:44:08 localhost nova_compute[280321]: 2026-02-23 09:44:08.924 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:44:08 localhost nova_compute[280321]: 2026-02-23 09:44:08.924 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:44:09 localhost nova_compute[280321]: 2026-02-23 09:44:09.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:44:09 localhost nova_compute[280321]: 2026-02-23 09:44:09.909 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:44:09 localhost nova_compute[280321]: 2026-02-23 09:44:09.911 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:44:09 localhost nova_compute[280321]: 2026-02-23 09:44:09.911 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:44:09 localhost nova_compute[280321]: 2026-02-23 09:44:09.912 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:44:09 localhost nova_compute[280321]: 2026-02-23 09:44:09.912 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:44:10 localhost sshd[285569]: main: sshd: ssh-rsa 
algorithm is disabled Feb 23 04:44:10 localhost nova_compute[280321]: 2026-02-23 09:44:10.355 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:44:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:44:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:44:10 localhost nova_compute[280321]: 2026-02-23 09:44:10.529 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:44:10 localhost nova_compute[280321]: 2026-02-23 09:44:10.531 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=12869MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": 
"0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:44:10 localhost nova_compute[280321]: 2026-02-23 09:44:10.531 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:44:10 localhost nova_compute[280321]: 2026-02-23 09:44:10.531 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:44:10 localhost podman[285574]: 2026-02-23 09:44:10.579754073 +0000 UTC m=+0.070152262 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Feb 23 04:44:10 localhost podman[285574]: 2026-02-23 09:44:10.59572527 +0000 UTC m=+0.086123479 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, architecture=x86_64, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9) Feb 23 04:44:10 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 04:44:10 localhost podman[285573]: 2026-02-23 09:44:10.637809724 +0000 UTC m=+0.128146661 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:44:10 localhost podman[285573]: 2026-02-23 09:44:10.675764721 +0000 UTC m=+0.166101678 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:44:10 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:44:10 localhost nova_compute[280321]: 2026-02-23 09:44:10.715 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:44:10 localhost nova_compute[280321]: 2026-02-23 09:44:10.716 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:44:10 localhost nova_compute[280321]: 2026-02-23 09:44:10.810 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Refreshing inventories for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 23 04:44:10 localhost nova_compute[280321]: 2026-02-23 09:44:10.887 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Updating ProviderTree inventory for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 23 04:44:10 localhost nova_compute[280321]: 2026-02-23 09:44:10.888 280325 DEBUG nova.compute.provider_tree [None 
req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Updating inventory in ProviderTree for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 23 04:44:10 localhost nova_compute[280321]: 2026-02-23 09:44:10.903 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Refreshing aggregate associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 23 04:44:10 localhost nova_compute[280321]: 2026-02-23 09:44:10.932 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Refreshing trait associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, traits: 
HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SHA,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NODE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_ABM,HW_CPU_X86_AESNI,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 23 04:44:10 localhost nova_compute[280321]: 2026-02-23 09:44:10.949 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:44:11 localhost nova_compute[280321]: 2026-02-23 09:44:11.398 280325 DEBUG oslo_concurrency.processutils [None 
req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:44:11 localhost nova_compute[280321]: 2026-02-23 09:44:11.403 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:44:11 localhost nova_compute[280321]: 2026-02-23 09:44:11.421 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:44:11 localhost nova_compute[280321]: 2026-02-23 09:44:11.422 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:44:11 localhost nova_compute[280321]: 2026-02-23 09:44:11.422 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.891s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:44:12 localhost nova_compute[280321]: 2026-02-23 09:44:12.423 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:44:12 localhost nova_compute[280321]: 2026-02-23 09:44:12.423 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:44:12 localhost nova_compute[280321]: 2026-02-23 09:44:12.424 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:44:12 localhost nova_compute[280321]: 2026-02-23 09:44:12.424 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:44:12 localhost nova_compute[280321]: 2026-02-23 09:44:12.445 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:44:12 localhost nova_compute[280321]: 2026-02-23 09:44:12.446 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:44:12 localhost nova_compute[280321]: 2026-02-23 09:44:12.446 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:44:12 localhost nova_compute[280321]: 2026-02-23 09:44:12.447 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:44:12 localhost nova_compute[280321]: 2026-02-23 09:44:12.447 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:44:12 localhost podman[241086]: time="2026-02-23T09:44:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:44:12 localhost podman[241086]: @ - - [23/Feb/2026:09:44:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149553 "" "Go-http-client/1.1" Feb 23 04:44:12 localhost podman[241086]: @ - - [23/Feb/2026:09:44:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16805 "" "Go-http-client/1.1" Feb 23 04:44:12 localhost nova_compute[280321]: 2026-02-23 09:44:12.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:44:13 localhost podman[285715]: Feb 23 04:44:13 localhost podman[285715]: 2026-02-23 09:44:13.711840871 +0000 UTC m=+0.062035884 container create 7b01389105e08075508a45c968d378387552aea11b9b22c76a7923d6a3783474 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_saha, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, release=1770267347, maintainer=Guillaume Abrioux , version=7, distribution-scope=public, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:44:13 localhost systemd[1]: Started libpod-conmon-7b01389105e08075508a45c968d378387552aea11b9b22c76a7923d6a3783474.scope. Feb 23 04:44:13 localhost systemd[1]: Started libcrun container. Feb 23 04:44:13 localhost podman[285715]: 2026-02-23 09:44:13.680743162 +0000 UTC m=+0.030938195 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:44:13 localhost podman[285715]: 2026-02-23 09:44:13.791252804 +0000 UTC m=+0.141447817 container init 7b01389105e08075508a45c968d378387552aea11b9b22c76a7923d6a3783474 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_saha, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, com.redhat.component=rhceph-container, RELEASE=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, architecture=x86_64, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, version=7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2) Feb 23 04:44:13 localhost podman[285715]: 2026-02-23 09:44:13.805294363 +0000 UTC m=+0.155489376 container start 7b01389105e08075508a45c968d378387552aea11b9b22c76a7923d6a3783474 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_saha, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vendor=Red Hat, Inc., GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, release=1770267347, vcs-type=git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.expose-services=, name=rhceph) Feb 23 04:44:13 localhost podman[285715]: 2026-02-23 09:44:13.805656984 +0000 UTC m=+0.155852047 container attach 7b01389105e08075508a45c968d378387552aea11b9b22c76a7923d6a3783474 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_saha, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base 
image., maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_CLEAN=True, release=1770267347, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, CEPH_POINT_RELEASE=, name=rhceph, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=) Feb 23 04:44:13 localhost systemd[1]: libpod-7b01389105e08075508a45c968d378387552aea11b9b22c76a7923d6a3783474.scope: Deactivated successfully. 
Feb 23 04:44:13 localhost vigilant_saha[285731]: 167 167 Feb 23 04:44:13 localhost podman[285715]: 2026-02-23 09:44:13.807986685 +0000 UTC m=+0.158181698 container died 7b01389105e08075508a45c968d378387552aea11b9b22c76a7923d6a3783474 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_saha, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_CLEAN=True, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 23 04:44:13 localhost podman[285736]: 2026-02-23 09:44:13.904714746 +0000 UTC m=+0.085811420 container remove 7b01389105e08075508a45c968d378387552aea11b9b22c76a7923d6a3783474 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_saha, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, distribution-scope=public, 
url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.buildah.version=1.42.2, io.openshift.expose-services=, release=1770267347, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main) Feb 23 04:44:13 localhost systemd[1]: libpod-conmon-7b01389105e08075508a45c968d378387552aea11b9b22c76a7923d6a3783474.scope: Deactivated successfully. Feb 23 04:44:13 localhost systemd[1]: Reloading. Feb 23 04:44:14 localhost systemd-sysv-generator[285780]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:44:14 localhost systemd-rc-local-generator[285776]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:44:14 localhost systemd[1]: var-lib-containers-storage-overlay-be7277c94c3a5443bfcf955b0cacee0873740201abac425f0913cd00f8274983-merged.mount: Deactivated successfully. Feb 23 04:44:14 localhost systemd[1]: Reloading. Feb 23 04:44:14 localhost systemd-rc-local-generator[285818]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:44:14 localhost systemd-sysv-generator[285823]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:44:14 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:44:14 localhost systemd[1]: Starting Ceph mgr.np0005626465.hlpkwo for f1fea371-cb69-578d-a3d0-b5c472a84b46... 
Feb 23 04:44:15 localhost podman[285886]: Feb 23 04:44:15 localhost podman[285886]: 2026-02-23 09:44:15.097179608 +0000 UTC m=+0.070571754 container create 0c24b55400248b2a22a0d144392e83cc40f97a50fab9c591604f847f622e963a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, architecture=x86_64, release=1770267347, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, io.openshift.tags=rhceph ceph, version=7, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:44:15 localhost systemd[1]: tmp-crun.9HAGBi.mount: Deactivated successfully. 
Feb 23 04:44:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587bd78b09ef8822057b8734e99eb7860673f222dace1722197bfe8d230df0f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 23 04:44:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587bd78b09ef8822057b8734e99eb7860673f222dace1722197bfe8d230df0f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 23 04:44:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587bd78b09ef8822057b8734e99eb7860673f222dace1722197bfe8d230df0f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 23 04:44:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587bd78b09ef8822057b8734e99eb7860673f222dace1722197bfe8d230df0f4/merged/var/lib/ceph/mgr/ceph-np0005626465.hlpkwo supports timestamps until 2038 (0x7fffffff) Feb 23 04:44:15 localhost podman[285886]: 2026-02-23 09:44:15.153161266 +0000 UTC m=+0.126553432 container init 0c24b55400248b2a22a0d144392e83cc40f97a50fab9c591604f847f622e963a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, vendor=Red Hat, Inc., name=rhceph, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.tags=rhceph ceph, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_BRANCH=main, distribution-scope=public, maintainer=Guillaume Abrioux , RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 23 04:44:15 localhost podman[285886]: 2026-02-23 09:44:15.164600735 +0000 UTC m=+0.137992911 container start 0c24b55400248b2a22a0d144392e83cc40f97a50fab9c591604f847f622e963a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1770267347, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public, io.openshift.tags=rhceph ceph, name=rhceph) Feb 23 04:44:15 localhost podman[285886]: 2026-02-23 09:44:15.069790242 +0000 UTC m=+0.043182458 image 
pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:44:15 localhost bash[285886]: 0c24b55400248b2a22a0d144392e83cc40f97a50fab9c591604f847f622e963a Feb 23 04:44:15 localhost systemd[1]: Started Ceph mgr.np0005626465.hlpkwo for f1fea371-cb69-578d-a3d0-b5c472a84b46. Feb 23 04:44:15 localhost ceph-mgr[285904]: set uid:gid to 167:167 (ceph:ceph) Feb 23 04:44:15 localhost ceph-mgr[285904]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-mgr, pid 2 Feb 23 04:44:15 localhost ceph-mgr[285904]: pidfile_write: ignore empty --pid-file Feb 23 04:44:15 localhost ceph-mgr[285904]: mgr[py] Loading python module 'alerts' Feb 23 04:44:15 localhost ceph-mgr[285904]: mgr[py] Module alerts has missing NOTIFY_TYPES member Feb 23 04:44:15 localhost ceph-mgr[285904]: mgr[py] Loading python module 'balancer' Feb 23 04:44:15 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:15.352+0000 7f33f860e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Feb 23 04:44:15 localhost ceph-mgr[285904]: mgr[py] Module balancer has missing NOTIFY_TYPES member Feb 23 04:44:15 localhost ceph-mgr[285904]: mgr[py] Loading python module 'cephadm' Feb 23 04:44:15 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:15.419+0000 7f33f860e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Feb 23 04:44:15 localhost systemd[1]: tmp-crun.oT1JUh.mount: Deactivated successfully. 
Feb 23 04:44:15 localhost ceph-mgr[285904]: mgr[py] Loading python module 'crash' Feb 23 04:44:16 localhost ceph-mgr[285904]: mgr[py] Module crash has missing NOTIFY_TYPES member Feb 23 04:44:16 localhost ceph-mgr[285904]: mgr[py] Loading python module 'dashboard' Feb 23 04:44:16 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:16.044+0000 7f33f860e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Feb 23 04:44:16 localhost ceph-mgr[285904]: mgr[py] Loading python module 'devicehealth' Feb 23 04:44:16 localhost ceph-mgr[285904]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Feb 23 04:44:16 localhost ceph-mgr[285904]: mgr[py] Loading python module 'diskprediction_local' Feb 23 04:44:16 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:16.573+0000 7f33f860e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Feb 23 04:44:16 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Feb 23 04:44:16 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
Feb 23 04:44:16 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: from numpy import show_config as show_numpy_config Feb 23 04:44:16 localhost ceph-mgr[285904]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Feb 23 04:44:16 localhost ceph-mgr[285904]: mgr[py] Loading python module 'influx' Feb 23 04:44:16 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:16.705+0000 7f33f860e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Feb 23 04:44:16 localhost ceph-mgr[285904]: mgr[py] Module influx has missing NOTIFY_TYPES member Feb 23 04:44:16 localhost ceph-mgr[285904]: mgr[py] Loading python module 'insights' Feb 23 04:44:16 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:16.763+0000 7f33f860e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Feb 23 04:44:16 localhost ceph-mgr[285904]: mgr[py] Loading python module 'iostat' Feb 23 04:44:16 localhost ceph-mgr[285904]: mgr[py] Module iostat has missing NOTIFY_TYPES member Feb 23 04:44:16 localhost ceph-mgr[285904]: mgr[py] Loading python module 'k8sevents' Feb 23 04:44:16 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:16.874+0000 7f33f860e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Feb 23 04:44:17 localhost ceph-mgr[285904]: mgr[py] Loading python module 'localpool' Feb 23 04:44:17 localhost ceph-mgr[285904]: mgr[py] Loading python module 'mds_autoscaler' Feb 23 04:44:17 localhost ceph-mgr[285904]: mgr[py] Loading python module 'mirroring' Feb 23 04:44:17 localhost ceph-mgr[285904]: mgr[py] Loading python module 'nfs' Feb 23 04:44:17 localhost ceph-mgr[285904]: mgr[py] Module nfs has missing NOTIFY_TYPES member Feb 23 04:44:17 localhost ceph-mgr[285904]: mgr[py] Loading python module 'orchestrator' Feb 23 04:44:17 localhost 
ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:17.580+0000 7f33f860e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Feb 23 04:44:17 localhost ceph-mgr[285904]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Feb 23 04:44:17 localhost ceph-mgr[285904]: mgr[py] Loading python module 'osd_perf_query' Feb 23 04:44:17 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:17.722+0000 7f33f860e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Feb 23 04:44:17 localhost ceph-mgr[285904]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Feb 23 04:44:17 localhost ceph-mgr[285904]: mgr[py] Loading python module 'osd_support' Feb 23 04:44:17 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:17.785+0000 7f33f860e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Feb 23 04:44:17 localhost ceph-mgr[285904]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Feb 23 04:44:17 localhost ceph-mgr[285904]: mgr[py] Loading python module 'pg_autoscaler' Feb 23 04:44:17 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:17.839+0000 7f33f860e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Feb 23 04:44:17 localhost ceph-mgr[285904]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Feb 23 04:44:17 localhost ceph-mgr[285904]: mgr[py] Loading python module 'progress' Feb 23 04:44:17 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:17.909+0000 7f33f860e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Feb 23 04:44:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:44:17 localhost ceph-mgr[285904]: mgr[py] Module progress has missing NOTIFY_TYPES member Feb 23 04:44:17 localhost ceph-mgr[285904]: mgr[py] Loading python module 'prometheus' Feb 23 04:44:17 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:17.976+0000 7f33f860e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Feb 23 04:44:18 localhost podman[285950]: 2026-02-23 09:44:18.018416193 +0000 UTC m=+0.088251333 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 23 04:44:18 localhost podman[285950]: 
2026-02-23 09:44:18.122776347 +0000 UTC m=+0.192611427 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20260216, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible) Feb 23 04:44:18 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:44:18 localhost ceph-mgr[285904]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Feb 23 04:44:18 localhost ceph-mgr[285904]: mgr[py] Loading python module 'rbd_support' Feb 23 04:44:18 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:18.280+0000 7f33f860e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Feb 23 04:44:18 localhost ceph-mgr[285904]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Feb 23 04:44:18 localhost ceph-mgr[285904]: mgr[py] Loading python module 'restful' Feb 23 04:44:18 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:18.362+0000 7f33f860e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Feb 23 04:44:18 localhost ceph-mgr[285904]: mgr[py] Loading python module 'rgw' Feb 23 04:44:18 localhost ceph-mgr[285904]: mgr[py] Module rgw has missing NOTIFY_TYPES member Feb 23 04:44:18 localhost ceph-mgr[285904]: mgr[py] Loading python module 'rook' Feb 23 04:44:18 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:18.700+0000 7f33f860e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost podman[286082]: 2026-02-23 09:44:19.002626461 +0000 UTC m=+0.098872377 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, distribution-scope=public, vcs-type=git, version=7, architecture=x86_64, build-date=2026-02-09T10:25:24Z, release=1770267347, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main) Feb 23 04:44:19 localhost podman[286082]: 2026-02-23 09:44:19.10977153 +0000 UTC m=+0.206017426 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, release=1770267347, ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., 
RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True) Feb 23 04:44:19 localhost ceph-mgr[285904]: mgr[py] Module rook has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost ceph-mgr[285904]: mgr[py] Loading python module 'selftest' Feb 23 04:44:19 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:19.118+0000 7f33f860e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost ceph-mgr[285904]: mgr[py] Module selftest has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost ceph-mgr[285904]: mgr[py] Loading python module 'snap_schedule' Feb 23 04:44:19 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:19.181+0000 7f33f860e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost ceph-mgr[285904]: mgr[py] Loading python module 'stats' Feb 23 04:44:19 localhost ceph-mgr[285904]: mgr[py] Loading python module 'status' Feb 23 04:44:19 localhost ceph-mgr[285904]: mgr[py] Module status has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost ceph-mgr[285904]: mgr[py] Loading python module 'telegraf' Feb 23 04:44:19 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:19.379+0000 7f33f860e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost ceph-mgr[285904]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost ceph-mgr[285904]: mgr[py] Loading python module 'telemetry' Feb 23 04:44:19 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:19.443+0000 7f33f860e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost ceph-mgr[285904]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost ceph-mgr[285904]: 
mgr[py] Loading python module 'test_orchestrator' Feb 23 04:44:19 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:19.575+0000 7f33f860e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost ceph-mgr[285904]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost ceph-mgr[285904]: mgr[py] Loading python module 'volumes' Feb 23 04:44:19 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:19.721+0000 7f33f860e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost ceph-mgr[285904]: mgr[py] Module volumes has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost ceph-mgr[285904]: mgr[py] Loading python module 'zabbix' Feb 23 04:44:19 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:19.920+0000 7f33f860e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost ceph-mgr[285904]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:44:19.981+0000 7f33f860e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Feb 23 04:44:19 localhost ceph-mgr[285904]: ms_deliver_dispatch: unhandled message 0x55ef9c167600 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Feb 23 04:44:19 localhost ceph-mgr[285904]: client.0 ms_handle_reset on v2:172.18.0.103:6800/920472675 Feb 23 04:44:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:44:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 04:44:28 localhost podman[286771]: 2026-02-23 09:44:28.242603719 +0000 UTC m=+0.093064800 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 04:44:28 localhost 
podman[286771]: 2026-02-23 09:44:28.252773569 +0000 UTC m=+0.103234650 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS) Feb 23 04:44:28 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:44:28 localhost podman[286772]: 2026-02-23 09:44:28.346725585 +0000 UTC m=+0.196570488 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, 
org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:44:28 localhost podman[286772]: 2026-02-23 09:44:28.360746263 +0000 UTC m=+0.210591196 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 23 04:44:28 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:44:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:44:31 localhost podman[287005]: 2026-02-23 09:44:31.701865099 +0000 UTC m=+0.080956041 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 04:44:31 localhost podman[287005]: 2026-02-23 09:44:31.740864099 +0000 UTC m=+0.119955071 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 
'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 23 04:44:31 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:44:31 localhost ceph-mgr[285904]: ms_deliver_dispatch: unhandled message 0x55ef9c167600 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Feb 23 04:44:31 localhost openstack_network_exporter[243519]: ERROR 09:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:44:31 localhost openstack_network_exporter[243519]: Feb 23 04:44:31 localhost openstack_network_exporter[243519]: ERROR 09:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:44:31 localhost openstack_network_exporter[243519]: Feb 23 04:44:32 localhost podman[287087]: Feb 23 04:44:32 localhost podman[287087]: 2026-02-23 09:44:32.309212349 +0000 UTC m=+0.078540487 container create ffc45774d966bb4c2ce6faeae83265132ab67703f3849bdcb1042870add632c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_khayyam, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, 
io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, name=rhceph, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, RELEASE=main, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 23 04:44:32 localhost systemd[1]: Started libpod-conmon-ffc45774d966bb4c2ce6faeae83265132ab67703f3849bdcb1042870add632c8.scope. Feb 23 04:44:32 localhost systemd[1]: Started libcrun container. Feb 23 04:44:32 localhost podman[287087]: 2026-02-23 09:44:32.279063749 +0000 UTC m=+0.048391917 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:44:32 localhost podman[287087]: 2026-02-23 09:44:32.379542885 +0000 UTC m=+0.148871013 container init ffc45774d966bb4c2ce6faeae83265132ab67703f3849bdcb1042870add632c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_khayyam, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.openshift.expose-services=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347) Feb 23 04:44:32 localhost podman[287087]: 2026-02-23 09:44:32.387225229 +0000 UTC m=+0.156553427 container start ffc45774d966bb4c2ce6faeae83265132ab67703f3849bdcb1042870add632c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_khayyam, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, RELEASE=main, version=7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, distribution-scope=public, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, release=1770267347, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7) Feb 23 04:44:32 localhost podman[287087]: 2026-02-23 
09:44:32.387504978 +0000 UTC m=+0.156833106 container attach ffc45774d966bb4c2ce6faeae83265132ab67703f3849bdcb1042870add632c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_khayyam, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_CLEAN=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, name=rhceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container) Feb 23 04:44:32 localhost elastic_khayyam[287102]: 167 167 Feb 23 04:44:32 localhost systemd[1]: libpod-ffc45774d966bb4c2ce6faeae83265132ab67703f3849bdcb1042870add632c8.scope: Deactivated successfully. 
Feb 23 04:44:32 localhost podman[287087]: 2026-02-23 09:44:32.392109537 +0000 UTC m=+0.161437655 container died ffc45774d966bb4c2ce6faeae83265132ab67703f3849bdcb1042870add632c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_khayyam, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container)
Feb 23 04:44:32 localhost podman[287107]: 2026-02-23 09:44:32.489219461 +0000 UTC m=+0.084344445 container remove ffc45774d966bb4c2ce6faeae83265132ab67703f3849bdcb1042870add632c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_khayyam, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , RELEASE=main, distribution-scope=public, architecture=x86_64, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, version=7, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7)
Feb 23 04:44:32 localhost systemd[1]: libpod-conmon-ffc45774d966bb4c2ce6faeae83265132ab67703f3849bdcb1042870add632c8.scope: Deactivated successfully.
Feb 23 04:44:32 localhost podman[287124]:
Feb 23 04:44:32 localhost podman[287124]: 2026-02-23 09:44:32.597938188 +0000 UTC m=+0.076224867 container create 5b83d4691567998d30bbb51e452ffc5fbc0535ef2c210afd9c2577bc1070e542 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sanderson, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , name=rhceph, RELEASE=main, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, distribution-scope=public)
Feb 23 04:44:32 localhost systemd[1]: Started libpod-conmon-5b83d4691567998d30bbb51e452ffc5fbc0535ef2c210afd9c2577bc1070e542.scope.
Feb 23 04:44:32 localhost systemd[1]: Started libcrun container.
Feb 23 04:44:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edbe9260af051a742e93efe097f19fb21ac6a01e05ca9067131b389d6b32b0b9/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Feb 23 04:44:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edbe9260af051a742e93efe097f19fb21ac6a01e05ca9067131b389d6b32b0b9/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Feb 23 04:44:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edbe9260af051a742e93efe097f19fb21ac6a01e05ca9067131b389d6b32b0b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 23 04:44:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edbe9260af051a742e93efe097f19fb21ac6a01e05ca9067131b389d6b32b0b9/merged/var/lib/ceph/mon/ceph-np0005626465 supports timestamps until 2038 (0x7fffffff)
Feb 23 04:44:32 localhost podman[287124]: 2026-02-23 09:44:32.663782636 +0000 UTC m=+0.142069345 container init 5b83d4691567998d30bbb51e452ffc5fbc0535ef2c210afd9c2577bc1070e542 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sanderson, version=7, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, CEPH_POINT_RELEASE=, RELEASE=main)
Feb 23 04:44:32 localhost podman[287124]: 2026-02-23 09:44:32.565778776 +0000 UTC m=+0.044065525 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:44:32 localhost podman[287124]: 2026-02-23 09:44:32.675159153 +0000 UTC m=+0.153445832 container start 5b83d4691567998d30bbb51e452ffc5fbc0535ef2c210afd9c2577bc1070e542 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sanderson, ceph=True, io.openshift.tags=rhceph ceph, release=1770267347, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, version=7, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, distribution-scope=public, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, architecture=x86_64, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 23 04:44:32 localhost podman[287124]: 2026-02-23 09:44:32.675498784 +0000 UTC m=+0.153785573 container attach 5b83d4691567998d30bbb51e452ffc5fbc0535ef2c210afd9c2577bc1070e542 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sanderson, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, release=1770267347)
Feb 23 04:44:32 localhost systemd[1]: var-lib-containers-storage-overlay-334e1b3c9078f0a95613c78ba9eb9bedf93ea1a7ce7891ce131ff1390a542a12-merged.mount: Deactivated successfully.
Feb 23 04:44:32 localhost systemd[1]: libpod-5b83d4691567998d30bbb51e452ffc5fbc0535ef2c210afd9c2577bc1070e542.scope: Deactivated successfully.
Feb 23 04:44:32 localhost podman[287165]: 2026-02-23 09:44:32.786224092 +0000 UTC m=+0.050531263 container died 5b83d4691567998d30bbb51e452ffc5fbc0535ef2c210afd9c2577bc1070e542 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sanderson, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, distribution-scope=public)
Feb 23 04:44:32 localhost systemd[1]: var-lib-containers-storage-overlay-edbe9260af051a742e93efe097f19fb21ac6a01e05ca9067131b389d6b32b0b9-merged.mount: Deactivated successfully.
Feb 23 04:44:32 localhost podman[287165]: 2026-02-23 09:44:32.816094824 +0000 UTC m=+0.080401925 container remove 5b83d4691567998d30bbb51e452ffc5fbc0535ef2c210afd9c2577bc1070e542 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sanderson, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, version=7, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Feb 23 04:44:32 localhost systemd[1]: libpod-conmon-5b83d4691567998d30bbb51e452ffc5fbc0535ef2c210afd9c2577bc1070e542.scope: Deactivated successfully.
Feb 23 04:44:32 localhost systemd[1]: Reloading.
Feb 23 04:44:32 localhost systemd-sysv-generator[287206]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 04:44:32 localhost systemd-rc-local-generator[287202]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:44:33 localhost systemd[1]: Reloading.
Feb 23 04:44:33 localhost systemd-rc-local-generator[287246]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 04:44:33 localhost systemd-sysv-generator[287251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 23 04:44:33 localhost systemd[1]: Starting Ceph mon.np0005626465 for f1fea371-cb69-578d-a3d0-b5c472a84b46...
Feb 23 04:44:33 localhost podman[287311]:
Feb 23 04:44:33 localhost podman[287311]: 2026-02-23 09:44:33.913503335 +0000 UTC m=+0.077938198 container create 287dcf2f52ac7b0ed8c97be7c6f99602c1d420eb44e0956a4e6532f5eecf9db9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mon-np0005626465, GIT_BRANCH=main, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.buildah.version=1.42.2, release=1770267347, build-date=2026-02-09T10:25:24Z, name=rhceph)
Feb 23 04:44:33 localhost systemd[1]: tmp-crun.R0Oz3Y.mount: Deactivated successfully.
Feb 23 04:44:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62f72e87ac8094e8f004778b4c585735df79e1ee74bc56fd93f47429037219a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 23 04:44:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62f72e87ac8094e8f004778b4c585735df79e1ee74bc56fd93f47429037219a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 23 04:44:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62f72e87ac8094e8f004778b4c585735df79e1ee74bc56fd93f47429037219a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 23 04:44:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62f72e87ac8094e8f004778b4c585735df79e1ee74bc56fd93f47429037219a/merged/var/lib/ceph/mon/ceph-np0005626465 supports timestamps until 2038 (0x7fffffff)
Feb 23 04:44:33 localhost podman[287311]: 2026-02-23 09:44:33.878621691 +0000 UTC m=+0.043056584 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:44:33 localhost podman[287311]: 2026-02-23 09:44:33.981141979 +0000 UTC m=+0.145576842 container init 287dcf2f52ac7b0ed8c97be7c6f99602c1d420eb44e0956a4e6532f5eecf9db9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mon-np0005626465, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 23 04:44:33 localhost podman[287311]: 2026-02-23 09:44:33.991264357 +0000 UTC m=+0.155699220 container start 287dcf2f52ac7b0ed8c97be7c6f99602c1d420eb44e0956a4e6532f5eecf9db9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mon-np0005626465, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, architecture=x86_64, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, RELEASE=main, release=1770267347, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z)
Feb 23 04:44:33 localhost bash[287311]: 287dcf2f52ac7b0ed8c97be7c6f99602c1d420eb44e0956a4e6532f5eecf9db9
Feb 23 04:44:33 localhost systemd[1]: Started Ceph mon.np0005626465 for f1fea371-cb69-578d-a3d0-b5c472a84b46.
Feb 23 04:44:34 localhost ceph-mon[287329]: set uid:gid to 167:167 (ceph:ceph)
Feb 23 04:44:34 localhost ceph-mon[287329]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-mon, pid 2
Feb 23 04:44:34 localhost ceph-mon[287329]: pidfile_write: ignore empty --pid-file
Feb 23 04:44:34 localhost ceph-mon[287329]: load: jerasure load: lrc
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: RocksDB version: 7.9.2
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Git sha 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Compile date 2026-02-06 00:00:00
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: DB SUMMARY
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: DB Session ID: YG0VANVTEI8CVHQGQH5D
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: CURRENT file: CURRENT
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: IDENTITY file: IDENTITY
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005626465/store.db dir, Total Num: 0, files:
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005626465/store.db: 000004.log size: 761 ;
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.error_if_exists: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.create_if_missing: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.paranoid_checks: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.flush_verify_memtable_count: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.env: 0x56423d27da20
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.fs: PosixFileSystem
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.info_log: 0x56423ee7ad20
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_file_opening_threads: 16
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.statistics: (nil)
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.use_fsync: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_log_file_size: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_manifest_file_size: 1073741824
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.log_file_time_to_roll: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.keep_log_file_num: 1000
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.recycle_log_file_num: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.allow_fallocate: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.allow_mmap_reads: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.allow_mmap_writes: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.use_direct_reads: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.create_missing_column_families: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.db_log_dir:
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.wal_dir:
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.table_cache_numshardbits: 6
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.WAL_ttl_seconds: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.WAL_size_limit_MB: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.manifest_preallocation_size: 4194304
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.is_fd_close_on_exec: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.advise_random_on_open: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.db_write_buffer_size: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.write_buffer_manager: 0x56423ee8b540
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.access_hint_on_compaction_start: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.random_access_max_buffer_size: 1048576
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.use_adaptive_mutex: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.rate_limiter: (nil)
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.wal_recovery_mode: 2
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.enable_thread_tracking: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.enable_pipelined_write: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.unordered_write: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.allow_concurrent_memtable_write: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.write_thread_max_yield_usec: 100
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.write_thread_slow_yield_usec: 3
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.row_cache: None
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.wal_filter: None
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.avoid_flush_during_recovery: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.allow_ingest_behind: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.two_write_queues: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.manual_wal_flush: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.wal_compression: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.atomic_flush: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.persist_stats_to_disk: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.write_dbid_to_manifest: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.log_readahead_size: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.file_checksum_gen_factory: Unknown
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.best_efforts_recovery: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.allow_data_in_errors: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.db_host_id: __hostname__
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.enforce_single_del_contracts: true
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_background_jobs: 2
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_background_compactions: -1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_subcompactions: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.avoid_flush_during_shutdown: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.writable_file_max_buffer_size: 1048576
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.delayed_write_rate : 16777216
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_total_wal_size: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.stats_dump_period_sec: 600
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.stats_persist_period_sec: 600
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.stats_history_buffer_size: 1048576
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_open_files: -1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.bytes_per_sync: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.wal_bytes_per_sync: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.strict_bytes_per_sync: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compaction_readahead_size: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_background_flushes: -1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Compression algorithms supported:
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: #011kZSTD supported: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: #011kXpressCompression supported: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: #011kBZip2Compression supported: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: #011kLZ4Compression supported: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: #011kZlibCompression supported: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: #011kLZ4HCCompression supported: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: #011kSnappyCompression supported: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005626465/store.db/MANIFEST-000005
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.merge_operator:
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compaction_filter: None
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compaction_filter_factory: None
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.sst_partitioner_factory: None
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.table_factory: BlockBasedTable
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56423ee7a980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56423ee77350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.write_buffer_size: 33554432
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_write_buffer_number: 2
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compression: NoCompression
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.bottommost_compression: Disabled
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.prefix_extractor: nullptr
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.num_levels: 7
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.min_write_buffer_number_to_merge: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compression_opts.window_bits: -14
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compression_opts.level: 32767
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compression_opts.strategy: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compression_opts.enabled: false
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.level0_file_num_compaction_trigger: 4
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.target_file_size_base: 67108864
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.target_file_size_multiplier: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_bytes_for_level_base: 268435456
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb:
Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.arena_block_size: 1048576 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: 
Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.table_properties_collectors: Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.inplace_update_support: 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.bloom_locality: 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.max_successive_merges: 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.force_consistency_checks: 1 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.ttl: 2592000 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.enable_blob_files: false Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.min_blob_size: 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.blob_file_size: 268435456 Feb 23 04:44:34 
localhost ceph-mon[287329]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005626465/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f63238e2-844d-4c49-b660-105bb635e407 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771839874043589, "job": 1, "event": "recovery_started", "wal_files": [4]} Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771839874046113, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, 
"index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771839874, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f63238e2-844d-4c49-b660-105bb635e407", "db_session_id": "YG0VANVTEI8CVHQGQH5D", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771839874046222, "job": 1, "event": "recovery_finished"} Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56423ee9ee00 Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: DB pointer 0x56423ef94000 Feb 23 04:44:34 localhost ceph-mon[287329]: mon.np0005626465 does not exist in monmap, will attempt to join an existing cluster Feb 23 04:44:34 localhost ceph-mon[287329]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 04:44:34 
localhost ceph-mon[287329]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.84 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Sum 1/0 1.84 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, 
interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56423ee77350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Feb 23 04:44:34 localhost ceph-mon[287329]: using public_addr v2:172.18.0.107:0/0 -> [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] Feb 23 04:44:34 localhost ceph-mon[287329]: starting mon.np0005626465 rank -1 at public addrs [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] at bind addrs [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005626465 fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 Feb 23 04:44:34 localhost ceph-mon[287329]: mon.np0005626465@-1(???) 
e0 preinit fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 Feb 23 04:44:34 localhost ceph-mon[287329]: mon.np0005626465@-1(synchronizing) e4 sync_obtain_latest_monmap Feb 23 04:44:34 localhost ceph-mon[287329]: mon.np0005626465@-1(synchronizing) e4 sync_obtain_latest_monmap obtained monmap e4 Feb 23 04:44:34 localhost ceph-mon[287329]: mon.np0005626465@-1(synchronizing).mds e17 new map Feb 23 04:44:34 localhost ceph-mon[287329]: mon.np0005626465@-1(synchronizing).mds e17 print_map#012e17#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01116#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-23T07:57:46.097663+0000#012modified#0112026-02-23T09:43:29.529267+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01179#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26518}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26518 members: 26518#012[mds.mds.np0005626463.qcthuc{0:26518} state up:active seq 13 addr [v2:172.18.0.106:6808/2515508693,v1:172.18.0.106:6809/2515508693] compat 
{c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005626465.drvnoy{-1:26498} state up:standby seq 1 addr [v2:172.18.0.107:6808/2939113664,v1:172.18.0.107:6809/2939113664] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005626466.vaywlp{-1:26506} state up:standby seq 1 addr [v2:172.18.0.108:6808/2035422599,v1:172.18.0.108:6809/2035422599] compat {c=[1],r=[1],i=[17ff]}] Feb 23 04:44:34 localhost ceph-mon[287329]: mon.np0005626465@-1(synchronizing).osd e80 crush map has features 3314933000854323200, adjusting msgr requires Feb 23 04:44:34 localhost ceph-mon[287329]: mon.np0005626465@-1(synchronizing).osd e80 crush map has features 432629239337189376, adjusting msgr requires Feb 23 04:44:34 localhost ceph-mon[287329]: mon.np0005626465@-1(synchronizing).osd e80 crush map has features 432629239337189376, adjusting msgr requires Feb 23 04:44:34 localhost ceph-mon[287329]: mon.np0005626465@-1(synchronizing).osd e80 crush map has features 432629239337189376, adjusting msgr requires Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: Added label mgr to host np0005626463.localdomain Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "config rm", "who": "osd.5", 
"name": "osd_memory_target"} : dispatch Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 23 04:44:34 localhost ceph-mon[287329]: Adjusting osd_memory_target on np0005626463.localdomain to 3396M Feb 23 04:44:34 localhost ceph-mon[287329]: Adjusting osd_memory_target on np0005626465.localdomain to 3396M Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: Adjusting osd_memory_target on np0005626466.localdomain to 3396M Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: Added label mgr to host np0005626465.localdomain Feb 23 04:44:34 localhost ceph-mon[287329]: 
from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: Added label mgr to host np0005626466.localdomain Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: Saving service mgr spec with placement label:mgr Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Feb 23 04:44:34 localhost ceph-mon[287329]: Deploying daemon mgr.np0005626463.wtksup on np0005626463.localdomain Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' 
entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Feb 23 04:44:34 localhost ceph-mon[287329]: Deploying daemon mgr.np0005626465.hlpkwo on np0005626465.localdomain Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: Added label mon to host np0005626459.localdomain Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: Added label _admin to host np0005626459.localdomain Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' 
cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Feb 23 04:44:34 localhost ceph-mon[287329]: Deploying daemon mgr.np0005626466.nisqfq on np0005626466.localdomain Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: Added label mon to host np0005626460.localdomain Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: Added label _admin to host np0005626460.localdomain Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: Added label mon to host np0005626461.localdomain Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 
04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: Added label _admin to host np0005626461.localdomain Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: Added label mon to host np0005626463.localdomain Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:34 localhost ceph-mon[287329]: Added label _admin to host np0005626463.localdomain Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:44:34 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:44:34 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 
172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:34 localhost ceph-mon[287329]: Added label mon to host np0005626465.localdomain
Feb 23 04:44:34 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 23 04:44:34 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring
Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:34 localhost ceph-mon[287329]: Added label _admin to host np0005626465.localdomain
Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:34 localhost ceph-mon[287329]: Added label mon to host np0005626466.localdomain
Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:34 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/etc/ceph/ceph.conf
Feb 23 04:44:34 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:34 localhost ceph-mon[287329]: Added label _admin to host np0005626466.localdomain
Feb 23 04:44:34 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 23 04:44:34 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring
Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:34 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 23 04:44:34 localhost ceph-mon[287329]: Saving service mon spec with placement label:mon
Feb 23 04:44:34 localhost ceph-mon[287329]: Deploying daemon mon.np0005626466 on np0005626466.localdomain
Feb 23 04:44:34 localhost ceph-mon[287329]: mon.np0005626465@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3
Feb 23 04:44:38 localhost ceph-mgr[285904]: ms_deliver_dispatch: unhandled message 0x55ef9c1671e0 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0
Feb 23 04:44:39 localhost ceph-mon[287329]: mon.np0005626465@-1(probing) e4 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Feb 23 04:44:39 localhost ceph-mon[287329]: mon.np0005626465@-1(probing) e4 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Feb 23 04:44:40 localhost ceph-mon[287329]: mon.np0005626465@-1(probing) e5 my rank is now 4 (was -1)
Feb 23 04:44:40 localhost ceph-mon[287329]: log_channel(cluster) log [INF] : mon.np0005626465 calling monitor election
Feb 23 04:44:40 localhost ceph-mon[287329]: paxos.4).electionLogic(0) init, first boot, initializing epoch at 1
Feb 23 04:44:40 localhost ceph-mon[287329]: mon.np0005626465@4(electing) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 23 04:44:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.
Feb 23 04:44:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.
Feb 23 04:44:41 localhost podman[287368]: 2026-02-23 09:44:41.011193873 +0000 UTC m=+0.085256393 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors )
Feb 23 04:44:41 localhost podman[287368]: 2026-02-23 09:44:41.019376802 +0000 UTC m=+0.093439292 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Feb 23 04:44:41 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully.
Feb 23 04:44:41 localhost podman[287369]: 2026-02-23 09:44:41.080137326 +0000 UTC m=+0.146992866 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, distribution-scope=public, config_id=openstack_network_exporter, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1770267347, version=9.7, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc.)
Feb 23 04:44:41 localhost podman[287369]: 2026-02-23 09:44:41.121762477 +0000 UTC m=+0.188618047 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, release=1770267347, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 23 04:44:41 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully.
Feb 23 04:44:41 localhost ceph-mon[287329]: mon.np0005626465@4(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Feb 23 04:44:42 localhost podman[241086]: time="2026-02-23T09:44:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 23 04:44:42 localhost podman[241086]: @ - - [23/Feb/2026:09:44:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1"
Feb 23 04:44:42 localhost podman[241086]: @ - - [23/Feb/2026:09:44:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17768 "" "Go-http-client/1.1"
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626465@4(electing) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e5 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e5 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Feb 23 04:44:43 localhost ceph-mon[287329]: Deploying daemon mon.np0005626465 on np0005626465.localdomain
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626461 calling monitor election
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626459 calling monitor election
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626460 calling monitor election
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626466 calling monitor election
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626459 is new leader, mons np0005626459,np0005626461,np0005626460,np0005626466 in quorum (ranks 0,1,2,3)
Feb 23 04:44:43 localhost ceph-mon[287329]: overall HEALTH_OK
Feb 23 04:44:43 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:43 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:43 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:43 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 23 04:44:43 localhost ceph-mon[287329]: Deploying daemon mon.np0005626463 on np0005626463.localdomain
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 23 04:44:43 localhost ceph-mon[287329]: mgrc update_daemon_metadata mon.np0005626465 metadata {addrs=[v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable),ceph_version_short=18.2.1-381.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005626465.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005626465.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux}
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626459 calling monitor election
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626460 calling monitor election
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626461 calling monitor election
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626466 calling monitor election
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626465 calling monitor election
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626459 is new leader, mons np0005626459,np0005626461,np0005626460,np0005626466,np0005626465 in quorum (ranks 0,1,2,3,4)
Feb 23 04:44:43 localhost ceph-mon[287329]: overall HEALTH_OK
Feb 23 04:44:43 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:43 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Feb 23 04:44:43 localhost ceph-mgr[285904]: ms_deliver_dispatch: unhandled message 0x55ef9c167600 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0
Feb 23 04:44:43 localhost ceph-mon[287329]: log_channel(cluster) log [INF] : mon.np0005626465 calling monitor election
Feb 23 04:44:43 localhost ceph-mon[287329]: paxos.4).electionLogic(22) init, last seen epoch 22
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626465@4(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 23 04:44:43 localhost ceph-mon[287329]: mon.np0005626465@4(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 23 04:44:44 localhost systemd[1]: tmp-crun.Z3RY01.mount: Deactivated successfully.
Feb 23 04:44:44 localhost podman[287537]: 2026-02-23 09:44:44.995416249 +0000 UTC m=+0.093556745 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, release=1770267347, ceph=True, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64)
Feb 23 04:44:45 localhost podman[287537]: 2026-02-23 09:44:45.099263237 +0000 UTC m=+0.197403713 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, distribution-scope=public, io.openshift.expose-services=, RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_BRANCH=main)
Feb 23 04:44:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:44:48.300 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 23 04:44:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:44:48.302 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 23 04:44:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:44:48.302 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 23 04:44:48 localhost ceph-mon[287329]: mon.np0005626465@4(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 23 04:44:48 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 23 04:44:48 localhost ceph-mon[287329]: mon.np0005626459 calling monitor election
Feb 23 04:44:48 localhost ceph-mon[287329]: mon.np0005626460 calling monitor election
Feb 23 04:44:48 localhost ceph-mon[287329]: mon.np0005626461 calling monitor election
Feb 23 04:44:48 localhost ceph-mon[287329]: mon.np0005626466 calling monitor election
Feb 23 04:44:48 localhost ceph-mon[287329]: mon.np0005626465 calling monitor election
Feb 23 04:44:48 localhost ceph-mon[287329]: mon.np0005626463 calling monitor election
Feb 23 04:44:48 localhost ceph-mon[287329]: mon.np0005626459 is new leader, mons np0005626459,np0005626461,np0005626460,np0005626466,np0005626465,np0005626463 in quorum (ranks 0,1,2,3,4,5)
Feb 23 04:44:48 localhost ceph-mon[287329]: overall HEALTH_OK
Feb 23 04:44:48 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:48 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.
Feb 23 04:44:48 localhost systemd[1]: tmp-crun.OasWDK.mount: Deactivated successfully.
Feb 23 04:44:49 localhost podman[287675]: 2026-02-23 09:44:49.003493673 +0000 UTC m=+0.090953625 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller)
Feb 23 04:44:49 localhost podman[287675]: 2026-02-23 09:44:49.111112616 +0000 UTC m=+0.198572568 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260216, tcib_managed=true)
Feb 23 04:44:49 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully.
Feb 23 04:44:49 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:49 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:49 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:49 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:49 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:49 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:44:49 localhost ceph-mon[287329]: Updating np0005626459.localdomain:/etc/ceph/ceph.conf
Feb 23 04:44:49 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/etc/ceph/ceph.conf
Feb 23 04:44:49 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/etc/ceph/ceph.conf
Feb 23 04:44:49 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/etc/ceph/ceph.conf
Feb 23 04:44:49 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/etc/ceph/ceph.conf
Feb 23 04:44:49 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/etc/ceph/ceph.conf
Feb 23 04:44:51 localhost ceph-mon[287329]: Updating np0005626459.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:44:51 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:44:51 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:44:51 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:44:51 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:44:51 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:44:51 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 23 04:44:51 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:51 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:51 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:51 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:51 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:51 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:51 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:51 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:51 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:51 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:52 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring
Feb 23 04:44:52 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:52 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:52 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:52 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 23 04:44:53 localhost ceph-mon[287329]: Reconfiguring mon.np0005626459 (monmap changed)...
Feb 23 04:44:53 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626459 on np0005626459.localdomain
Feb 23 04:44:53 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:53 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:53 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626459.pmtxxl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:44:54 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626459.pmtxxl (monmap changed)...
Feb 23 04:44:54 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626459.pmtxxl on np0005626459.localdomain
Feb 23 04:44:54 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:54 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:54 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626459", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:44:54 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:55 localhost ceph-mon[287329]: Reconfiguring crash.np0005626459 (monmap changed)...
Feb 23 04:44:55 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626459 on np0005626459.localdomain
Feb 23 04:44:55 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:55 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:55 localhost ceph-mon[287329]: Reconfiguring crash.np0005626460 (monmap changed)...
Feb 23 04:44:55 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626460", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:44:55 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626460 on np0005626460.localdomain
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster
network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:44:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:44:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:44:56 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:56 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' 
entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:56 localhost ceph-mon[287329]: Reconfiguring mon.np0005626460 (monmap changed)... Feb 23 04:44:56 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:44:56 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626460 on np0005626460.localdomain Feb 23 04:44:56 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon).osd e80 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon).osd e80 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon).osd e81 e81: 6 total, 6 up, 6 in Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626459"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626459"} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626460"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626460"} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626461"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' 
entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626461"} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626463"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626463"} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626465"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626465"} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626466"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626466"} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005626465.drvnoy"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mds metadata", "who": "mds.np0005626465.drvnoy"} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon).mds e17 all = 0 Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005626466.vaywlp"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' 
cmd={"prefix": "mds metadata", "who": "mds.np0005626466.vaywlp"} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon).mds e17 all = 0 Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005626463.qcthuc"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mds metadata", "who": "mds.np0005626463.qcthuc"} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon).mds e17 all = 0 Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005626461.lrfquh", "id": "np0005626461.lrfquh"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr metadata", "who": "np0005626461.lrfquh", "id": "np0005626461.lrfquh"} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005626460.fyrady", "id": "np0005626460.fyrady"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr metadata", "who": "np0005626460.fyrady", "id": "np0005626460.fyrady"} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005626465.hlpkwo", "id": "np0005626465.hlpkwo"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr metadata", "who": "np0005626465.hlpkwo", "id": "np0005626465.hlpkwo"} : dispatch Feb 23 04:44:57 localhost 
ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005626463.wtksup", "id": "np0005626463.wtksup"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr metadata", "who": "np0005626463.wtksup", "id": "np0005626463.wtksup"} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005626466.nisqfq", "id": "np0005626466.nisqfq"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr metadata", "who": "np0005626466.nisqfq", "id": "np0005626466.nisqfq"} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "osd metadata", "id": 0} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "osd metadata", "id": 1} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "osd metadata", "id": 2} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "osd 
metadata", "id": 3} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "osd metadata", "id": 3} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "osd metadata", "id": 4} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "osd metadata", "id": 5} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mds metadata"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mds metadata"} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon).mds e17 all = 1 Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "osd metadata"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "osd metadata"} : dispatch Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mon metadata"} v 0) Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata"} : dispatch Feb 23 04:44:57 localhost 
systemd[1]: session-19.scope: Deactivated successfully. Feb 23 04:44:57 localhost systemd[1]: session-18.scope: Deactivated successfully. Feb 23 04:44:57 localhost systemd-logind[759]: Session 19 logged out. Waiting for processes to exit. Feb 23 04:44:57 localhost systemd[1]: session-20.scope: Deactivated successfully. Feb 23 04:44:57 localhost systemd-logind[759]: Session 18 logged out. Waiting for processes to exit. Feb 23 04:44:57 localhost systemd-logind[759]: Session 20 logged out. Waiting for processes to exit. Feb 23 04:44:57 localhost systemd[1]: session-21.scope: Deactivated successfully. Feb 23 04:44:57 localhost systemd[1]: session-17.scope: Deactivated successfully. Feb 23 04:44:57 localhost systemd[1]: session-23.scope: Deactivated successfully. Feb 23 04:44:57 localhost systemd[1]: session-24.scope: Deactivated successfully. Feb 23 04:44:57 localhost systemd[1]: session-14.scope: Deactivated successfully. Feb 23 04:44:57 localhost systemd[1]: session-16.scope: Deactivated successfully. Feb 23 04:44:57 localhost systemd-logind[759]: Session 21 logged out. Waiting for processes to exit. Feb 23 04:44:57 localhost systemd-logind[759]: Session 17 logged out. Waiting for processes to exit. Feb 23 04:44:57 localhost systemd-logind[759]: Session 14 logged out. Waiting for processes to exit. Feb 23 04:44:57 localhost systemd-logind[759]: Session 23 logged out. Waiting for processes to exit. Feb 23 04:44:57 localhost systemd-logind[759]: Session 16 logged out. Waiting for processes to exit. Feb 23 04:44:57 localhost systemd-logind[759]: Session 24 logged out. Waiting for processes to exit. Feb 23 04:44:57 localhost systemd[1]: session-26.scope: Deactivated successfully. Feb 23 04:44:57 localhost systemd[1]: session-26.scope: Consumed 3min 18.032s CPU time. Feb 23 04:44:57 localhost systemd[1]: session-22.scope: Deactivated successfully. Feb 23 04:44:57 localhost systemd[1]: session-25.scope: Deactivated successfully. 
Feb 23 04:44:57 localhost systemd-logind[759]: Session 22 logged out. Waiting for processes to exit. Feb 23 04:44:57 localhost systemd-logind[759]: Session 26 logged out. Waiting for processes to exit. Feb 23 04:44:57 localhost systemd-logind[759]: Session 25 logged out. Waiting for processes to exit. Feb 23 04:44:57 localhost systemd-logind[759]: Removed session 19. Feb 23 04:44:57 localhost systemd-logind[759]: Removed session 18. Feb 23 04:44:57 localhost systemd-logind[759]: Removed session 20. Feb 23 04:44:57 localhost systemd-logind[759]: Removed session 21. Feb 23 04:44:57 localhost systemd-logind[759]: Removed session 17. Feb 23 04:44:57 localhost systemd-logind[759]: Removed session 23. Feb 23 04:44:57 localhost systemd-logind[759]: Removed session 24. Feb 23 04:44:57 localhost systemd-logind[759]: Removed session 14. Feb 23 04:44:57 localhost systemd-logind[759]: Removed session 16. Feb 23 04:44:57 localhost systemd-logind[759]: Removed session 26. Feb 23 04:44:57 localhost systemd-logind[759]: Removed session 22. Feb 23 04:44:57 localhost systemd-logind[759]: Removed session 25. 
Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626461.lrfquh/mirror_snapshot_schedule"} v 0)
Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626461.lrfquh/mirror_snapshot_schedule"} : dispatch
Feb 23 04:44:57 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626461.lrfquh/trash_purge_schedule"} v 0)
Feb 23 04:44:57 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626461.lrfquh/trash_purge_schedule"} : dispatch
Feb 23 04:44:57 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl'
Feb 23 04:44:57 localhost ceph-mon[287329]: from='mgr.14120 172.18.0.103:0/3790934138' entity='mgr.np0005626459.pmtxxl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:44:57 localhost ceph-mon[287329]: from='client.? 172.18.0.103:0/2046273284' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Feb 23 04:44:57 localhost ceph-mon[287329]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Feb 23 04:44:57 localhost ceph-mon[287329]: Activating manager daemon np0005626461.lrfquh
Feb 23 04:44:57 localhost ceph-mon[287329]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Feb 23 04:44:57 localhost ceph-mon[287329]: Manager daemon np0005626461.lrfquh is now available
Feb 23 04:44:57 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626461.lrfquh/mirror_snapshot_schedule"} : dispatch
Feb 23 04:44:57 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626461.lrfquh/mirror_snapshot_schedule"} : dispatch
Feb 23 04:44:57 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626461.lrfquh/trash_purge_schedule"} : dispatch
Feb 23 04:44:57 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626461.lrfquh/trash_purge_schedule"} : dispatch
Feb 23 04:44:57 localhost sshd[288020]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:44:57 localhost systemd-logind[759]: New session 63 of user ceph-admin.
Feb 23 04:44:57 localhost systemd[1]: Started Session 63 of User ceph-admin.
Feb 23 04:44:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.
Feb 23 04:44:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.
Feb 23 04:44:58 localhost podman[288104]: 2026-02-23 09:44:58.534046477 +0000 UTC m=+0.088730518 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 23 04:44:58 localhost podman[288103]: 2026-02-23 09:44:58.58824073 +0000 UTC m=+0.142817117 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 23 04:44:58 localhost podman[288103]: 2026-02-23 09:44:58.599736132 +0000 UTC m=+0.154312509 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 23 04:44:58 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully.
Feb 23 04:44:58 localhost podman[288104]: 2026-02-23 09:44:58.668745977 +0000 UTC m=+0.223430008 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Feb 23 04:44:58 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully.
Feb 23 04:44:58 localhost podman[288165]: 2026-02-23 09:44:58.817567247 +0000 UTC m=+0.092560605 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, maintainer=Guillaume Abrioux , architecture=x86_64, build-date=2026-02-09T10:25:24Z, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.openshift.expose-services=, CEPH_POINT_RELEASE=, release=1770267347, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, ceph=True)
Feb 23 04:44:58 localhost podman[288165]: 2026-02-23 09:44:58.920338663 +0000 UTC m=+0.195332011 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, distribution-scope=public, io.buildah.version=1.42.2, vendor=Red Hat, Inc.)
Feb 23 04:44:59 localhost ceph-mon[287329]: mon.np0005626465@4(peon).osd e81 _set_new_cache_sizes cache_size:1019693270 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:44:59 localhost sshd[288282]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:44:59 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0) Feb 23 04:44:59 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:44:59 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626459.localdomain.devices.0}] v 0) Feb 23 04:44:59 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:44:59 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:44:59 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0) Feb 23 04:44:59 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:44:59 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626459.localdomain}] v 0) Feb 23 04:44:59 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:44:59 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:45:00 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:45:00 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:45:00 localhost ceph-mon[287329]: [23/Feb/2026:09:44:59] ENGINE Bus STARTING Feb 23 04:45:00 localhost ceph-mon[287329]: [23/Feb/2026:09:44:59] ENGINE Serving on https://172.18.0.105:7150 Feb 23 04:45:00 localhost ceph-mon[287329]: [23/Feb/2026:09:44:59] ENGINE Client ('172.18.0.105', 36526) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 23 04:45:00 localhost ceph-mon[287329]: [23/Feb/2026:09:44:59] ENGINE Serving on http://172.18.0.105:8765 Feb 23 04:45:00 localhost ceph-mon[287329]: [23/Feb/2026:09:44:59] ENGINE Bus STARTED Feb 23 04:45:00 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:00 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:00 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:00 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:00 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:00 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:00 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:00 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:00 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:00 localhost ceph-mon[287329]: from='mgr.14193 ' 
entity='mgr.np0005626461.lrfquh' Feb 23 04:45:00 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:00 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626459.localdomain.devices.0}] v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626459.localdomain}] v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005626460", "name": "osd_memory_target"} v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd/host:np0005626460", "name": "osd_memory_target"} : dispatch Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005626459", "name": "osd_memory_target"} v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd/host:np0005626459", "name": "osd_memory_target"} : dispatch Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: log_channel(audit) log [INF] : 
from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 23 04:45:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 04:45:01 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:45:02 localhost openstack_network_exporter[243519]: ERROR 09:45:02 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:45:02 localhost openstack_network_exporter[243519]: Feb 23 04:45:02 localhost openstack_network_exporter[243519]: ERROR 09:45:02 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:45:02 localhost openstack_network_exporter[243519]: Feb 23 04:45:02 localhost systemd[1]: tmp-crun.iIdRQn.mount: Deactivated successfully. Feb 23 04:45:02 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:45:02 localhost podman[288407]: 2026-02-23 09:45:02.027296155 +0000 UTC m=+0.100123216 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 04:45:02 
localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005626461", "name": "osd_memory_target"} v 0) Feb 23 04:45:02 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd/host:np0005626461", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost podman[288407]: 2026-02-23 09:45:02.034072032 +0000 UTC m=+0.106899093 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 04:45:02 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:02 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 
04:45:02 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:45:02 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd/host:np0005626460", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd/host:np0005626460", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd/host:np0005626459", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd/host:np0005626459", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 
172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": 
"osd.4", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: Adjusting osd_memory_target on np0005626466.localdomain to 836.6M Feb 23 04:45:02 localhost ceph-mon[287329]: Adjusting osd_memory_target on np0005626463.localdomain to 836.6M Feb 23 04:45:02 localhost ceph-mon[287329]: Adjusting osd_memory_target on np0005626465.localdomain to 836.6M Feb 23 04:45:02 localhost ceph-mon[287329]: Unable to set osd_memory_target on np0005626466.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 23 04:45:02 localhost ceph-mon[287329]: Unable to set osd_memory_target on np0005626463.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 04:45:02 localhost ceph-mon[287329]: Unable to set osd_memory_target on np0005626465.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd/host:np0005626461", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config rm", "who": "osd/host:np0005626461", "name": "osd_memory_target"} : dispatch Feb 23 04:45:02 localhost ceph-mon[287329]: from='mgr.14193 
172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:45:03 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005626459.pmtxxl", "id": "np0005626459.pmtxxl"} v 0) Feb 23 04:45:03 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr metadata", "who": "np0005626459.pmtxxl", "id": "np0005626459.pmtxxl"} : dispatch Feb 23 04:45:03 localhost ceph-mon[287329]: Updating np0005626459.localdomain:/etc/ceph/ceph.conf Feb 23 04:45:03 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/etc/ceph/ceph.conf Feb 23 04:45:03 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/etc/ceph/ceph.conf Feb 23 04:45:03 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:45:03 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:45:03 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:45:03 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:45:03 localhost ceph-mon[287329]: Updating np0005626459.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:45:03 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:45:03 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:45:03 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:45:04 localhost ceph-mon[287329]: mon.np0005626465@4(peon).osd e81 _set_new_cache_sizes 
cache_size:1020047494 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:45:04 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:45:04 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:45:04 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:45:04 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:45:04 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:45:04 localhost ceph-mon[287329]: Updating np0005626459.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:45:04 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:45:04 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0) Feb 23 04:45:04 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0) Feb 23 04:45:04 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:45:04 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:45:04 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626459.localdomain.devices.0}] v 0) Feb 23 04:45:04 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005626459.localdomain}] v 0) Feb 23 04:45:04 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:45:04 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:45:04 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:45:04 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:45:04 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:45:04 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:45:05 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 04:45:05 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 04:45:05 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 04:45:05 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 23 04:45:05 localhost ceph-mon[287329]: 
log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:05 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 23 04:45:05 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr services"} : dispatch Feb 23 04:45:05 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:05 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:05 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:45:05 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:45:05 localhost ceph-mon[287329]: Updating np0005626459.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:45:05 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:45:05 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:45:05 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:45:05 localhost ceph-mon[287329]: from='mgr.14193 ' 
entity='mgr.np0005626461.lrfquh' Feb 23 04:45:05 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:05 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:05 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:05 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:05 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:05 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:05 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:05 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:05 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:05 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:05 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:05 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:05 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:05 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:06 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0) Feb 23 04:45:06 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key 
set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0) Feb 23 04:45:06 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 23 04:45:06 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:45:06 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 23 04:45:06 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 23 04:45:06 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:06 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:06 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626460.fyrady (monmap changed)... 
Feb 23 04:45:06 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626460.fyrady on np0005626460.localdomain Feb 23 04:45:06 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:06 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:06 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:45:06 localhost sshd[289089]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:45:07 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 04:45:07 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:45:07 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:45:07 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 23 04:45:07 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:07 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 23 04:45:07 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr services"} : dispatch Feb 23 04:45:07 localhost 
ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:07 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:07 localhost ceph-mon[287329]: Reconfiguring mon.np0005626461 (monmap changed)... Feb 23 04:45:07 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626461 on np0005626461.localdomain Feb 23 04:45:07 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:07 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:07 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:07 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:07 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:08 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:45:08 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:45:08 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 23 04:45:08 localhost ceph-mon[287329]: 
log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:08 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:08 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:08 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626461.lrfquh (monmap changed)... Feb 23 04:45:08 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626461.lrfquh on np0005626461.localdomain Feb 23 04:45:08 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:08 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:08 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:08 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:08 localhost nova_compute[280321]: 2026-02-23 09:45:08.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:45:09 localhost ceph-mon[287329]: mon.np0005626465@4(peon).osd e81 _set_new_cache_sizes cache_size:1020054583 inc_alloc: 348127232 full_alloc: 
348127232 kv_alloc: 322961408 Feb 23 04:45:09 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:45:09 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:45:09 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 23 04:45:09 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:09 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:09 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:09 localhost ceph-mon[287329]: Reconfiguring crash.np0005626461 (monmap changed)... 
Feb 23 04:45:09 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626461 on np0005626461.localdomain Feb 23 04:45:09 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:09 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:09 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:09 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:09 localhost nova_compute[280321]: 2026-02-23 09:45:09.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:45:09 localhost nova_compute[280321]: 2026-02-23 09:45:09.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:45:09 localhost nova_compute[280321]: 2026-02-23 09:45:09.912 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:45:09 localhost nova_compute[280321]: 2026-02-23 09:45:09.912 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock 
"compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:45:09 localhost nova_compute[280321]: 2026-02-23 09:45:09.913 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:45:09 localhost nova_compute[280321]: 2026-02-23 09:45:09.913 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:45:09 localhost nova_compute[280321]: 2026-02-23 09:45:09.913 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:45:10 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:45:10 localhost nova_compute[280321]: 2026-02-23 09:45:10.376 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:45:10 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:45:10 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) Feb 23 04:45:10 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 23 04:45:10 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:10 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:10 localhost nova_compute[280321]: 2026-02-23 09:45:10.562 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:45:10 localhost nova_compute[280321]: 2026-02-23 09:45:10.564 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=12429MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:45:10 localhost nova_compute[280321]: 2026-02-23 09:45:10.565 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:45:10 localhost nova_compute[280321]: 2026-02-23 09:45:10.565 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:45:10 localhost nova_compute[280321]: 2026-02-23 09:45:10.637 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:45:10 localhost nova_compute[280321]: 2026-02-23 09:45:10.638 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:45:10 localhost nova_compute[280321]: 2026-02-23 09:45:10.665 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:45:10 localhost ceph-mon[287329]: Reconfiguring crash.np0005626463 (monmap changed)... 
Feb 23 04:45:10 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626463 on np0005626463.localdomain Feb 23 04:45:10 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:10 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:10 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:45:10.855738) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771839910855815, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 11060, "num_deletes": 526, "total_data_size": 15191896, "memory_usage": 15785152, "flush_reason": "Manual Compaction"} Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771839910906746, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 10764067, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 11065, "table_properties": {"data_size": 10711405, "index_size": 27340, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, 
"filter_size": 24261, "raw_key_size": 255202, "raw_average_key_size": 26, "raw_value_size": 10548045, "raw_average_value_size": 1087, "num_data_blocks": 1030, "num_entries": 9702, "num_filter_entries": 9702, "num_deletions": 525, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771839874, "oldest_key_time": 1771839874, "file_creation_time": 1771839910, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f63238e2-844d-4c49-b660-105bb635e407", "db_session_id": "YG0VANVTEI8CVHQGQH5D", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 51085 microseconds, and 22977 cpu microseconds. 
Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:45:10.906823) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 10764067 bytes OK Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:45:10.906849) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:45:10.908703) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:45:10.908726) EVENT_LOG_v1 {"time_micros": 1771839910908720, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:45:10.908747) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 15117502, prev total WAL file size 15117502, number of live WAL files 2. Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:45:10.910955) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130303430' seq:72057594037927935, type:22 .. 
'7061786F73003130323932' seq:0, type:0; will stop at (end) Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(10MB) 8(1887B)] Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771839910911036, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 10765954, "oldest_snapshot_seqno": -1} Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9180 keys, 10756090 bytes, temperature: kUnknown Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771839910982756, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 10756090, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10704756, "index_size": 27297, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 246593, "raw_average_key_size": 26, "raw_value_size": 10548103, "raw_average_value_size": 1149, "num_data_blocks": 1028, "num_entries": 9180, "num_filter_entries": 9180, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771839874, "oldest_key_time": 0, "file_creation_time": 1771839910, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f63238e2-844d-4c49-b660-105bb635e407", "db_session_id": "YG0VANVTEI8CVHQGQH5D", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:45:10.983241) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 10756090 bytes Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:45:10.985166) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.5 rd, 149.4 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(10.3, 0.0 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 9707, records dropped: 527 output_compression: NoCompression Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:45:10.985197) EVENT_LOG_v1 {"time_micros": 1771839910985184, "job": 4, "event": "compaction_finished", "compaction_time_micros": 71998, "compaction_time_cpu_micros": 29021, "output_level": 6, "num_output_files": 1, "total_output_size": 10756090, "num_input_records": 9707, "num_output_records": 9180, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000014.sst immediately, 
rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771839910987841, "job": 4, "event": "table_file_deletion", "file_number": 14} Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771839910988228, "job": 4, "event": "table_file_deletion", "file_number": 8} Feb 23 04:45:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:45:10.910902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:45:11 localhost nova_compute[280321]: 2026-02-23 09:45:11.101 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:45:11 localhost nova_compute[280321]: 2026-02-23 09:45:11.108 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:45:11 localhost nova_compute[280321]: 2026-02-23 09:45:11.131 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 
'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:45:11 localhost nova_compute[280321]: 2026-02-23 09:45:11.134 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:45:11 localhost nova_compute[280321]: 2026-02-23 09:45:11.135 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:45:11 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:45:11 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:45:11 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0) Feb 23 04:45:11 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 23 04:45:11 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:11 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' 
entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:11 localhost ceph-mon[287329]: Reconfiguring osd.2 (monmap changed)... Feb 23 04:45:11 localhost ceph-mon[287329]: Reconfiguring daemon osd.2 on np0005626463.localdomain Feb 23 04:45:11 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:11 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:11 localhost ceph-mon[287329]: Reconfiguring osd.5 (monmap changed)... Feb 23 04:45:11 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 23 04:45:11 localhost ceph-mon[287329]: Reconfiguring daemon osd.5 on np0005626463.localdomain Feb 23 04:45:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:45:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:45:12 localhost systemd[1]: tmp-crun.lsHhXg.mount: Deactivated successfully. 
Feb 23 04:45:12 localhost podman[289136]: 2026-02-23 09:45:12.007104265 +0000 UTC m=+0.076424333 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 23 04:45:12 localhost podman[289136]: 2026-02-23 09:45:12.020156852 +0000 UTC m=+0.089476940 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, release=1770267347, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public) Feb 23 04:45:12 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:45:12 localhost nova_compute[280321]: 2026-02-23 09:45:12.136 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:45:12 localhost nova_compute[280321]: 2026-02-23 09:45:12.137 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:45:12 localhost nova_compute[280321]: 2026-02-23 09:45:12.137 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:45:12 localhost nova_compute[280321]: 2026-02-23 09:45:12.137 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of 
instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:45:12 localhost systemd[1]: tmp-crun.Oe5ndD.mount: Deactivated successfully. Feb 23 04:45:12 localhost podman[289135]: 2026-02-23 09:45:12.146171837 +0000 UTC m=+0.215073523 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:45:12 localhost nova_compute[280321]: 2026-02-23 09:45:12.153 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache 
update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:45:12 localhost nova_compute[280321]: 2026-02-23 09:45:12.153 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:45:12 localhost nova_compute[280321]: 2026-02-23 09:45:12.154 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:45:12 localhost nova_compute[280321]: 2026-02-23 09:45:12.154 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:45:12 localhost podman[289135]: 2026-02-23 09:45:12.155007337 +0000 UTC m=+0.223909023 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 
'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 04:45:12 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:45:12 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:45:12 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:45:12 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 23 04:45:12 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:45:12 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:12 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' 
entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:12 localhost podman[241086]: time="2026-02-23T09:45:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:45:12 localhost podman[241086]: @ - - [23/Feb/2026:09:45:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 04:45:12 localhost podman[241086]: @ - - [23/Feb/2026:09:45:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17776 "" "Go-http-client/1.1" Feb 23 04:45:12 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "quorum_status"} v 0) Feb 23 04:45:12 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "quorum_status"} : dispatch Feb 23 04:45:12 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e6 handle_command mon_command({"prefix": "mon rm", "name": "np0005626459"} v 0) Feb 23 04:45:12 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon rm", "name": "np0005626459"} : dispatch Feb 23 04:45:12 localhost ceph-mgr[285904]: ms_deliver_dispatch: unhandled message 0x55ef9c167600 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Feb 23 04:45:12 localhost ceph-mon[287329]: mon.np0005626465@4(peon) e7 my rank is now 3 (was 4) Feb 23 04:45:12 localhost ceph-mgr[285904]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0 Feb 23 04:45:12 localhost ceph-mgr[285904]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0 Feb 23 04:45:12 localhost ceph-mgr[285904]: ms_deliver_dispatch: unhandled message 0x55efa5ac4000 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0 Feb 23 04:45:12 localhost ceph-mon[287329]: 
mon.np0005626465@3(probing) e7 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626460"} v 0) Feb 23 04:45:12 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626460"} : dispatch Feb 23 04:45:12 localhost ceph-mon[287329]: log_channel(cluster) log [INF] : mon.np0005626465 calling monitor election Feb 23 04:45:12 localhost ceph-mon[287329]: paxos.3).electionLogic(26) init, last seen epoch 26 Feb 23 04:45:12 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:45:12 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:45:12 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e7 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626461"} v 0) Feb 23 04:45:12 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626461"} : dispatch Feb 23 04:45:12 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e7 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626463"} v 0) Feb 23 04:45:12 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626463"} : dispatch Feb 23 04:45:12 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e7 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626465"} v 0) Feb 23 04:45:12 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626465"} : dispatch Feb 23 
04:45:12 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e7 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626466"} v 0) Feb 23 04:45:12 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626466"} : dispatch Feb 23 04:45:12 localhost nova_compute[280321]: 2026-02-23 09:45:12.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:45:12 localhost nova_compute[280321]: 2026-02-23 09:45:12.908 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:45:13 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:45:13 localhost nova_compute[280321]: 2026-02-23 09:45:13.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:45:18 localhost ceph-mon[287329]: paxos.3).electionLogic(27) init, last seen epoch 27, mid-election, bumping Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial 
Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626466 calling monitor election Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626461 calling monitor election Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626460 calling monitor election Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626463 calling monitor election Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626461 is new leader, mons np0005626461,np0005626460,np0005626466,np0005626463 in quorum (ranks 0,1,2,4) Feb 23 04:45:18 localhost ceph-mon[287329]: Health check failed: 1/5 mons down, quorum np0005626461,np0005626460,np0005626466,np0005626463 (MON_DOWN) Feb 23 04:45:18 localhost ceph-mon[287329]: Health detail: HEALTH_WARN 1/5 mons down, quorum np0005626461,np0005626460,np0005626466,np0005626463 Feb 23 04:45:18 localhost ceph-mon[287329]: [WRN] MON_DOWN: 1/5 mons down, quorum np0005626461,np0005626460,np0005626466,np0005626463 Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626465 (rank 3) addr [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] is down (out of quorum) Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 23 04:45:18 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth 
get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 23 04:45:18 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr services"} : dispatch Feb 23 04:45:18 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:45:18 localhost ceph-mon[287329]: Reconfiguring daemon mds.mds.np0005626463.qcthuc on np0005626463.localdomain Feb 23 04:45:18 localhost ceph-mon[287329]: Remove daemons mon.np0005626459 Feb 23 04:45:18 localhost ceph-mon[287329]: Safe to remove mon.np0005626459: new quorum should be ['np0005626461', 'np0005626460', 'np0005626466', 'np0005626465', 'np0005626463'] (from ['np0005626461', 'np0005626460', 'np0005626466', 'np0005626465', 'np0005626463']) Feb 23 04:45:18 localhost ceph-mon[287329]: Removing monitor np0005626459 from monmap... 
Feb 23 04:45:18 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon rm", "name": "np0005626459"} : dispatch Feb 23 04:45:18 localhost ceph-mon[287329]: Removing daemon mon.np0005626459 from np0005626459.localdomain -- ports [] Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626465 calling monitor election Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626460 calling monitor election Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626461 calling monitor election Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626466 calling monitor election Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626461 is new leader, mons np0005626461,np0005626460,np0005626466,np0005626465,np0005626463 in quorum (ranks 0,1,2,3,4) Feb 23 04:45:18 localhost ceph-mon[287329]: Health check cleared: MON_DOWN (was: 1/5 mons down, quorum np0005626461,np0005626460,np0005626466,np0005626463) Feb 23 04:45:18 localhost ceph-mon[287329]: Cluster is now healthy Feb 23 04:45:18 localhost ceph-mon[287329]: overall HEALTH_OK Feb 23 04:45:18 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:18 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:18 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:18 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:18 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:18 localhost 
ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:19 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054729 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:45:19 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:45:19 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:45:19 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 23 04:45:19 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:45:19 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 23 04:45:19 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 23 04:45:19 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:19 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:45:19 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626463.wtksup (monmap changed)... Feb 23 04:45:19 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626463.wtksup on np0005626463.localdomain Feb 23 04:45:19 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:19 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:19 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:45:20 localhost podman[289177]: 2026-02-23 09:45:20.005674907 +0000 UTC m=+0.079735014 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, 
config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:45:20 localhost podman[289177]: 2026-02-23 09:45:20.072051422 +0000 UTC m=+0.146111489 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 23 04:45:20 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:45:20 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 23 04:45:20 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:45:20 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:45:20 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 23 04:45:20 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:20 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:20 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:21 localhost podman[289256]: Feb 23 04:45:21 localhost podman[289256]: 2026-02-23 09:45:21.325478574 +0000 UTC m=+0.077813136 container create aeb0f4ba7d28dd44d5136493ce2438ccc4717f40d197073a36780162851d6ebb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_germain, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, RELEASE=main, name=rhceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
ceph=True, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z)
Feb 23 04:45:21 localhost systemd[1]: Started libpod-conmon-aeb0f4ba7d28dd44d5136493ce2438ccc4717f40d197073a36780162851d6ebb.scope.
Feb 23 04:45:21 localhost systemd[1]: Started libcrun container.
Feb 23 04:45:21 localhost podman[289256]: 2026-02-23 09:45:21.389575299 +0000 UTC m=+0.141909861 container init aeb0f4ba7d28dd44d5136493ce2438ccc4717f40d197073a36780162851d6ebb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_germain, GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, ceph=True, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, version=7, name=rhceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=Red Hat Ceph Storage 7)
Feb 23 04:45:21 localhost podman[289256]: 2026-02-23 09:45:21.293296351 +0000 UTC m=+0.045630963 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:45:21 localhost podman[289256]: 2026-02-23 09:45:21.395688676 +0000 UTC m=+0.148023218 container start aeb0f4ba7d28dd44d5136493ce2438ccc4717f40d197073a36780162851d6ebb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_germain, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, release=1770267347, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , RELEASE=main, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2)
Feb 23 04:45:21 localhost podman[289256]: 2026-02-23 09:45:21.395946453 +0000 UTC m=+0.148281046 container attach aeb0f4ba7d28dd44d5136493ce2438ccc4717f40d197073a36780162851d6ebb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_germain, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1770267347, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, CEPH_POINT_RELEASE=)
Feb 23 04:45:21 localhost zen_germain[289271]: 167 167
Feb 23 04:45:21 localhost systemd[1]: libpod-aeb0f4ba7d28dd44d5136493ce2438ccc4717f40d197073a36780162851d6ebb.scope: Deactivated successfully.
Feb 23 04:45:21 localhost podman[289256]: 2026-02-23 09:45:21.399281605 +0000 UTC m=+0.151616177 container died aeb0f4ba7d28dd44d5136493ce2438ccc4717f40d197073a36780162851d6ebb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_germain, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, maintainer=Guillaume Abrioux , GIT_BRANCH=main, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, name=rhceph, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 23 04:45:21 localhost ceph-mon[287329]: Reconfiguring mon.np0005626463 (monmap changed)...
Feb 23 04:45:21 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626463 on np0005626463.localdomain
Feb 23 04:45:21 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:21 localhost ceph-mon[287329]: Removed label mon from host np0005626459.localdomain
Feb 23 04:45:21 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:21 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:21 localhost ceph-mon[287329]: Reconfiguring crash.np0005626465 (monmap changed)...
Feb 23 04:45:21 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:45:21 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:45:21 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626465 on np0005626465.localdomain
Feb 23 04:45:21 localhost systemd[1]: var-lib-containers-storage-overlay-9c541a5317dbfa9c31332da7fb8aa64aeb1cd664559dc1cd54f5a90d5acc1b50-merged.mount: Deactivated successfully.
Feb 23 04:45:21 localhost podman[289276]: 2026-02-23 09:45:21.484602469 +0000 UTC m=+0.074830405 container remove aeb0f4ba7d28dd44d5136493ce2438ccc4717f40d197073a36780162851d6ebb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_germain, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., version=7, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, architecture=x86_64, com.redhat.component=rhceph-container, release=1770267347, ceph=True, io.openshift.expose-services=, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 23 04:45:21 localhost systemd[1]: libpod-conmon-aeb0f4ba7d28dd44d5136493ce2438ccc4717f40d197073a36780162851d6ebb.scope: Deactivated successfully.
Feb 23 04:45:21 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0)
Feb 23 04:45:21 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0)
Feb 23 04:45:21 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb 23 04:45:21 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 23 04:45:21 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:45:21 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:45:21 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 23 04:45:22 localhost podman[289348]:
Feb 23 04:45:22 localhost podman[289348]: 2026-02-23 09:45:22.123709848 +0000 UTC m=+0.074542026 container create 55486b2ef88c31fa098c4d5818b11ab10b3cb29c2538c5b1f322f72b67929a49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, CEPH_POINT_RELEASE=, name=rhceph, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, RELEASE=main, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, ceph=True, GIT_CLEAN=True, io.openshift.tags=rhceph ceph)
Feb 23 04:45:22 localhost systemd[1]: Started libpod-conmon-55486b2ef88c31fa098c4d5818b11ab10b3cb29c2538c5b1f322f72b67929a49.scope.
Feb 23 04:45:22 localhost systemd[1]: Started libcrun container.
Feb 23 04:45:22 localhost podman[289348]: 2026-02-23 09:45:22.093849096 +0000 UTC m=+0.044681274 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:45:22 localhost podman[289348]: 2026-02-23 09:45:22.214474466 +0000 UTC m=+0.165306634 container init 55486b2ef88c31fa098c4d5818b11ab10b3cb29c2538c5b1f322f72b67929a49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, release=1770267347, vendor=Red Hat, Inc., name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 23 04:45:22 localhost podman[289348]: 2026-02-23 09:45:22.223154951 +0000 UTC m=+0.173987169 container start 55486b2ef88c31fa098c4d5818b11ab10b3cb29c2538c5b1f322f72b67929a49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, ceph=True, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.buildah.version=1.42.2, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph)
Feb 23 04:45:22 localhost podman[289348]: 2026-02-23 09:45:22.223274035 +0000 UTC m=+0.174106233 container attach 55486b2ef88c31fa098c4d5818b11ab10b3cb29c2538c5b1f322f72b67929a49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, version=7, maintainer=Guillaume Abrioux , ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, GIT_BRANCH=main, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7)
Feb 23 04:45:22 localhost practical_hypatia[289363]: 167 167
Feb 23 04:45:22 localhost systemd[1]: libpod-55486b2ef88c31fa098c4d5818b11ab10b3cb29c2538c5b1f322f72b67929a49.scope: Deactivated successfully.
Feb 23 04:45:22 localhost podman[289348]: 2026-02-23 09:45:22.225456091 +0000 UTC m=+0.176288319 container died 55486b2ef88c31fa098c4d5818b11ab10b3cb29c2538c5b1f322f72b67929a49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, release=1770267347, com.redhat.component=rhceph-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, ceph=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , architecture=x86_64)
Feb 23 04:45:22 localhost podman[289368]: 2026-02-23 09:45:22.289330131 +0000 UTC m=+0.057182456 container remove 55486b2ef88c31fa098c4d5818b11ab10b3cb29c2538c5b1f322f72b67929a49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, release=1770267347, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 23 04:45:22 localhost systemd[1]: libpod-conmon-55486b2ef88c31fa098c4d5818b11ab10b3cb29c2538c5b1f322f72b67929a49.scope: Deactivated successfully.
Feb 23 04:45:22 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0)
Feb 23 04:45:22 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0)
Feb 23 04:45:22 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Feb 23 04:45:22 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 23 04:45:22 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:45:22 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:45:22 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:22 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:22 localhost ceph-mon[287329]: Reconfiguring osd.0 (monmap changed)...
Feb 23 04:45:22 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 23 04:45:22 localhost ceph-mon[287329]: Reconfiguring daemon osd.0 on np0005626465.localdomain
Feb 23 04:45:22 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:22 localhost ceph-mon[287329]: Removed label mgr from host np0005626459.localdomain
Feb 23 04:45:22 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:22 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:22 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 23 04:45:22 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 23 04:45:23 localhost podman[289445]:
Feb 23 04:45:23 localhost podman[289445]: 2026-02-23 09:45:23.031953406 +0000 UTC m=+0.064028537 container create 97f95a3ce2c4b5703f114102b7688b0a0ebe7b1bcfbd7ae8bf169c08b2efc5dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_mayer, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, RELEASE=main, vendor=Red Hat, Inc., GIT_CLEAN=True, version=7, name=rhceph, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 23 04:45:23 localhost systemd[1]: Started libpod-conmon-97f95a3ce2c4b5703f114102b7688b0a0ebe7b1bcfbd7ae8bf169c08b2efc5dc.scope.
Feb 23 04:45:23 localhost systemd[1]: Started libcrun container.
Feb 23 04:45:23 localhost podman[289445]: 2026-02-23 09:45:23.093377213 +0000 UTC m=+0.125452354 container init 97f95a3ce2c4b5703f114102b7688b0a0ebe7b1bcfbd7ae8bf169c08b2efc5dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_mayer, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7)
Feb 23 04:45:23 localhost podman[289445]: 2026-02-23 09:45:23.002763715 +0000 UTC m=+0.034838896 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:45:23 localhost podman[289445]: 2026-02-23 09:45:23.102974735 +0000 UTC m=+0.135049876 container start 97f95a3ce2c4b5703f114102b7688b0a0ebe7b1bcfbd7ae8bf169c08b2efc5dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_mayer, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_CLEAN=True, version=7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, RELEASE=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, architecture=x86_64, release=1770267347, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-type=git)
Feb 23 04:45:23 localhost podman[289445]: 2026-02-23 09:45:23.103309576 +0000 UTC m=+0.135384727 container attach 97f95a3ce2c4b5703f114102b7688b0a0ebe7b1bcfbd7ae8bf169c08b2efc5dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_mayer, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, RELEASE=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_CLEAN=True, ceph=True, vcs-type=git, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347)
Feb 23 04:45:23 localhost eloquent_mayer[289459]: 167 167
Feb 23 04:45:23 localhost systemd[1]: libpod-97f95a3ce2c4b5703f114102b7688b0a0ebe7b1bcfbd7ae8bf169c08b2efc5dc.scope: Deactivated successfully.
Feb 23 04:45:23 localhost podman[289445]: 2026-02-23 09:45:23.106575556 +0000 UTC m=+0.138650697 container died 97f95a3ce2c4b5703f114102b7688b0a0ebe7b1bcfbd7ae8bf169c08b2efc5dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_mayer, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, name=rhceph, RELEASE=main, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public)
Feb 23 04:45:23 localhost podman[289464]: 2026-02-23 09:45:23.196235105 +0000 UTC m=+0.079214042 container remove 97f95a3ce2c4b5703f114102b7688b0a0ebe7b1bcfbd7ae8bf169c08b2efc5dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_mayer, maintainer=Guillaume Abrioux , release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, name=rhceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vendor=Red Hat, Inc.)
Feb 23 04:45:23 localhost systemd[1]: libpod-conmon-97f95a3ce2c4b5703f114102b7688b0a0ebe7b1bcfbd7ae8bf169c08b2efc5dc.scope: Deactivated successfully.
Feb 23 04:45:23 localhost systemd[1]: var-lib-containers-storage-overlay-dfee05d6334004b1bf61c1c614379e56333d2d91efc55187cc458db4f2e05548-merged.mount: Deactivated successfully.
Feb 23 04:45:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0)
Feb 23 04:45:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0)
Feb 23 04:45:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 23 04:45:23 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 23 04:45:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:45:23 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:45:23 localhost ceph-mon[287329]: Reconfiguring osd.3 (monmap changed)...
Feb 23 04:45:23 localhost ceph-mon[287329]: Reconfiguring daemon osd.3 on np0005626465.localdomain
Feb 23 04:45:23 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:23 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:23 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:23 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 23 04:45:23 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 23 04:45:24 localhost podman[289541]:
Feb 23 04:45:24 localhost podman[289541]: 2026-02-23 09:45:24.016034828 +0000 UTC m=+0.074062984 container create 2687277296bcd654775d8db3e2300be215253f43c07774511f139a7fccfa5a71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_moore, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_BRANCH=main, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, architecture=x86_64, release=1770267347, GIT_CLEAN=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7)
Feb 23 04:45:24 localhost systemd[1]: Started libpod-conmon-2687277296bcd654775d8db3e2300be215253f43c07774511f139a7fccfa5a71.scope.
Feb 23 04:45:24 localhost systemd[1]: Started libcrun container.
Feb 23 04:45:24 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 23 04:45:24 localhost podman[289541]: 2026-02-23 09:45:24.079206928 +0000 UTC m=+0.137235024 container init 2687277296bcd654775d8db3e2300be215253f43c07774511f139a7fccfa5a71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_moore, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=1770267347, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, CEPH_POINT_RELEASE=, name=rhceph, architecture=x86_64, version=7)
Feb 23 04:45:24 localhost podman[289541]: 2026-02-23 09:45:23.986496286 +0000 UTC m=+0.044524422 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:45:24 localhost podman[289541]: 2026-02-23 09:45:24.088626025 +0000 UTC m=+0.146654121 container start 2687277296bcd654775d8db3e2300be215253f43c07774511f139a7fccfa5a71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_moore, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=rhceph, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, com.redhat.component=rhceph-container)
Feb 23 04:45:24 localhost boring_moore[289556]: 167 167
Feb 23 04:45:24 localhost systemd[1]: libpod-2687277296bcd654775d8db3e2300be215253f43c07774511f139a7fccfa5a71.scope: Deactivated successfully.
Feb 23 04:45:24 localhost podman[289541]: 2026-02-23 09:45:24.088842102 +0000 UTC m=+0.146870198 container attach 2687277296bcd654775d8db3e2300be215253f43c07774511f139a7fccfa5a71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_moore, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, name=rhceph, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, release=1770267347, GIT_CLEAN=True, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, distribution-scope=public, version=7) Feb 23 04:45:24 localhost podman[289541]: 2026-02-23 09:45:24.095128704 +0000 UTC m=+0.153156800 container died 2687277296bcd654775d8db3e2300be215253f43c07774511f139a7fccfa5a71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_moore, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , ceph=True, distribution-scope=public, io.buildah.version=1.42.2, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_BRANCH=main, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7) Feb 23 04:45:24 localhost podman[289561]: 2026-02-23 09:45:24.181140731 +0000 UTC m=+0.078103377 container remove 2687277296bcd654775d8db3e2300be215253f43c07774511f139a7fccfa5a71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_moore, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , release=1770267347, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.buildah.version=1.42.2, version=7, architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.expose-services=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, name=rhceph, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, RELEASE=main) Feb 23 04:45:24 localhost systemd[1]: libpod-conmon-2687277296bcd654775d8db3e2300be215253f43c07774511f139a7fccfa5a71.scope: Deactivated successfully. Feb 23 04:45:24 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:45:24 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:45:24 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 23 04:45:24 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:24 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 23 04:45:24 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr services"} : dispatch Feb 23 04:45:24 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:24 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:24 localhost systemd[1]: 
var-lib-containers-storage-overlay-305104eedd8fbf113ee329cce984a22446d77bc1b55d81ece1c9652fe8196b43-merged.mount: Deactivated successfully. Feb 23 04:45:24 localhost ceph-mon[287329]: Removed label _admin from host np0005626459.localdomain Feb 23 04:45:24 localhost ceph-mon[287329]: Reconfiguring mds.mds.np0005626465.drvnoy (monmap changed)... Feb 23 04:45:24 localhost ceph-mon[287329]: Reconfiguring daemon mds.mds.np0005626465.drvnoy on np0005626465.localdomain Feb 23 04:45:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:24 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:24 localhost podman[289629]: Feb 23 04:45:24 localhost podman[289629]: 2026-02-23 09:45:24.903132427 +0000 UTC m=+0.076402945 container create 5cf72cfecfc60a3d01facbe59459c5b8774cf0256d750435ede370db19c5ce4f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_wescoff, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, release=1770267347, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, architecture=x86_64, com.redhat.component=rhceph-container, version=7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, name=rhceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True) Feb 23 04:45:24 localhost systemd[1]: Started libpod-conmon-5cf72cfecfc60a3d01facbe59459c5b8774cf0256d750435ede370db19c5ce4f.scope. Feb 23 04:45:24 localhost systemd[1]: Started libcrun container. Feb 23 04:45:24 localhost podman[289629]: 2026-02-23 09:45:24.968598076 +0000 UTC m=+0.141868584 container init 5cf72cfecfc60a3d01facbe59459c5b8774cf0256d750435ede370db19c5ce4f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_wescoff, release=1770267347, version=7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, vcs-type=git, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 04:45:24 localhost podman[289629]: 2026-02-23 09:45:24.872104409 +0000 UTC m=+0.045374927 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:45:24 localhost podman[289629]: 2026-02-23 09:45:24.978055676 +0000 UTC m=+0.151326194 container start 5cf72cfecfc60a3d01facbe59459c5b8774cf0256d750435ede370db19c5ce4f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_wescoff, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_CLEAN=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.buildah.version=1.42.2, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.expose-services=, architecture=x86_64, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, io.openshift.tags=rhceph ceph, vcs-type=git) Feb 23 04:45:24 localhost podman[289629]: 2026-02-23 09:45:24.978312824 +0000 UTC m=+0.151583332 container attach 5cf72cfecfc60a3d01facbe59459c5b8774cf0256d750435ede370db19c5ce4f 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_wescoff, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container) Feb 23 04:45:24 localhost naughty_wescoff[289644]: 167 167 Feb 23 04:45:24 localhost systemd[1]: libpod-5cf72cfecfc60a3d01facbe59459c5b8774cf0256d750435ede370db19c5ce4f.scope: Deactivated successfully. 
Feb 23 04:45:24 localhost podman[289629]: 2026-02-23 09:45:24.981455789 +0000 UTC m=+0.154726307 container died 5cf72cfecfc60a3d01facbe59459c5b8774cf0256d750435ede370db19c5ce4f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_wescoff, release=1770267347, GIT_BRANCH=main, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, ceph=True, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64) Feb 23 04:45:25 localhost podman[289649]: 2026-02-23 09:45:25.074975756 +0000 UTC m=+0.081874162 container remove 5cf72cfecfc60a3d01facbe59459c5b8774cf0256d750435ede370db19c5ce4f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_wescoff, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, com.redhat.component=rhceph-container, vcs-type=git, 
CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, RELEASE=main, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, maintainer=Guillaume Abrioux ) Feb 23 04:45:25 localhost systemd[1]: libpod-conmon-5cf72cfecfc60a3d01facbe59459c5b8774cf0256d750435ede370db19c5ce4f.scope: Deactivated successfully. Feb 23 04:45:25 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:45:25 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:45:25 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 23 04:45:25 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:45:25 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 23 04:45:25 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config 
get", "who": "mon", "key": "public_network"} : dispatch Feb 23 04:45:25 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:25 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:25 localhost systemd[1]: var-lib-containers-storage-overlay-f36b4db0aa05879d27060de55879f801ec47a42bea7e08ca5495191f1c3f1da9-merged.mount: Deactivated successfully. Feb 23 04:45:25 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626465.hlpkwo (monmap changed)... Feb 23 04:45:25 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626465.hlpkwo on np0005626465.localdomain Feb 23 04:45:25 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:25 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:25 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:45:25 localhost podman[289719]: Feb 23 04:45:25 localhost podman[289719]: 2026-02-23 09:45:25.769629306 +0000 UTC m=+0.073669731 container create 377ab1226d541e24f25d03fc978ed7672c76483afcdeb6c185615c53d369c0b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_hellman, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, release=1770267347, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 23 04:45:25 localhost systemd[1]: Started libpod-conmon-377ab1226d541e24f25d03fc978ed7672c76483afcdeb6c185615c53d369c0b4.scope. Feb 23 04:45:25 localhost systemd[1]: Started libcrun container. Feb 23 04:45:25 localhost podman[289719]: 2026-02-23 09:45:25.830008791 +0000 UTC m=+0.134049186 container init 377ab1226d541e24f25d03fc978ed7672c76483afcdeb6c185615c53d369c0b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_hellman, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, RELEASE=main, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_BRANCH=main, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, com.redhat.component=rhceph-container, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7) Feb 23 04:45:25 localhost podman[289719]: 2026-02-23 09:45:25.739485966 +0000 UTC m=+0.043526391 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:45:25 localhost podman[289719]: 2026-02-23 09:45:25.840303426 +0000 UTC m=+0.144343821 container start 377ab1226d541e24f25d03fc978ed7672c76483afcdeb6c185615c53d369c0b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_hellman, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, ceph=True, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, version=7, GIT_CLEAN=True, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 23 04:45:25 localhost nifty_hellman[289734]: 167 167 Feb 23 04:45:25 localhost podman[289719]: 2026-02-23 09:45:25.840559564 +0000 UTC m=+0.144599959 container attach 
377ab1226d541e24f25d03fc978ed7672c76483afcdeb6c185615c53d369c0b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_hellman, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc., GIT_BRANCH=main, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, name=rhceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2) Feb 23 04:45:25 localhost systemd[1]: libpod-377ab1226d541e24f25d03fc978ed7672c76483afcdeb6c185615c53d369c0b4.scope: Deactivated successfully. 
Feb 23 04:45:25 localhost podman[289719]: 2026-02-23 09:45:25.843107252 +0000 UTC m=+0.147147677 container died 377ab1226d541e24f25d03fc978ed7672c76483afcdeb6c185615c53d369c0b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_hellman, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, release=1770267347, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, distribution-scope=public, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph) Feb 23 04:45:25 localhost podman[289739]: 2026-02-23 09:45:25.934966037 +0000 UTC m=+0.084390049 container remove 377ab1226d541e24f25d03fc978ed7672c76483afcdeb6c185615c53d369c0b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_hellman, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, RELEASE=main, architecture=x86_64, io.buildah.version=1.42.2) Feb 23 04:45:25 localhost systemd[1]: libpod-conmon-377ab1226d541e24f25d03fc978ed7672c76483afcdeb6c185615c53d369c0b4.scope: Deactivated successfully. Feb 23 04:45:26 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:45:26 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:45:26 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 23 04:45:26 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:26 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 
04:45:26 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:26 localhost systemd[1]: var-lib-containers-storage-overlay-4003288b4d11925c5ffcc5c29793771e78c46b202ddc2d67a770c375c813d008-merged.mount: Deactivated successfully. Feb 23 04:45:26 localhost ceph-mon[287329]: Reconfiguring mon.np0005626465 (monmap changed)... Feb 23 04:45:26 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626465 on np0005626465.localdomain Feb 23 04:45:26 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:26 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:26 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:26 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:26 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:45:26 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:45:26 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) Feb 23 04:45:26 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch 
Feb 23 04:45:26 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:26 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:27 localhost ceph-mon[287329]: Reconfiguring crash.np0005626466 (monmap changed)... Feb 23 04:45:27 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626466 on np0005626466.localdomain Feb 23 04:45:27 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:27 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:27 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 23 04:45:27 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:45:27 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:45:27 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Feb 23 04:45:27 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 23 04:45:27 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:27 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : 
dispatch Feb 23 04:45:28 localhost ceph-mon[287329]: Reconfiguring osd.1 (monmap changed)... Feb 23 04:45:28 localhost ceph-mon[287329]: Reconfiguring daemon osd.1 on np0005626466.localdomain Feb 23 04:45:28 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:28 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:28 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 23 04:45:28 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:45:28 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:45:28 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 23 04:45:28 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:45:28 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:28 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:45:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:45:29 localhost systemd[1]: tmp-crun.6AuIzi.mount: Deactivated successfully. Feb 23 04:45:29 localhost podman[289756]: 2026-02-23 09:45:29.0351451 +0000 UTC m=+0.100857662 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:45:29 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:45:29 localhost podman[289756]: 2026-02-23 09:45:29.069983934 +0000 UTC m=+0.135696506 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Feb 23 04:45:29 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:45:29 localhost systemd[1]: tmp-crun.ZEAdDm.mount: Deactivated successfully. Feb 23 04:45:29 localhost podman[289757]: 2026-02-23 09:45:29.127827812 +0000 UTC m=+0.193641737 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible) Feb 23 04:45:29 localhost podman[289757]: 2026-02-23 09:45:29.138007243 +0000 UTC m=+0.203821178 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:45:29 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:45:29 localhost ceph-mon[287329]: Reconfiguring osd.4 (monmap changed)... Feb 23 04:45:29 localhost ceph-mon[287329]: Reconfiguring daemon osd.4 on np0005626466.localdomain Feb 23 04:45:29 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:29 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:29 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:45:29 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:45:29 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:45:29 
localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:45:29 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 23 04:45:29 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:29 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 23 04:45:29 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr services"} : dispatch Feb 23 04:45:29 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:29 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:30 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:45:30 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:45:30 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 23 04:45:30 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 
172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:45:30 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 23 04:45:30 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 23 04:45:30 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:30 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:30 localhost ceph-mon[287329]: Reconfiguring mds.mds.np0005626466.vaywlp (monmap changed)... Feb 23 04:45:30 localhost ceph-mon[287329]: Reconfiguring daemon mds.mds.np0005626466.vaywlp on np0005626466.localdomain Feb 23 04:45:30 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:30 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:30 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626466.nisqfq (monmap changed)... 
Feb 23 04:45:30 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:30 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:30 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626466.nisqfq on np0005626466.localdomain Feb 23 04:45:30 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:30 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:30 localhost ceph-mon[287329]: Reconfiguring mon.np0005626466 (monmap changed)... Feb 23 04:45:30 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:45:30 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626466 on np0005626466.localdomain Feb 23 04:45:31 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:45:31 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:45:31 localhost openstack_network_exporter[243519]: ERROR 09:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:45:31 localhost openstack_network_exporter[243519]: Feb 23 04:45:31 localhost openstack_network_exporter[243519]: ERROR 09:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:45:31 localhost 
openstack_network_exporter[243519]: Feb 23 04:45:32 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:32 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:45:32 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626459.localdomain.devices.0}] v 0) Feb 23 04:45:32 localhost podman[289796]: 2026-02-23 09:45:32.994564332 +0000 UTC m=+0.067725830 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:45:32 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626459.localdomain}] v 0) Feb 23 04:45:33 localhost podman[289796]: 2026-02-23 09:45:33.008860579 +0000 UTC m=+0.082022067 container exec_died 
7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:45:33 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:33 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:33 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 04:45:33 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:45:33 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. 
Feb 23 04:45:33 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626459.localdomain.devices.0}] v 0) Feb 23 04:45:33 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626459.localdomain}] v 0) Feb 23 04:45:33 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:33 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:33 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:45:33 localhost ceph-mon[287329]: Removing np0005626459.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:45:33 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/etc/ceph/ceph.conf Feb 23 04:45:33 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/etc/ceph/ceph.conf Feb 23 04:45:33 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:45:33 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:45:33 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:45:33 localhost ceph-mon[287329]: Removing np0005626459.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:45:33 localhost ceph-mon[287329]: Removing np0005626459.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:45:33 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:33 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:33 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 
04:45:33 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:45:33 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:45:33 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:45:33 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:45:34 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:45:34 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:45:34 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:45:34 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 23 04:45:34 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0) Feb 23 04:45:34 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:45:34 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 23 04:45:34 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) 
Feb 23 04:45:34 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:45:34 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0) Feb 23 04:45:34 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:45:34 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:45:34 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:45:34 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 04:45:35 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:35 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:35 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:35 localhost ceph-mon[287329]: Added label _no_schedule to host np0005626459.localdomain Feb 23 04:45:35 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:35 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:35 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:35 localhost ceph-mon[287329]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005626459.localdomain Feb 23 04:45:35 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:35 localhost ceph-mon[287329]: 
from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:35 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:35 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:35 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:35 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:35 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:35 localhost ceph-mon[287329]: Removing daemon crash.np0005626459 from np0005626459.localdomain -- ports [] Feb 23 04:45:36 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth rm", "entity": "client.crash.np0005626459"} v 0) Feb 23 04:45:36 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth rm", "entity": "client.crash.np0005626459"} : dispatch Feb 23 04:45:36 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) Feb 23 04:45:36 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) Feb 23 04:45:36 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth rm", "entity": "client.crash.np0005626459"} : dispatch Feb 23 04:45:36 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth rm", "entity": "client.crash.np0005626459"} : dispatch Feb 23 04:45:36 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005626459"}]': finished Feb 23 04:45:36 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 
04:45:37 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 23 04:45:37 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005626459.localdomain"} v 0) Feb 23 04:45:37 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626459.localdomain"} : dispatch Feb 23 04:45:37 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.np0005626459.pmtxxl"} v 0) Feb 23 04:45:37 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth rm", "entity": "mgr.np0005626459.pmtxxl"} : dispatch Feb 23 04:45:37 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) Feb 23 04:45:37 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) Feb 23 04:45:37 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 04:45:37 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 04:45:37 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 04:45:37 localhost ceph-mon[287329]: Removing key for client.crash.np0005626459 Feb 23 04:45:37 localhost ceph-mon[287329]: from='mgr.14193 ' 
entity='mgr.np0005626461.lrfquh' Feb 23 04:45:37 localhost ceph-mon[287329]: Removing daemon mgr.np0005626459.pmtxxl from np0005626459.localdomain -- ports [9283, 8765] Feb 23 04:45:37 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:37 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626459.localdomain"} : dispatch Feb 23 04:45:37 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626459.localdomain"} : dispatch Feb 23 04:45:37 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005626459.localdomain"}]': finished Feb 23 04:45:37 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth rm", "entity": "mgr.np0005626459.pmtxxl"} : dispatch Feb 23 04:45:37 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth rm", "entity": "mgr.np0005626459.pmtxxl"} : dispatch Feb 23 04:45:37 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005626459.pmtxxl"}]': finished Feb 23 04:45:37 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:37 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:37 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:37 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:37 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config 
generate-minimal-conf"} : dispatch Feb 23 04:45:37 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 04:45:37 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:45:37 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 04:45:37 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 04:45:37 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 04:45:38 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626460", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 23 04:45:38 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626460", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:38 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:38 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:38 localhost ceph-mon[287329]: Removed host np0005626459.localdomain Feb 23 04:45:38 localhost ceph-mon[287329]: Removing key for 
mgr.np0005626459.pmtxxl Feb 23 04:45:38 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:45:38 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:38 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626460", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:38 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626460", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:39 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:45:39 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0) Feb 23 04:45:39 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0) Feb 23 04:45:39 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 23 04:45:39 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:45:39 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 23 04:45:39 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' 
entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 23 04:45:39 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:39 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:39 localhost ceph-mon[287329]: Reconfiguring crash.np0005626460 (monmap changed)... Feb 23 04:45:39 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626460 on np0005626460.localdomain Feb 23 04:45:39 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:39 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:39 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:45:40 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0) Feb 23 04:45:40 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0) Feb 23 04:45:40 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 23 04:45:40 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:40 localhost ceph-mon[287329]: 
mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 23 04:45:40 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr services"} : dispatch Feb 23 04:45:40 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:40 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:40 localhost ceph-mon[287329]: Reconfiguring mon.np0005626460 (monmap changed)... Feb 23 04:45:40 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626460 on np0005626460.localdomain Feb 23 04:45:40 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:40 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:40 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:40 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:41 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0) Feb 23 04:45:41 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0) Feb 23 04:45:41 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 
handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 23 04:45:41 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:45:41 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 23 04:45:41 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 23 04:45:41 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:41 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:41 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626460.fyrady (monmap changed)... 
Feb 23 04:45:41 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626460.fyrady on np0005626460.localdomain Feb 23 04:45:41 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:41 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:41 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:45:41 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:45:42 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:45:42 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 23 04:45:42 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:42 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 23 04:45:42 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr services"} : dispatch Feb 23 04:45:42 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:42 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' 
entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:42 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 04:45:42 localhost podman[241086]: time="2026-02-23T09:45:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:45:42 localhost podman[241086]: @ - - [23/Feb/2026:09:45:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 04:45:42 localhost podman[241086]: @ - - [23/Feb/2026:09:45:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17780 "" "Go-http-client/1.1" Feb 23 04:45:42 localhost ceph-mon[287329]: Reconfiguring mon.np0005626461 (monmap changed)... Feb 23 04:45:42 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626461 on np0005626461.localdomain Feb 23 04:45:42 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:42 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:42 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626461.lrfquh (monmap changed)... 
Feb 23 04:45:42 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:42 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:45:42 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626461.lrfquh on np0005626461.localdomain Feb 23 04:45:42 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:45:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:45:43 localhost podman[290177]: 2026-02-23 09:45:42.999577567 +0000 UTC m=+0.067356578 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, release=1770267347, vendor=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 23 
04:45:43 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:45:43 localhost podman[290176]: 2026-02-23 09:45:43.062852459 +0000 UTC m=+0.129947760 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:45:43 localhost podman[290176]: 2026-02-23 09:45:43.073881916 +0000 UTC m=+0.140977247 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:45:43 localhost podman[290177]: 2026-02-23 09:45:43.088160902 +0000 UTC m=+0.155939903 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, 
com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, version=9.7, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7) Feb 23 04:45:43 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:45:43 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:45:43 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:45:43 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 23 04:45:43 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:43 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:43 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:44 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:45:44 localhost ceph-mon[287329]: from='mgr.14193 
' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:44 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:44 localhost ceph-mon[287329]: Reconfiguring crash.np0005626461 (monmap changed)... Feb 23 04:45:44 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:44 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:44 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626461 on np0005626461.localdomain Feb 23 04:45:44 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:45:44 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:45:44 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 23 04:45:44 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:45:44 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:44 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 
172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:45 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:45:45 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:45:45 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Feb 23 04:45:45 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) Feb 23 04:45:45 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 23 04:45:45 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:45:45 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:45:45 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:45 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:45:45 localhost ceph-mon[287329]: Reconfiguring crash.np0005626463 (monmap changed)... 
Feb 23 04:45:45 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:45:45 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:45:45 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626463 on np0005626463.localdomain
Feb 23 04:45:45 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:45 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:45 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 23 04:45:45 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:46 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0)
Feb 23 04:45:46 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0)
Feb 23 04:45:46 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Feb 23 04:45:46 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 23 04:45:46 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:45:46 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:45:46 localhost ceph-mon[287329]: Saving service mon spec with placement label:mon
Feb 23 04:45:46 localhost ceph-mon[287329]: Reconfiguring osd.2 (monmap changed)...
Feb 23 04:45:46 localhost ceph-mon[287329]: Reconfiguring daemon osd.2 on np0005626463.localdomain
Feb 23 04:45:46 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:46 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:46 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 23 04:45:47 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0)
Feb 23 04:45:47 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0)
Feb 23 04:45:47 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 23 04:45:47 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 23 04:45:47 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:45:47 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:45:47 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "quorum_status"} v 0)
Feb 23 04:45:47 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "quorum_status"} : dispatch
Feb 23 04:45:47 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e7 handle_command mon_command({"prefix": "mon rm", "name": "np0005626463"} v 0)
Feb 23 04:45:47 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon rm", "name": "np0005626463"} : dispatch
Feb 23 04:45:47 localhost ceph-mgr[285904]: ms_deliver_dispatch: unhandled message 0x55efa5ade000 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Feb 23 04:45:47 localhost ceph-mon[287329]: mon.np0005626465@3(probing) e8 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626460"} v 0)
Feb 23 04:45:47 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626460"} : dispatch
Feb 23 04:45:47 localhost ceph-mon[287329]: mon.np0005626465@3(probing) e8 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626461"} v 0)
Feb 23 04:45:47 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626461"} : dispatch
Feb 23 04:45:47 localhost ceph-mon[287329]: mon.np0005626465@3(probing) e8 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626465"} v 0)
Feb 23 04:45:47 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626465"} : dispatch
Feb 23 04:45:47 localhost ceph-mon[287329]: mon.np0005626465@3(probing) e8 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626466"} v 0)
Feb 23 04:45:47 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626466"} : dispatch
Feb 23 04:45:47 localhost ceph-mon[287329]: log_channel(cluster) log [INF] : mon.np0005626465 calling monitor election
Feb 23 04:45:47 localhost ceph-mon[287329]: paxos.3).electionLogic(32) init, last seen epoch 32
Feb 23 04:45:47 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 23 04:45:47 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 23 04:45:48 localhost sshd[290219]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:45:48 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0)
Feb 23 04:45:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:45:48.302 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 23 04:45:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:45:48.302 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 23 04:45:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:45:48.302 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 23 04:45:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.
Feb 23 04:45:50 localhost systemd[1]: tmp-crun.WTDZTp.mount: Deactivated successfully.
Feb 23 04:45:50 localhost podman[290221]: 2026-02-23 09:45:50.997644371 +0000 UTC m=+0.074791956 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Feb 23 04:45:51 localhost podman[290221]: 2026-02-23 09:45:51.066001599 +0000 UTC m=+0.143149194 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 04:45:51 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully.
Feb 23 04:45:52 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e8 handle_timecheck drop unexpected msg
Feb 23 04:45:52 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 23 04:45:55 localhost ceph-mds[284726]: mds.beacon.mds.np0005626465.drvnoy missed beacon ack from the monitors
Feb 23 04:45:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 23 04:45:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0)
Feb 23 04:45:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 23 04:45:56 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:45:56 localhost ceph-mon[287329]: Reconfiguring mds.mds.np0005626463.qcthuc (monmap changed)...
Feb 23 04:45:56 localhost ceph-mon[287329]: Reconfiguring daemon mds.mds.np0005626463.qcthuc on np0005626463.localdomain
Feb 23 04:45:56 localhost ceph-mon[287329]: Remove daemons mon.np0005626463
Feb 23 04:45:56 localhost ceph-mon[287329]: Safe to remove mon.np0005626463: new quorum should be ['np0005626461', 'np0005626460', 'np0005626466', 'np0005626465'] (from ['np0005626461', 'np0005626460', 'np0005626466', 'np0005626465'])
Feb 23 04:45:56 localhost ceph-mon[287329]: Removing monitor np0005626463 from monmap...
Feb 23 04:45:56 localhost ceph-mon[287329]: Removing daemon mon.np0005626463 from np0005626463.localdomain -- ports []
Feb 23 04:45:56 localhost ceph-mon[287329]: mon.np0005626465 calling monitor election
Feb 23 04:45:56 localhost ceph-mon[287329]: mon.np0005626461 calling monitor election
Feb 23 04:45:56 localhost ceph-mon[287329]: mon.np0005626466 calling monitor election
Feb 23 04:45:56 localhost ceph-mon[287329]: mon.np0005626460 calling monitor election
Feb 23 04:45:56 localhost ceph-mon[287329]: mon.np0005626461 is new leader, mons np0005626461,np0005626466,np0005626465 in quorum (ranks 0,2,3)
Feb 23 04:45:56 localhost ceph-mon[287329]: overall HEALTH_OK
Feb 23 04:45:56 localhost ceph-mon[287329]: mon.np0005626461 calling monitor election
Feb 23 04:45:56 localhost ceph-mon[287329]: mon.np0005626461 is new leader, mons np0005626461,np0005626460,np0005626466,np0005626465 in quorum (ranks 0,1,2,3)
Feb 23 04:45:56 localhost ceph-mon[287329]: overall HEALTH_OK
Feb 23 04:45:56 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:56 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 23 04:45:56 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr services"} : dispatch
Feb 23 04:45:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:45:56 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:45:57 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0)
Feb 23 04:45:57 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0)
Feb 23 04:45:57 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 23 04:45:57 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:45:57 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:45:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:45:57 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626463.wtksup (monmap changed)...
Feb 23 04:45:57 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:45:57 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:45:57 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626463.wtksup on np0005626463.localdomain
Feb 23 04:45:57 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:57 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:57 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:45:57 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:45:58 localhost podman[290299]:
Feb 23 04:45:58 localhost podman[290299]: 2026-02-23 09:45:58.227326084 +0000 UTC m=+0.076840769 container create 0b8cf05f051bd7284d1c980439db91121ec250572907a5051eb64a73a93e34bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_germain, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, GIT_BRANCH=main, ceph=True, vcs-type=git)
Feb 23 04:45:58 localhost systemd[1]: Started libpod-conmon-0b8cf05f051bd7284d1c980439db91121ec250572907a5051eb64a73a93e34bc.scope.
Feb 23 04:45:58 localhost systemd[1]: Started libcrun container.
Feb 23 04:45:58 localhost podman[290299]: 2026-02-23 09:45:58.197778141 +0000 UTC m=+0.047292836 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:45:58 localhost podman[290299]: 2026-02-23 09:45:58.299714184 +0000 UTC m=+0.149228869 container init 0b8cf05f051bd7284d1c980439db91121ec250572907a5051eb64a73a93e34bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_germain, RELEASE=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public, io.buildah.version=1.42.2, version=7, vcs-type=git, architecture=x86_64, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 23 04:45:58 localhost podman[290299]: 2026-02-23 09:45:58.30969806 +0000 UTC m=+0.159212745 container start 0b8cf05f051bd7284d1c980439db91121ec250572907a5051eb64a73a93e34bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_germain, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_CLEAN=True, version=7, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., RELEASE=main, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=)
Feb 23 04:45:58 localhost podman[290299]: 2026-02-23 09:45:58.309995179 +0000 UTC m=+0.159509914 container attach 0b8cf05f051bd7284d1c980439db91121ec250572907a5051eb64a73a93e34bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_germain, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, vcs-type=git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 23 04:45:58 localhost elastic_germain[290315]: 167 167
Feb 23 04:45:58 localhost systemd[1]: libpod-0b8cf05f051bd7284d1c980439db91121ec250572907a5051eb64a73a93e34bc.scope: Deactivated successfully.
Feb 23 04:45:58 localhost podman[290299]: 2026-02-23 09:45:58.313457794 +0000 UTC m=+0.162972489 container died 0b8cf05f051bd7284d1c980439db91121ec250572907a5051eb64a73a93e34bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_germain, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, RELEASE=main, name=rhceph, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z)
Feb 23 04:45:58 localhost podman[290320]: 2026-02-23 09:45:58.409994944 +0000 UTC m=+0.087538685 container remove 0b8cf05f051bd7284d1c980439db91121ec250572907a5051eb64a73a93e34bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_germain, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, version=7, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, release=1770267347, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.component=rhceph-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=)
Feb 23 04:45:58 localhost systemd[1]: libpod-conmon-0b8cf05f051bd7284d1c980439db91121ec250572907a5051eb64a73a93e34bc.scope: Deactivated successfully.
Feb 23 04:45:58 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0)
Feb 23 04:45:58 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0)
Feb 23 04:45:58 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb 23 04:45:58 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 23 04:45:58 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:45:58 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:45:58 localhost ceph-mon[287329]: Reconfiguring crash.np0005626465 (monmap changed)...
Feb 23 04:45:58 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626465 on np0005626465.localdomain
Feb 23 04:45:58 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:58 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:45:58 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 23 04:45:59 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 23 04:45:59 localhost podman[290389]:
Feb 23 04:45:59 localhost podman[290389]: 2026-02-23 09:45:59.131539195 +0000 UTC m=+0.094983823 container create f15b4d8a590c58696c1d633fa1c27f210337bcd81c47d81075fbd9fb3508db13 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_hermann, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, version=7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph)
Feb 23 04:45:59 localhost systemd[1]: Started libpod-conmon-f15b4d8a590c58696c1d633fa1c27f210337bcd81c47d81075fbd9fb3508db13.scope.
Feb 23 04:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.
Feb 23 04:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.
Feb 23 04:45:59 localhost systemd[1]: Started libcrun container.
Feb 23 04:45:59 localhost podman[290389]: 2026-02-23 09:45:59.081283239 +0000 UTC m=+0.044727927 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:45:59 localhost podman[290389]: 2026-02-23 09:45:59.187719331 +0000 UTC m=+0.151163989 container init f15b4d8a590c58696c1d633fa1c27f210337bcd81c47d81075fbd9fb3508db13 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_hermann, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_BRANCH=main, ceph=True, io.openshift.expose-services=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, release=1770267347)
Feb 23 04:45:59 localhost podman[290389]: 2026-02-23 09:45:59.197402197 +0000 UTC m=+0.160846845 container start f15b4d8a590c58696c1d633fa1c27f210337bcd81c47d81075fbd9fb3508db13 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_hermann, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 23 04:45:59 localhost interesting_hermann[290404]: 167 167
Feb 23 04:45:59 localhost podman[290389]: 2026-02-23 09:45:59.197714477 +0000 UTC m=+0.161159165 container attach f15b4d8a590c58696c1d633fa1c27f210337bcd81c47d81075fbd9fb3508db13 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_hermann, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, version=7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, maintainer=Guillaume Abrioux )
Feb 23 04:45:59 localhost systemd[1]: libpod-f15b4d8a590c58696c1d633fa1c27f210337bcd81c47d81075fbd9fb3508db13.scope: Deactivated successfully.
Feb 23 04:45:59 localhost podman[290389]: 2026-02-23 09:45:59.201502432 +0000 UTC m=+0.164947110 container died f15b4d8a590c58696c1d633fa1c27f210337bcd81c47d81075fbd9fb3508db13 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_hermann, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, version=7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, vendor=Red Hat, Inc., name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 23 04:45:59 localhost systemd[1]: var-lib-containers-storage-overlay-1f24622da0e20918bf8912d40da0445b595d3a3347fa8b7798a39fe7beeb427c-merged.mount: Deactivated successfully. 
Feb 23 04:45:59 localhost podman[290407]: 2026-02-23 09:45:59.274806981 +0000 UTC m=+0.095694334 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_managed=true, 
org.label-schema.schema-version=1.0) Feb 23 04:45:59 localhost podman[290407]: 2026-02-23 09:45:59.313775852 +0000 UTC m=+0.134663215 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, 
container_name=ceilometer_agent_compute) Feb 23 04:45:59 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:45:59 localhost podman[290405]: 2026-02-23 09:45:59.332603417 +0000 UTC m=+0.153094198 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260216, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 23 04:45:59 localhost podman[290429]: 2026-02-23 09:45:59.395505609 +0000 UTC m=+0.185033293 container remove f15b4d8a590c58696c1d633fa1c27f210337bcd81c47d81075fbd9fb3508db13 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_hermann, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, release=1770267347, vendor=Red Hat, Inc., ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, vcs-type=git, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_BRANCH=main, name=rhceph, architecture=x86_64, com.redhat.component=rhceph-container) Feb 23 04:45:59 localhost systemd[1]: libpod-conmon-f15b4d8a590c58696c1d633fa1c27f210337bcd81c47d81075fbd9fb3508db13.scope: Deactivated successfully. 
Feb 23 04:45:59 localhost podman[290405]: 2026-02-23 09:45:59.417378877 +0000 UTC m=+0.237869668 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:45:59 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:45:59 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:46:00 localhost systemd[1]: var-lib-containers-storage-overlay-c52b8472def875769b3e3e2c8df0fd20bf7b3d401ce30e2b8ac0bf1890c45a1a-merged.mount: Deactivated successfully. Feb 23 04:46:00 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:46:00 localhost ceph-mon[287329]: Reconfiguring osd.0 (monmap changed)... Feb 23 04:46:00 localhost ceph-mon[287329]: Reconfiguring daemon osd.0 on np0005626465.localdomain Feb 23 04:46:00 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0) Feb 23 04:46:00 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 23 04:46:00 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:00 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:01 localhost podman[290521]: Feb 23 04:46:01 localhost podman[290521]: 2026-02-23 09:46:01.518857123 +0000 UTC m=+0.077720095 container create 15fd39e1a0cb61fb1e3050a0363d572a298f7950ecbdc76ebd4507d594853e4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_jemison, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.42.2, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 23 04:46:01 localhost systemd[1]: Started libpod-conmon-15fd39e1a0cb61fb1e3050a0363d572a298f7950ecbdc76ebd4507d594853e4d.scope. Feb 23 04:46:01 localhost systemd[1]: Started libcrun container. 
Feb 23 04:46:01 localhost podman[290521]: 2026-02-23 09:46:01.486218636 +0000 UTC m=+0.045081638 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:46:01 localhost podman[290521]: 2026-02-23 09:46:01.58589494 +0000 UTC m=+0.144757912 container init 15fd39e1a0cb61fb1e3050a0363d572a298f7950ecbdc76ebd4507d594853e4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_jemison, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, vcs-type=git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph) Feb 23 04:46:01 localhost podman[290521]: 2026-02-23 09:46:01.617697483 +0000 UTC m=+0.176560435 container start 15fd39e1a0cb61fb1e3050a0363d572a298f7950ecbdc76ebd4507d594853e4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_jemison, release=1770267347, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, ceph=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, GIT_BRANCH=main, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, io.buildah.version=1.42.2) Feb 23 04:46:01 localhost podman[290521]: 2026-02-23 09:46:01.617920949 +0000 UTC m=+0.176783891 container attach 15fd39e1a0cb61fb1e3050a0363d572a298f7950ecbdc76ebd4507d594853e4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_jemison, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , release=1770267347, vcs-type=git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 
in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph) Feb 23 04:46:01 localhost practical_jemison[290536]: 167 167 Feb 23 04:46:01 localhost systemd[1]: libpod-15fd39e1a0cb61fb1e3050a0363d572a298f7950ecbdc76ebd4507d594853e4d.scope: Deactivated successfully. Feb 23 04:46:01 localhost podman[290521]: 2026-02-23 09:46:01.62351567 +0000 UTC m=+0.182378642 container died 15fd39e1a0cb61fb1e3050a0363d572a298f7950ecbdc76ebd4507d594853e4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_jemison, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.buildah.version=1.42.2, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, release=1770267347, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, RELEASE=main, architecture=x86_64, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z) Feb 23 
04:46:01 localhost podman[290541]: 2026-02-23 09:46:01.714629123 +0000 UTC m=+0.082219882 container remove 15fd39e1a0cb61fb1e3050a0363d572a298f7950ecbdc76ebd4507d594853e4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_jemison, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, release=1770267347, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, version=7, RELEASE=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:46:01 localhost systemd[1]: libpod-conmon-15fd39e1a0cb61fb1e3050a0363d572a298f7950ecbdc76ebd4507d594853e4d.scope: Deactivated successfully. Feb 23 04:46:01 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:01 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:01 localhost ceph-mon[287329]: Reconfiguring osd.3 (monmap changed)... 
Feb 23 04:46:01 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 23 04:46:01 localhost ceph-mon[287329]: Reconfiguring daemon osd.3 on np0005626465.localdomain Feb 23 04:46:01 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:46:01 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:46:01 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 23 04:46:01 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:46:01 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:01 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:01 localhost openstack_network_exporter[243519]: ERROR 09:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:46:01 localhost openstack_network_exporter[243519]: Feb 23 04:46:01 localhost openstack_network_exporter[243519]: ERROR 09:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:46:01 localhost 
openstack_network_exporter[243519]: Feb 23 04:46:02 localhost systemd[1]: tmp-crun.2kLZOK.mount: Deactivated successfully. Feb 23 04:46:02 localhost systemd[1]: var-lib-containers-storage-overlay-7c2eef4c7edf07cfe7c3db89707d20168fe199959af7ee5f0b0d9a69477eb0c0-merged.mount: Deactivated successfully. Feb 23 04:46:02 localhost podman[290619]: Feb 23 04:46:02 localhost podman[290619]: 2026-02-23 09:46:02.589314904 +0000 UTC m=+0.081279394 container create f11e2aec812d8b4d9d9586c62d8e2fd1d4d3620c1031f86dc8fe8149c1eec8f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_bassi, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=rhceph, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, vcs-type=git, RELEASE=main, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7) Feb 23 04:46:02 localhost systemd[1]: Started libpod-conmon-f11e2aec812d8b4d9d9586c62d8e2fd1d4d3620c1031f86dc8fe8149c1eec8f0.scope. 
Feb 23 04:46:02 localhost podman[290619]: 2026-02-23 09:46:02.556909273 +0000 UTC m=+0.048873753 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:46:02 localhost systemd[1]: Started libcrun container. Feb 23 04:46:02 localhost podman[290619]: 2026-02-23 09:46:02.670349929 +0000 UTC m=+0.162314409 container init f11e2aec812d8b4d9d9586c62d8e2fd1d4d3620c1031f86dc8fe8149c1eec8f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_bassi, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_BRANCH=main, distribution-scope=public, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1770267347, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, CEPH_POINT_RELEASE=) Feb 23 04:46:02 localhost podman[290619]: 2026-02-23 09:46:02.683758748 +0000 UTC m=+0.175723228 container start f11e2aec812d8b4d9d9586c62d8e2fd1d4d3620c1031f86dc8fe8149c1eec8f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_bassi, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-type=git, description=Red Hat Ceph Storage 7) Feb 23 04:46:02 localhost podman[290619]: 2026-02-23 09:46:02.684215563 +0000 UTC m=+0.176180093 container attach f11e2aec812d8b4d9d9586c62d8e2fd1d4d3620c1031f86dc8fe8149c1eec8f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_bassi, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , 
com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vendor=Red Hat, Inc., name=rhceph, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.tags=rhceph ceph) Feb 23 04:46:02 localhost zealous_bassi[290635]: 167 167 Feb 23 04:46:02 localhost systemd[1]: libpod-f11e2aec812d8b4d9d9586c62d8e2fd1d4d3620c1031f86dc8fe8149c1eec8f0.scope: Deactivated successfully. Feb 23 04:46:02 localhost podman[290619]: 2026-02-23 09:46:02.688165363 +0000 UTC m=+0.180129873 container died f11e2aec812d8b4d9d9586c62d8e2fd1d4d3620c1031f86dc8fe8149c1eec8f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_bassi, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, architecture=x86_64, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, GIT_CLEAN=True, com.redhat.component=rhceph-container, 
distribution-scope=public, name=rhceph) Feb 23 04:46:02 localhost podman[290640]: 2026-02-23 09:46:02.788717045 +0000 UTC m=+0.091362373 container remove f11e2aec812d8b4d9d9586c62d8e2fd1d4d3620c1031f86dc8fe8149c1eec8f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_bassi, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, release=1770267347, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.openshift.tags=rhceph ceph, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph) Feb 23 04:46:02 localhost systemd[1]: libpod-conmon-f11e2aec812d8b4d9d9586c62d8e2fd1d4d3620c1031f86dc8fe8149c1eec8f0.scope: Deactivated successfully. 
Feb 23 04:46:02 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:46:02 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:46:02 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 23 04:46:02 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:46:02 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 23 04:46:02 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr services"} : dispatch Feb 23 04:46:02 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:02 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:02 localhost ceph-mon[287329]: Reconfiguring mds.mds.np0005626465.drvnoy (monmap changed)... 
Feb 23 04:46:02 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:46:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:46:02 localhost ceph-mon[287329]: Reconfiguring daemon mds.mds.np0005626465.drvnoy on np0005626465.localdomain Feb 23 04:46:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:02 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:46:02 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:46:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 04:46:03 localhost podman[290693]: 2026-02-23 09:46:03.214071779 +0000 UTC m=+0.093090716 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 04:46:03 localhost podman[290693]: 2026-02-23 09:46:03.227831779 +0000 UTC m=+0.106850746 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:46:03 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:46:03 localhost systemd[1]: var-lib-containers-storage-overlay-dddf4cf437870e9e73747409c6aba4a95fd3cc4a57ef96230d1cb82a9f997ce9-merged.mount: Deactivated successfully. Feb 23 04:46:03 localhost podman[290735]: Feb 23 04:46:03 localhost podman[290735]: 2026-02-23 09:46:03.568487195 +0000 UTC m=+0.077591762 container create 040b87c3b3d16bc5af0865d33711c504388c2a66031cfb1f78d51680f86b2b33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hypatia, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , ceph=True, release=1770267347, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, version=7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2) Feb 23 04:46:03 localhost systemd[1]: Started 
libpod-conmon-040b87c3b3d16bc5af0865d33711c504388c2a66031cfb1f78d51680f86b2b33.scope. Feb 23 04:46:03 localhost systemd[1]: Started libcrun container. Feb 23 04:46:03 localhost podman[290735]: 2026-02-23 09:46:03.539254432 +0000 UTC m=+0.048358939 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:46:03 localhost podman[290735]: 2026-02-23 09:46:03.643368182 +0000 UTC m=+0.152472689 container init 040b87c3b3d16bc5af0865d33711c504388c2a66031cfb1f78d51680f86b2b33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hypatia, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, CEPH_POINT_RELEASE=, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux , architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 23 04:46:03 localhost podman[290735]: 2026-02-23 09:46:03.653280995 +0000 UTC m=+0.162385502 container start 040b87c3b3d16bc5af0865d33711c504388c2a66031cfb1f78d51680f86b2b33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hypatia, release=1770267347, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, io.openshift.expose-services=, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z) Feb 23 04:46:03 localhost podman[290735]: 2026-02-23 09:46:03.653552234 +0000 UTC m=+0.162656751 container attach 040b87c3b3d16bc5af0865d33711c504388c2a66031cfb1f78d51680f86b2b33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hypatia, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, com.redhat.component=rhceph-container, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True) Feb 23 04:46:03 localhost goofy_hypatia[290751]: 167 167 Feb 23 04:46:03 localhost systemd[1]: libpod-040b87c3b3d16bc5af0865d33711c504388c2a66031cfb1f78d51680f86b2b33.scope: Deactivated successfully. Feb 23 04:46:03 localhost podman[290735]: 2026-02-23 09:46:03.657875115 +0000 UTC m=+0.166979642 container died 040b87c3b3d16bc5af0865d33711c504388c2a66031cfb1f78d51680f86b2b33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hypatia, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, release=1770267347, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, name=rhceph, version=7, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7) Feb 23 04:46:03 localhost podman[290756]: 2026-02-23 09:46:03.75556553 +0000 UTC m=+0.082736109 container remove 040b87c3b3d16bc5af0865d33711c504388c2a66031cfb1f78d51680f86b2b33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hypatia, version=7, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, architecture=x86_64, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=) Feb 23 04:46:03 localhost systemd[1]: libpod-conmon-040b87c3b3d16bc5af0865d33711c504388c2a66031cfb1f78d51680f86b2b33.scope: Deactivated successfully. 
Feb 23 04:46:03 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:46:03 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:46:03 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 23 04:46:03 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:46:03 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:03 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:04 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:46:04 localhost systemd[1]: var-lib-containers-storage-overlay-30eb8bb61564bd14b09075fa8d613d2c00fbc38e1db3a5a6ab021e581d3f0362-merged.mount: Deactivated successfully. 
Feb 23 04:46:04 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:46:04 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:46:04 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) Feb 23 04:46:04 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 23 04:46:04 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:04 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:04 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626465.hlpkwo (monmap changed)... 
Feb 23 04:46:04 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626465.hlpkwo on np0005626465.localdomain Feb 23 04:46:04 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:04 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:04 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:46:04 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:46:04 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:04 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:05 localhost ceph-mon[287329]: Reconfiguring crash.np0005626466 (monmap changed)... Feb 23 04:46:05 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626466 on np0005626466.localdomain Feb 23 04:46:05 localhost ceph-mon[287329]: Reconfiguring osd.1 (monmap changed)... 
Feb 23 04:46:05 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 23 04:46:05 localhost ceph-mon[287329]: Reconfiguring daemon osd.1 on np0005626466.localdomain Feb 23 04:46:05 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:46:05 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:46:05 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Feb 23 04:46:05 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 23 04:46:05 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:05 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:06 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:06 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:06 localhost ceph-mon[287329]: Reconfiguring osd.4 (monmap changed)... 
Feb 23 04:46:06 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 23 04:46:06 localhost ceph-mon[287329]: Reconfiguring daemon osd.4 on np0005626466.localdomain Feb 23 04:46:06 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:46:07 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:46:07 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 23 04:46:07 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:46:07 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:07 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:07 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:46:07 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:46:07 localhost ceph-mon[287329]: 
mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 23 04:46:07 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:46:07 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 23 04:46:07 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr services"} : dispatch Feb 23 04:46:07 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:07 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:08 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:08 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:08 localhost ceph-mon[287329]: Reconfiguring mds.mds.np0005626466.vaywlp (monmap changed)... 
Feb 23 04:46:08 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:46:08 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:46:08 localhost ceph-mon[287329]: Reconfiguring daemon mds.mds.np0005626466.vaywlp on np0005626466.localdomain Feb 23 04:46:08 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:08 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:08 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:46:08 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:46:08 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Feb 23 04:46:08 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 23 04:46:08 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:46:08 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command 
mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:08 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:08 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:46:08 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:46:09 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:46:09 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626466.nisqfq (monmap changed)... Feb 23 04:46:09 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626466.nisqfq on np0005626466.localdomain Feb 23 04:46:09 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:09 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:46:09 localhost ceph-mon[287329]: Deploying daemon mon.np0005626463 on np0005626463.localdomain Feb 23 04:46:09 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:09 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:10 localhost nova_compute[280321]: 2026-02-23 09:46:10.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:46:10 localhost nova_compute[280321]: 2026-02-23 09:46:10.892 
280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:46:11 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:46:11 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints Feb 23 04:46:11 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints Feb 23 04:46:11 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:46:11 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:46:11 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:46:11 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints Feb 23 04:46:11 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e8 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626463"} v 0) Feb 23 04:46:11 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626463"} : dispatch Feb 23 04:46:11 localhost ceph-mgr[285904]: ms_deliver_dispatch: unhandled message 0x55efa5ade160 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0 Feb 23 04:46:11 localhost ceph-mon[287329]: 
mon.np0005626465@3(probing) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626460"} v 0)
Feb 23 04:46:11 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626460"} : dispatch
Feb 23 04:46:11 localhost ceph-mon[287329]: log_channel(cluster) log [INF] : mon.np0005626465 calling monitor election
Feb 23 04:46:11 localhost ceph-mon[287329]: paxos.3).electionLogic(38) init, last seen epoch 38
Feb 23 04:46:11 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 23 04:46:11 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 23 04:46:11 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626461"} v 0)
Feb 23 04:46:11 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626461"} : dispatch
Feb 23 04:46:11 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626463"} v 0)
Feb 23 04:46:11 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626463"} : dispatch
Feb 23 04:46:11 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626465"} v 0)
Feb 23 04:46:11 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626465"} : dispatch
Feb 23 04:46:11 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626466"} v 0)
Feb 23 04:46:11 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626466"} : dispatch
Feb 23 04:46:11 localhost nova_compute[280321]: 2026-02-23 09:46:11.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 04:46:11 localhost nova_compute[280321]: 2026-02-23 09:46:11.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 23 04:46:11 localhost nova_compute[280321]: 2026-02-23 09:46:11.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 23 04:46:11 localhost nova_compute[280321]: 2026-02-23 09:46:11.919 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 23 04:46:11 localhost nova_compute[280321]: 2026-02-23 09:46:11.920 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 04:46:11 localhost nova_compute[280321]: 2026-02-23 09:46:11.938 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 23 04:46:11 localhost nova_compute[280321]: 2026-02-23 09:46:11.939 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 23 04:46:11 localhost nova_compute[280321]: 2026-02-23 09:46:11.939 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 23 04:46:11 localhost nova_compute[280321]: 2026-02-23 09:46:11.939 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 23 04:46:11 localhost nova_compute[280321]: 2026-02-23 09:46:11.940 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 23 04:46:12 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:46:12 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:46:12 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626463"} v 0)
Feb 23 04:46:12 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626463"} : dispatch
Feb 23 04:46:12 localhost podman[241086]: time="2026-02-23T09:46:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 23 04:46:12 localhost podman[241086]: @ - - [23/Feb/2026:09:46:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1"
Feb 23 04:46:12 localhost podman[241086]: @ - - [23/Feb/2026:09:46:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17776 "" "Go-http-client/1.1"
Feb 23 04:46:13 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626463"} v 0)
Feb 23 04:46:13 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626463"} : dispatch
Feb 23 04:46:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.
Feb 23 04:46:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.
Feb 23 04:46:14 localhost podman[290852]: 2026-02-23 09:46:14.018645887 +0000 UTC m=+0.088850746 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors )
Feb 23 04:46:14 localhost podman[290852]: 2026-02-23 09:46:14.057984279 +0000 UTC m=+0.128189168 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Feb 23 04:46:14 localhost podman[290853]: 2026-02-23 09:46:14.068219961 +0000 UTC m=+0.137046358 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, architecture=x86_64, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 23 04:46:14 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully.
Feb 23 04:46:14 localhost podman[290853]: 2026-02-23 09:46:14.083007632 +0000 UTC m=+0.151834009 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, release=1770267347, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64)
Feb 23 04:46:14 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully.
Feb 23 04:46:14 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626463"} v 0)
Feb 23 04:46:14 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626463"} : dispatch
Feb 23 04:46:15 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626463"} v 0)
Feb 23 04:46:15 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626463"} : dispatch
Feb 23 04:46:16 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626463"} v 0)
Feb 23 04:46:16 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626463"} : dispatch
Feb 23 04:46:16 localhost ceph-mon[287329]: mon.np0005626465@3(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 23 04:46:16 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 23 04:46:16 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 23 04:46:16 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:46:16 localhost ceph-mon[287329]: mon.np0005626461 calling monitor election
Feb 23 04:46:16 localhost ceph-mon[287329]: mon.np0005626466 calling monitor election
Feb 23 04:46:16 localhost ceph-mon[287329]: mon.np0005626460 calling monitor election
Feb 23 04:46:16 localhost ceph-mon[287329]: mon.np0005626465 calling monitor election
Feb 23 04:46:16 localhost ceph-mon[287329]: mon.np0005626463 calling monitor election
Feb 23 04:46:16 localhost ceph-mon[287329]: mon.np0005626461 is new leader, mons np0005626461,np0005626460,np0005626466,np0005626465,np0005626463 in quorum (ranks 0,1,2,3,4)
Feb 23 04:46:16 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:46:16 localhost ceph-mon[287329]: overall HEALTH_OK
Feb 23 04:46:17 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626463"} v 0)
Feb 23 04:46:17 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mon metadata", "id": "np0005626463"} : dispatch
Feb 23 04:46:17 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/etc/ceph/ceph.conf
Feb 23 04:46:17 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/etc/ceph/ceph.conf
Feb 23 04:46:17 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/etc/ceph/ceph.conf
Feb 23 04:46:17 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/etc/ceph/ceph.conf
Feb 23 04:46:17 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/etc/ceph/ceph.conf
Feb 23 04:46:17 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0)
Feb 23 04:46:17 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0)
Feb 23 04:46:17 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0)
Feb 23 04:46:17 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0)
Feb 23 04:46:17 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0)
Feb 23 04:46:17 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0)
Feb 23 04:46:17 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0)
Feb 23 04:46:17 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0)
Feb 23 04:46:17 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0)
Feb 23 04:46:18 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0)
Feb 23 04:46:18 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 23 04:46:18 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 23 04:46:18 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 23 04:46:18 localhost nova_compute[280321]: 2026-02-23 09:46:18.414 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 23 04:46:18 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626460", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 23 04:46:18 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626460", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:46:18 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:46:18 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:46:18 localhost nova_compute[280321]: 2026-02-23 09:46:18.631 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 23 04:46:18 localhost nova_compute[280321]: 2026-02-23 09:46:18.633 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=12414MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 23 04:46:18 localhost nova_compute[280321]: 2026-02-23 09:46:18.633 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 23 04:46:18 localhost nova_compute[280321]: 2026-02-23 09:46:18.634 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 23 04:46:18 localhost nova_compute[280321]: 2026-02-23 09:46:18.702 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 23 04:46:18 localhost nova_compute[280321]: 2026-02-23 09:46:18.703 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 23 04:46:18 localhost nova_compute[280321]: 2026-02-23 09:46:18.729 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 23 04:46:18 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:46:18 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:46:18 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:46:18 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:46:18 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:46:18 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:46:18 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:46:18 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:46:18 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:46:18 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:46:18 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:46:18 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:46:18 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:46:18 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:46:18 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:46:18 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:46:18 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626460", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:46:18 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626460", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:46:19 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 23 04:46:19 localhost nova_compute[280321]: 2026-02-23 09:46:19.168 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 23 04:46:19 localhost nova_compute[280321]: 2026-02-23 09:46:19.175 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 23 04:46:19 localhost nova_compute[280321]: 2026-02-23 09:46:19.189 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 23 04:46:19 localhost nova_compute[280321]: 2026-02-23 09:46:19.191 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 23 04:46:19 localhost nova_compute[280321]: 2026-02-23 09:46:19.192 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 23 04:46:19 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0)
Feb 23 04:46:19 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0)
Feb 23 04:46:19 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 23 04:46:19 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:46:19 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 23 04:46:19 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr services"} : dispatch
Feb 23 04:46:19 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:46:19 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:46:19 localhost ceph-mon[287329]: Reconfiguring crash.np0005626460 (monmap changed)...
Feb 23 04:46:19 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626460 on np0005626460.localdomain
Feb 23 04:46:19 localhost ceph-mon[287329]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Feb 23 04:46:19 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:46:19 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:46:19 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:46:19 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:46:20 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0)
Feb 23 04:46:20 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0)
Feb 23 04:46:20 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 23 04:46:20 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:46:20 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 23 04:46:20 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "mgr services"} : dispatch
Feb 23 04:46:20 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:46:20 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:46:20 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626460.fyrady (monmap changed)...
Feb 23 04:46:20 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626460.fyrady on np0005626460.localdomain
Feb 23 04:46:20 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:46:20 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh'
Feb 23 04:46:20 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626461.lrfquh (monmap changed)...
Feb 23 04:46:20 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:46:20 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:46:20 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626461.lrfquh on np0005626461.localdomain Feb 23 04:46:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:46:21 localhost nova_compute[280321]: 2026-02-23 09:46:21.165 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:46:21 localhost nova_compute[280321]: 2026-02-23 09:46:21.166 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:46:21 localhost nova_compute[280321]: 2026-02-23 09:46:21.166 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:46:21 localhost nova_compute[280321]: 2026-02-23 09:46:21.167 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task 
ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:46:21 localhost nova_compute[280321]: 2026-02-23 09:46:21.167 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:46:21 localhost nova_compute[280321]: 2026-02-23 09:46:21.167 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:46:21 localhost podman[291265]: 2026-02-23 09:46:21.222359285 +0000 UTC m=+0.083825542 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260216) Feb 23 04:46:21 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:46:21 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:46:21 localhost podman[291265]: 2026-02-23 09:46:21.29585538 +0000 UTC m=+0.157321687 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260216) Feb 23 04:46:21 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 23 04:46:21 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:46:21 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:21 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:21 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:46:22 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:46:22 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:46:22 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:22 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:22 localhost ceph-mon[287329]: Reconfiguring crash.np0005626461 (monmap changed)... 
Feb 23 04:46:22 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:46:22 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:46:22 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626461 on np0005626461.localdomain Feb 23 04:46:22 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:22 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 23 04:46:22 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:46:22 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:22 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:22 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 04:46:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:46:23 localhost ceph-mon[287329]: 
mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:46:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) Feb 23 04:46:23 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 23 04:46:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:23 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:23 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:23 localhost ceph-mon[287329]: Reconfiguring crash.np0005626463 (monmap changed)... 
Feb 23 04:46:23 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:46:23 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:46:23 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626463 on np0005626463.localdomain Feb 23 04:46:23 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:23 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:23 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:23 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 23 04:46:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:46:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:46:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:46:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:46:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 
23 04:46:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:46:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:46:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:46:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:46:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:46:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:46:23 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:46:24 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:46:24 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:46:24 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:46:24 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) 
Feb 23 04:46:24 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:46:24 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0) Feb 23 04:46:24 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 23 04:46:24 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:24 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:24 localhost ceph-mon[287329]: Reconfiguring osd.2 (monmap changed)... Feb 23 04:46:24 localhost ceph-mon[287329]: Reconfiguring daemon osd.2 on np0005626463.localdomain Feb 23 04:46:24 localhost ceph-mon[287329]: Reconfig service osd.default_drive_group Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 ' 
entity='mgr.np0005626461.lrfquh' Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:24 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 23 04:46:25 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:46:25 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:46:25 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:46:25 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:46:25 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 23 04:46:25 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.14193 172.18.0.105:0/3093743034' 
entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:46:25 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:25 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:25 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e82 e82: 6 total, 6 up, 6 in Feb 23 04:46:25 localhost systemd[1]: session-63.scope: Deactivated successfully. Feb 23 04:46:25 localhost systemd[1]: session-63.scope: Consumed 17.666s CPU time. Feb 23 04:46:25 localhost systemd-logind[759]: Session 63 logged out. Waiting for processes to exit. Feb 23 04:46:25 localhost systemd-logind[759]: Removed session 63. Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. 
Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:25.665398) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771839985665476, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2889, "num_deletes": 254, "total_data_size": 6133367, "memory_usage": 6196984, "flush_reason": "Manual Compaction"} Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771839985688086, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3505687, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11070, "largest_seqno": 13954, "table_properties": {"data_size": 3493763, "index_size": 7405, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 31588, "raw_average_key_size": 22, "raw_value_size": 3467226, "raw_average_value_size": 2521, "num_data_blocks": 322, "num_entries": 1375, "num_filter_entries": 1375, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771839911, "oldest_key_time": 1771839911, "file_creation_time": 1771839985, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f63238e2-844d-4c49-b660-105bb635e407", "db_session_id": "YG0VANVTEI8CVHQGQH5D", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}} Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 22761 microseconds, and 9561 cpu microseconds. Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:25.688159) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3505687 bytes OK Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:25.688191) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:25.690647) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:25.690674) EVENT_LOG_v1 {"time_micros": 1771839985690662, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:25.690705) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 6119370, prev total WAL file size 
6119700, number of live WAL files 2. Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:25.691980) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end) Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3423KB)], [15(10MB)] Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771839985692036, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 14261777, "oldest_snapshot_seqno": -1} Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 10002 keys, 13023155 bytes, temperature: kUnknown Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771839985758824, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 13023155, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12965188, "index_size": 31882, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25029, "raw_key_size": 266575, "raw_average_key_size": 26, "raw_value_size": 12792972, 
"raw_average_value_size": 1279, "num_data_blocks": 1225, "num_entries": 10002, "num_filter_entries": 10002, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771839874, "oldest_key_time": 0, "file_creation_time": 1771839985, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f63238e2-844d-4c49-b660-105bb635e407", "db_session_id": "YG0VANVTEI8CVHQGQH5D", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:25.759353) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 13023155 bytes Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:25.764533) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 212.9 rd, 194.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 10.3 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(7.8) write-amplify(3.7) OK, records in: 10555, records dropped: 553 output_compression: NoCompression Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:25.764570) EVENT_LOG_v1 {"time_micros": 1771839985764554, "job": 6, "event": "compaction_finished", "compaction_time_micros": 66985, "compaction_time_cpu_micros": 32142, "output_level": 6, "num_output_files": 1, "total_output_size": 13023155, "num_input_records": 10555, "num_output_records": 10002, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771839985765457, "job": 6, "event": "table_file_deletion", "file_number": 17} Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771839985767292, "job": 
6, "event": "table_file_deletion", "file_number": 15} Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:25.691918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:25.767413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:25.767421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:25.767423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:25.767424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:46:25 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:25.767442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:46:25 localhost ceph-mon[287329]: Reconfiguring osd.5 (monmap changed)... 
Feb 23 04:46:25 localhost ceph-mon[287329]: Reconfiguring daemon osd.5 on np0005626463.localdomain Feb 23 04:46:25 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:25 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:25 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:25 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' Feb 23 04:46:25 localhost ceph-mon[287329]: from='mgr.14193 172.18.0.105:0/3093743034' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:46:25 localhost ceph-mon[287329]: from='mgr.14193 ' entity='mgr.np0005626461.lrfquh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:46:25 localhost ceph-mon[287329]: from='client.? 172.18.0.200:0/2634313896' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 23 04:46:25 localhost ceph-mon[287329]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 23 04:46:25 localhost ceph-mon[287329]: Activating manager daemon np0005626460.fyrady Feb 23 04:46:25 localhost ceph-mon[287329]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 23 04:46:25 localhost ceph-mon[287329]: Manager daemon np0005626460.fyrady is now available Feb 23 04:46:25 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626459.localdomain.devices.0"} : dispatch Feb 23 04:46:25 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626459.localdomain.devices.0"} : dispatch Feb 23 04:46:25 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005626459.localdomain.devices.0"}]': finished Feb 23 04:46:25 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626459.localdomain.devices.0"} : dispatch Feb 23 04:46:25 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626459.localdomain.devices.0"} : dispatch Feb 23 04:46:25 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005626459.localdomain.devices.0"}]': finished Feb 23 04:46:25 localhost sshd[291290]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:46:26 localhost systemd-logind[759]: New session 64 of user ceph-admin. Feb 23 04:46:26 localhost systemd[1]: Started Session 64 of User ceph-admin. 
Feb 23 04:46:26 localhost ceph-mon[287329]: removing stray HostCache host record np0005626459.localdomain.devices.0 Feb 23 04:46:26 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626460.fyrady/mirror_snapshot_schedule"} : dispatch Feb 23 04:46:26 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626460.fyrady/mirror_snapshot_schedule"} : dispatch Feb 23 04:46:26 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626460.fyrady/trash_purge_schedule"} : dispatch Feb 23 04:46:26 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626460.fyrady/trash_purge_schedule"} : dispatch Feb 23 04:46:27 localhost podman[291402]: 2026-02-23 09:46:27.121305616 +0000 UTC m=+0.077678374 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2026-02-09T10:25:24Z, name=rhceph, architecture=x86_64, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, RELEASE=main, com.redhat.component=rhceph-container) Feb 23 04:46:27 localhost podman[291402]: 2026-02-23 09:46:27.237957869 +0000 UTC m=+0.194330627 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, version=7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.component=rhceph-container, GIT_CLEAN=True, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, description=Red Hat Ceph Storage 7, release=1770267347) Feb 23 04:46:27 localhost ceph-mon[287329]: [23/Feb/2026:09:46:26] ENGINE Bus 
STARTING Feb 23 04:46:27 localhost ceph-mon[287329]: [23/Feb/2026:09:46:26] ENGINE Serving on http://172.18.0.104:8765 Feb 23 04:46:27 localhost ceph-mon[287329]: [23/Feb/2026:09:46:27] ENGINE Serving on https://172.18.0.104:7150 Feb 23 04:46:27 localhost ceph-mon[287329]: [23/Feb/2026:09:46:27] ENGINE Bus STARTED Feb 23 04:46:27 localhost ceph-mon[287329]: [23/Feb/2026:09:46:27] ENGINE Client ('172.18.0.104', 55702) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 23 04:46:27 localhost ceph-mon[287329]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s)) Feb 23 04:46:27 localhost ceph-mon[287329]: Cluster is now healthy Feb 23 04:46:27 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:27 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:27 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:27 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:28 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:28 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:28 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:28 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:28 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:28 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:29 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:46:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:46:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:46:29 localhost systemd[1]: tmp-crun.kwHJLO.mount: Deactivated successfully. Feb 23 04:46:29 localhost podman[291661]: 2026-02-23 09:46:29.698394921 +0000 UTC m=+0.104157653 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 23 04:46:29 localhost podman[291661]: 2026-02-23 09:46:29.738146005 +0000 UTC m=+0.143908737 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 04:46:29 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:46:29 localhost podman[291660]: 2026-02-23 09:46:29.757502387 +0000 UTC m=+0.163944990 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260216, tcib_managed=true, config_id=ovn_metadata_agent) Feb 23 04:46:29 localhost podman[291660]: 2026-02-23 09:46:29.761960292 +0000 UTC m=+0.168402945 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:46:29 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "config rm", "who": "osd/host:np0005626460", "name": "osd_memory_target"} : dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix": "config rm", "who": "osd/host:np0005626460", "name": "osd_memory_target"} : dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : 
dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: Adjusting osd_memory_target on np0005626466.localdomain to 836.6M Feb 23 04:46:30 localhost ceph-mon[287329]: Adjusting osd_memory_target on np0005626463.localdomain to 836.6M Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: 
from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:30 localhost ceph-mon[287329]: Unable to set osd_memory_target on np0005626466.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 23 04:46:30 localhost ceph-mon[287329]: Unable to set osd_memory_target on np0005626463.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "config rm", "who": "osd/host:np0005626461", "name": "osd_memory_target"} : dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: Adjusting osd_memory_target on np0005626465.localdomain to 836.6M Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix": "config rm", "who": "osd/host:np0005626461", "name": "osd_memory_target"} : dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: Unable to set osd_memory_target on np0005626465.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 04:46:30 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:46:30 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:30 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/etc/ceph/ceph.conf Feb 23 
04:46:30 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:30 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:30 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:31 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:31 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:31 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:31 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:31 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:31 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:31 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:31 localhost openstack_network_exporter[243519]: ERROR 09:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:46:31 localhost openstack_network_exporter[243519]: Feb 23 04:46:31 localhost openstack_network_exporter[243519]: ERROR 09:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:46:31 localhost openstack_network_exporter[243519]: Feb 23 04:46:33 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:33 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:33 localhost ceph-mon[287329]: Updating 
np0005626466.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:33 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:33 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:33 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:33 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:33 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:46:34 localhost systemd[1]: tmp-crun.YvUXCD.mount: Deactivated successfully. 
Feb 23 04:46:34 localhost podman[292317]: 2026-02-23 09:46:34.010903279 +0000 UTC m=+0.079230831 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 04:46:34 localhost podman[292317]: 2026-02-23 09:46:34.024916837 +0000 UTC m=+0.093244379 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:46:34 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:46:34 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:46:34 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:34 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:34 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:34 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:34 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:34 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:34 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:34 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:34 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:34 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:34 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:35 localhost ceph-mon[287329]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Feb 23 04:46:35 localhost ceph-mon[287329]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Feb 23 04:46:35 localhost ceph-mon[287329]: Reconfiguring mds.mds.np0005626463.qcthuc (monmap changed)... 
Feb 23 04:46:35 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:46:35 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:46:35 localhost ceph-mon[287329]: Reconfiguring daemon mds.mds.np0005626463.qcthuc on np0005626463.localdomain Feb 23 04:46:36 localhost sshd[292359]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:46:36 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:36 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:36 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626463.wtksup (monmap changed)... 
Feb 23 04:46:36 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:46:36 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:46:36 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626463.wtksup on np0005626463.localdomain Feb 23 04:46:36 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:36 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:36 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:36 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:46:36 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:46:37 localhost podman[292413]: Feb 23 04:46:37 localhost podman[292413]: 2026-02-23 09:46:37.172426987 +0000 UTC m=+0.073279380 container create b077d302f8233d7020e1ed5f93483e12c127e55490f60aeef9ff5b2f425419b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_shaw, architecture=x86_64, ceph=True, release=1770267347, GIT_CLEAN=True, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, version=7, name=rhceph, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 23 04:46:37 localhost systemd[1]: Started libpod-conmon-b077d302f8233d7020e1ed5f93483e12c127e55490f60aeef9ff5b2f425419b5.scope. Feb 23 04:46:37 localhost systemd[1]: Started libcrun container. 
Feb 23 04:46:37 localhost podman[292413]: 2026-02-23 09:46:37.140730418 +0000 UTC m=+0.041582841 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:46:37 localhost podman[292413]: 2026-02-23 09:46:37.253137502 +0000 UTC m=+0.153989885 container init b077d302f8233d7020e1ed5f93483e12c127e55490f60aeef9ff5b2f425419b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_shaw, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, version=7, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, ceph=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=) Feb 23 04:46:37 localhost podman[292413]: 2026-02-23 09:46:37.263914141 +0000 UTC m=+0.164766524 container start b077d302f8233d7020e1ed5f93483e12c127e55490f60aeef9ff5b2f425419b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_shaw, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, distribution-scope=public, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 23 04:46:37 localhost podman[292413]: 2026-02-23 09:46:37.264157168 +0000 UTC m=+0.165009551 container attach b077d302f8233d7020e1ed5f93483e12c127e55490f60aeef9ff5b2f425419b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_shaw, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, version=7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, name=rhceph, architecture=x86_64, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 23 04:46:37 localhost sad_shaw[292428]: 167 167 Feb 23 04:46:37 localhost systemd[1]: libpod-b077d302f8233d7020e1ed5f93483e12c127e55490f60aeef9ff5b2f425419b5.scope: Deactivated successfully. Feb 23 04:46:37 localhost podman[292413]: 2026-02-23 09:46:37.269259795 +0000 UTC m=+0.170112188 container died b077d302f8233d7020e1ed5f93483e12c127e55490f60aeef9ff5b2f425419b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_shaw, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, vcs-type=git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , architecture=x86_64, CEPH_POINT_RELEASE=, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7) 
Feb 23 04:46:37 localhost podman[292433]: 2026-02-23 09:46:37.344841093 +0000 UTC m=+0.070674039 container remove b077d302f8233d7020e1ed5f93483e12c127e55490f60aeef9ff5b2f425419b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_shaw, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, vcs-type=git, description=Red Hat Ceph Storage 7, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_BRANCH=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main) Feb 23 04:46:37 localhost systemd[1]: libpod-conmon-b077d302f8233d7020e1ed5f93483e12c127e55490f60aeef9ff5b2f425419b5.scope: Deactivated successfully. Feb 23 04:46:37 localhost ceph-mon[287329]: Reconfiguring crash.np0005626465 (monmap changed)... 
Feb 23 04:46:37 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626465 on np0005626465.localdomain Feb 23 04:46:37 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:37 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:37 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 23 04:46:38 localhost podman[292503]: Feb 23 04:46:38 localhost podman[292503]: 2026-02-23 09:46:38.014011535 +0000 UTC m=+0.056033562 container create d45611a2cbc69fc60bff25197b6f3e1dc7a4c3816e581018235b61ac79952a98 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_yonath, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.expose-services=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, ceph=True) Feb 23 04:46:38 localhost systemd[1]: Started 
libpod-conmon-d45611a2cbc69fc60bff25197b6f3e1dc7a4c3816e581018235b61ac79952a98.scope. Feb 23 04:46:38 localhost systemd[1]: Started libcrun container. Feb 23 04:46:38 localhost podman[292503]: 2026-02-23 09:46:38.072775831 +0000 UTC m=+0.114797858 container init d45611a2cbc69fc60bff25197b6f3e1dc7a4c3816e581018235b61ac79952a98 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_yonath, com.redhat.component=rhceph-container, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, GIT_BRANCH=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7) Feb 23 04:46:38 localhost wizardly_yonath[292519]: 167 167 Feb 23 04:46:38 localhost podman[292503]: 2026-02-23 09:46:38.082662593 +0000 UTC m=+0.124684610 container start d45611a2cbc69fc60bff25197b6f3e1dc7a4c3816e581018235b61ac79952a98 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_yonath, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, 
vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, release=1770267347, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, ceph=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7) Feb 23 04:46:38 localhost podman[292503]: 2026-02-23 09:46:38.082965842 +0000 UTC m=+0.124987859 container attach d45611a2cbc69fc60bff25197b6f3e1dc7a4c3816e581018235b61ac79952a98 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_yonath, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, version=7, ceph=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, architecture=x86_64, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, 
url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2) Feb 23 04:46:38 localhost systemd[1]: libpod-d45611a2cbc69fc60bff25197b6f3e1dc7a4c3816e581018235b61ac79952a98.scope: Deactivated successfully. Feb 23 04:46:38 localhost podman[292503]: 2026-02-23 09:46:38.085275332 +0000 UTC m=+0.127297419 container died d45611a2cbc69fc60bff25197b6f3e1dc7a4c3816e581018235b61ac79952a98 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_yonath, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, distribution-scope=public, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_CLEAN=True, ceph=True, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph) Feb 23 04:46:38 
localhost podman[292503]: 2026-02-23 09:46:37.98929034 +0000 UTC m=+0.031312347 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:46:38 localhost podman[292524]: 2026-02-23 09:46:38.173953721 +0000 UTC m=+0.081199701 container remove d45611a2cbc69fc60bff25197b6f3e1dc7a4c3816e581018235b61ac79952a98 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_yonath, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, release=1770267347, version=7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_CLEAN=True, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=) Feb 23 04:46:38 localhost systemd[1]: var-lib-containers-storage-overlay-ed76e76e39d53dd2c522a1c2b56399cfe5dd793b88b3cb807228c0b7b8ec57c5-merged.mount: Deactivated successfully. Feb 23 04:46:38 localhost systemd[1]: libpod-conmon-d45611a2cbc69fc60bff25197b6f3e1dc7a4c3816e581018235b61ac79952a98.scope: Deactivated successfully. Feb 23 04:46:38 localhost ceph-mon[287329]: Reconfiguring osd.0 (monmap changed)... 
Feb 23 04:46:38 localhost ceph-mon[287329]: Reconfiguring daemon osd.0 on np0005626465.localdomain Feb 23 04:46:38 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:38 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:38 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:38 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:38 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 23 04:46:39 localhost podman[292600]: Feb 23 04:46:39 localhost podman[292600]: 2026-02-23 09:46:39.020374358 +0000 UTC m=+0.071388762 container create 629400b1d262b6db464d916772f66119e5442935d08107f7ba295c2f4c32af2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_ramanujan, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, ceph=True, GIT_CLEAN=True, distribution-scope=public, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on 
RHEL 9, RELEASE=main, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7) Feb 23 04:46:39 localhost systemd[1]: Started libpod-conmon-629400b1d262b6db464d916772f66119e5442935d08107f7ba295c2f4c32af2e.scope. Feb 23 04:46:39 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:46:39 localhost podman[292600]: 2026-02-23 09:46:38.990543037 +0000 UTC m=+0.041557481 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:46:39 localhost systemd[1]: Started libcrun container. Feb 23 04:46:39 localhost podman[292600]: 2026-02-23 09:46:39.110228492 +0000 UTC m=+0.161242926 container init 629400b1d262b6db464d916772f66119e5442935d08107f7ba295c2f4c32af2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_ramanujan, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.openshift.expose-services=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, version=7, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux 
, distribution-scope=public) Feb 23 04:46:39 localhost podman[292600]: 2026-02-23 09:46:39.121354792 +0000 UTC m=+0.172369256 container start 629400b1d262b6db464d916772f66119e5442935d08107f7ba295c2f4c32af2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_ramanujan, version=7, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, name=rhceph, GIT_CLEAN=True, io.openshift.expose-services=, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, release=1770267347) Feb 23 04:46:39 localhost podman[292600]: 2026-02-23 09:46:39.121860147 +0000 UTC m=+0.172874591 container attach 629400b1d262b6db464d916772f66119e5442935d08107f7ba295c2f4c32af2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_ramanujan, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_CLEAN=True, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, release=1770267347, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 23 04:46:39 localhost boring_ramanujan[292615]: 167 167 Feb 23 04:46:39 localhost systemd[1]: libpod-629400b1d262b6db464d916772f66119e5442935d08107f7ba295c2f4c32af2e.scope: Deactivated successfully. 
Feb 23 04:46:39 localhost podman[292600]: 2026-02-23 09:46:39.124713985 +0000 UTC m=+0.175728509 container died 629400b1d262b6db464d916772f66119e5442935d08107f7ba295c2f4c32af2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_ramanujan, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , RELEASE=main, build-date=2026-02-09T10:25:24Z, version=7, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.buildah.version=1.42.2) Feb 23 04:46:39 localhost systemd[1]: var-lib-containers-storage-overlay-02ce6eb27b4bc08c6c1d713c12aefd0ba1c9544e4467b4d6b4a03cabf3295f98-merged.mount: Deactivated successfully. 
Feb 23 04:46:39 localhost podman[292620]: 2026-02-23 09:46:39.21783555 +0000 UTC m=+0.081144380 container remove 629400b1d262b6db464d916772f66119e5442935d08107f7ba295c2f4c32af2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_ramanujan, build-date=2026-02-09T10:25:24Z, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, name=rhceph, vcs-type=git, io.openshift.expose-services=, version=7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, GIT_BRANCH=main, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7) Feb 23 04:46:39 localhost systemd[1]: libpod-conmon-629400b1d262b6db464d916772f66119e5442935d08107f7ba295c2f4c32af2e.scope: Deactivated successfully. Feb 23 04:46:39 localhost ceph-mon[287329]: Reconfiguring osd.3 (monmap changed)... 
Feb 23 04:46:39 localhost ceph-mon[287329]: Reconfiguring daemon osd.3 on np0005626465.localdomain Feb 23 04:46:39 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:39 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:39 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:39 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:39 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:39 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:46:39 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:46:40 localhost podman[292698]: Feb 23 04:46:40 localhost podman[292698]: 2026-02-23 09:46:40.08534664 +0000 UTC m=+0.062320144 container create 0098b25e3d19f299402e8f21856d2287954058c1bcac2d71a0cd88a20aa41137 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_dubinsky, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.buildah.version=1.42.2, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, GIT_CLEAN=True, release=1770267347, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, distribution-scope=public, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 23 04:46:40 localhost systemd[1]: Started libpod-conmon-0098b25e3d19f299402e8f21856d2287954058c1bcac2d71a0cd88a20aa41137.scope. Feb 23 04:46:40 localhost systemd[1]: Started libcrun container. Feb 23 04:46:40 localhost podman[292698]: 2026-02-23 09:46:40.148348685 +0000 UTC m=+0.125322199 container init 0098b25e3d19f299402e8f21856d2287954058c1bcac2d71a0cd88a20aa41137 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_dubinsky, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-type=git, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, GIT_CLEAN=True, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, ceph=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 23 04:46:40 localhost podman[292698]: 2026-02-23 09:46:40.055616393 +0000 UTC m=+0.032589937 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:46:40 localhost podman[292698]: 2026-02-23 09:46:40.156738172 +0000 UTC m=+0.133711666 container start 0098b25e3d19f299402e8f21856d2287954058c1bcac2d71a0cd88a20aa41137 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_dubinsky, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, RELEASE=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7) Feb 23 04:46:40 localhost podman[292698]: 2026-02-23 09:46:40.156977269 +0000 UTC m=+0.133950803 container attach 0098b25e3d19f299402e8f21856d2287954058c1bcac2d71a0cd88a20aa41137 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_dubinsky, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, release=1770267347, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., RELEASE=main, vcs-type=git, ceph=True, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:46:40 localhost eager_dubinsky[292713]: 167 167 Feb 23 04:46:40 localhost systemd[1]: libpod-0098b25e3d19f299402e8f21856d2287954058c1bcac2d71a0cd88a20aa41137.scope: Deactivated successfully. 
Feb 23 04:46:40 localhost podman[292698]: 2026-02-23 09:46:40.159285199 +0000 UTC m=+0.136258763 container died 0098b25e3d19f299402e8f21856d2287954058c1bcac2d71a0cd88a20aa41137 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_dubinsky, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, GIT_BRANCH=main) Feb 23 04:46:40 localhost systemd[1]: var-lib-containers-storage-overlay-f7506de73f293373715bc7ca212f68ea0da06da75ace0df730f23096730a4c7e-merged.mount: Deactivated successfully. 
Feb 23 04:46:40 localhost podman[292718]: 2026-02-23 09:46:40.23919356 +0000 UTC m=+0.072378232 container remove 0098b25e3d19f299402e8f21856d2287954058c1bcac2d71a0cd88a20aa41137 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_dubinsky, GIT_CLEAN=True, RELEASE=main, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64) Feb 23 04:46:40 localhost systemd[1]: libpod-conmon-0098b25e3d19f299402e8f21856d2287954058c1bcac2d71a0cd88a20aa41137.scope: Deactivated successfully. Feb 23 04:46:40 localhost ceph-mon[287329]: Saving service mon spec with placement label:mon Feb 23 04:46:40 localhost ceph-mon[287329]: Reconfiguring mds.mds.np0005626465.drvnoy (monmap changed)... 
Feb 23 04:46:40 localhost ceph-mon[287329]: Reconfiguring daemon mds.mds.np0005626465.drvnoy on np0005626465.localdomain Feb 23 04:46:40 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:40 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:40 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:46:40 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:46:40 localhost podman[292787]: Feb 23 04:46:40 localhost podman[292787]: 2026-02-23 09:46:40.931624623 +0000 UTC m=+0.074229529 container create 81994e799de67ac3314d4c7aa3ba9e335a94f2f3a3e660b95582043f9b7815a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_chaum, maintainer=Guillaume Abrioux , ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, name=rhceph, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, RELEASE=main, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully 
featured and supported base image., com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, release=1770267347, description=Red Hat Ceph Storage 7) Feb 23 04:46:40 localhost systemd[1]: Started libpod-conmon-81994e799de67ac3314d4c7aa3ba9e335a94f2f3a3e660b95582043f9b7815a9.scope. Feb 23 04:46:40 localhost systemd[1]: Started libcrun container. Feb 23 04:46:40 localhost podman[292787]: 2026-02-23 09:46:40.986672024 +0000 UTC m=+0.129276930 container init 81994e799de67ac3314d4c7aa3ba9e335a94f2f3a3e660b95582043f9b7815a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_chaum, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 23 04:46:40 localhost podman[292787]: 2026-02-23 09:46:40.995121612 +0000 UTC m=+0.137726518 container start 
81994e799de67ac3314d4c7aa3ba9e335a94f2f3a3e660b95582043f9b7815a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_chaum, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, com.redhat.component=rhceph-container, release=1770267347, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, description=Red Hat Ceph Storage 7) Feb 23 04:46:40 localhost stoic_chaum[292800]: 167 167 Feb 23 04:46:40 localhost podman[292787]: 2026-02-23 09:46:40.995809223 +0000 UTC m=+0.138414169 container attach 81994e799de67ac3314d4c7aa3ba9e335a94f2f3a3e660b95582043f9b7815a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_chaum, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, architecture=x86_64, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=) Feb 23 04:46:40 localhost systemd[1]: libpod-81994e799de67ac3314d4c7aa3ba9e335a94f2f3a3e660b95582043f9b7815a9.scope: Deactivated successfully. Feb 23 04:46:40 localhost podman[292787]: 2026-02-23 09:46:40.998031131 +0000 UTC m=+0.140636067 container died 81994e799de67ac3314d4c7aa3ba9e335a94f2f3a3e660b95582043f9b7815a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_chaum, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, ceph=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, GIT_BRANCH=main) Feb 23 04:46:41 localhost podman[292787]: 2026-02-23 09:46:40.902346028 +0000 UTC m=+0.044950964 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:46:41 localhost podman[292805]: 2026-02-23 09:46:41.08701637 +0000 UTC m=+0.080652885 container remove 81994e799de67ac3314d4c7aa3ba9e335a94f2f3a3e660b95582043f9b7815a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_chaum, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, architecture=x86_64, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph) Feb 23 04:46:41 localhost systemd[1]: libpod-conmon-81994e799de67ac3314d4c7aa3ba9e335a94f2f3a3e660b95582043f9b7815a9.scope: Deactivated successfully. 
Feb 23 04:46:41 localhost systemd[1]: var-lib-containers-storage-overlay-a7bd92f60fe4414dfcef603822ff384a2b81420cd8e4d4e11ce950fe47ff771f-merged.mount: Deactivated successfully. Feb 23 04:46:41 localhost podman[292879]: Feb 23 04:46:41 localhost podman[292879]: 2026-02-23 09:46:41.771777127 +0000 UTC m=+0.077878329 container create b44e7197ea2680b0fb59cc978a0c36e53783c5ee42179c2438d3999c311ac1a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_greider, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, RELEASE=main, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux , release=1770267347) Feb 23 04:46:41 localhost systemd[1]: Started libpod-conmon-b44e7197ea2680b0fb59cc978a0c36e53783c5ee42179c2438d3999c311ac1a6.scope. Feb 23 04:46:41 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626465.hlpkwo (monmap changed)... 
Feb 23 04:46:41 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626465.hlpkwo on np0005626465.localdomain Feb 23 04:46:41 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:41 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:41 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:46:41 localhost systemd[1]: Started libcrun container. Feb 23 04:46:41 localhost podman[292879]: 2026-02-23 09:46:41.837163895 +0000 UTC m=+0.143265097 container init b44e7197ea2680b0fb59cc978a0c36e53783c5ee42179c2438d3999c311ac1a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_greider, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.expose-services=) Feb 23 04:46:41 localhost podman[292879]: 2026-02-23 
09:46:41.739226713 +0000 UTC m=+0.045327915 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:46:41 localhost podman[292879]: 2026-02-23 09:46:41.848941115 +0000 UTC m=+0.155042307 container start b44e7197ea2680b0fb59cc978a0c36e53783c5ee42179c2438d3999c311ac1a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_greider, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, maintainer=Guillaume Abrioux , version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, distribution-scope=public, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 23 04:46:41 localhost nice_greider[292894]: 167 167 Feb 23 04:46:41 localhost podman[292879]: 2026-02-23 09:46:41.849252874 +0000 UTC m=+0.155354116 container attach b44e7197ea2680b0fb59cc978a0c36e53783c5ee42179c2438d3999c311ac1a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_greider, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, vcs-type=git, name=rhceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, ceph=True, architecture=x86_64, description=Red Hat Ceph Storage 7) Feb 23 04:46:41 localhost systemd[1]: libpod-b44e7197ea2680b0fb59cc978a0c36e53783c5ee42179c2438d3999c311ac1a6.scope: Deactivated successfully. 
Feb 23 04:46:41 localhost podman[292879]: 2026-02-23 09:46:41.852481383 +0000 UTC m=+0.158582595 container died b44e7197ea2680b0fb59cc978a0c36e53783c5ee42179c2438d3999c311ac1a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_greider, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , release=1770267347, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph) Feb 23 04:46:41 localhost podman[292901]: 2026-02-23 09:46:41.948556688 +0000 UTC m=+0.083509672 container remove b44e7197ea2680b0fb59cc978a0c36e53783c5ee42179c2438d3999c311ac1a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_greider, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base 
image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, version=7, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, architecture=x86_64, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 23 04:46:41 localhost systemd[1]: libpod-conmon-b44e7197ea2680b0fb59cc978a0c36e53783c5ee42179c2438d3999c311ac1a6.scope: Deactivated successfully. Feb 23 04:46:42 localhost systemd[1]: var-lib-containers-storage-overlay-13257ec7090539adfee3f2c0a5b9c113893505a88f17fcec20201aff951618c3-merged.mount: Deactivated successfully. Feb 23 04:46:42 localhost podman[241086]: time="2026-02-23T09:46:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:46:42 localhost podman[241086]: @ - - [23/Feb/2026:09:46:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 04:46:42 localhost podman[241086]: @ - - [23/Feb/2026:09:46:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17783 "" "Go-http-client/1.1" Feb 23 04:46:42 localhost ceph-mon[287329]: Reconfiguring mon.np0005626465 (monmap changed)... 
Feb 23 04:46:42 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626465 on np0005626465.localdomain Feb 23 04:46:42 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:42 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:42 localhost ceph-mon[287329]: Reconfiguring crash.np0005626466 (monmap changed)... Feb 23 04:46:42 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:46:42 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:46:42 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626466 on np0005626466.localdomain Feb 23 04:46:44 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:44 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:44 localhost ceph-mon[287329]: Reconfiguring osd.1 (monmap changed)... 
Feb 23 04:46:44 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 23 04:46:44 localhost ceph-mon[287329]: Reconfiguring daemon osd.1 on np0005626466.localdomain Feb 23 04:46:44 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:44 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:44 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:44 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:46:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:46:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:46:45 localhost podman[292919]: 2026-02-23 09:46:45.015249068 +0000 UTC m=+0.087770741 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:46:45 localhost podman[292919]: 2026-02-23 09:46:45.024335516 +0000 UTC m=+0.096857159 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', 
'--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:46:45 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:46:45 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:45 localhost ceph-mon[287329]: Reconfiguring osd.4 (monmap changed)... 
Feb 23 04:46:45 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 23 04:46:45 localhost ceph-mon[287329]: Reconfiguring daemon osd.4 on np0005626466.localdomain Feb 23 04:46:45 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:45 localhost podman[292920]: 2026-02-23 09:46:45.107168276 +0000 UTC m=+0.177590955 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest 
release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, managed_by=edpm_ansible, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7) Feb 23 04:46:45 localhost podman[292920]: 2026-02-23 09:46:45.144072374 +0000 UTC m=+0.214495033 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, release=1770267347, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal) Feb 23 04:46:45 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:46:46 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:46 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:46 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:46 localhost ceph-mon[287329]: Reconfiguring mds.mds.np0005626466.vaywlp (monmap changed)... Feb 23 04:46:46 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:46:46 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:46:46 localhost ceph-mon[287329]: Reconfiguring daemon mds.mds.np0005626466.vaywlp on np0005626466.localdomain Feb 23 04:46:46 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:46 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:46 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", 
"osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:46:46 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:46:46 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) Feb 23 04:46:46 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/145607622' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch Feb 23 04:46:47 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626466.nisqfq (monmap changed)... Feb 23 04:46:47 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626466.nisqfq on np0005626466.localdomain Feb 23 04:46:47 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:47 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:47 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:46:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:46:48.303 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:46:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:46:48.304 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:46:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:46:48.304 161842 DEBUG oslo_concurrency.lockutils 
[-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:46:48 localhost ceph-mon[287329]: Reconfiguring mon.np0005626466 (monmap changed)... Feb 23 04:46:48 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626466 on np0005626466.localdomain Feb 23 04:46:48 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:48 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:48 localhost ceph-mon[287329]: from='mgr.24104 172.18.0.104:0/1738626939' entity='mgr.np0005626460.fyrady' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:46:48 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:48 localhost ceph-mon[287329]: from='mgr.24104 ' entity='mgr.np0005626460.fyrady' Feb 23 04:46:48 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e83 e83: 6 total, 6 up, 6 in Feb 23 04:46:48 localhost ceph-mgr[285904]: mgr handle_mgr_map Activating! 
Feb 23 04:46:49 localhost ceph-mgr[285904]: mgr handle_mgr_map I am now activating Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626460"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon metadata", "id": "np0005626460"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626461"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon metadata", "id": "np0005626461"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626463"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon metadata", "id": "np0005626463"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626465"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon metadata", "id": "np0005626465"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626466"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon metadata", "id": "np0005626466"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 
handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005626465.drvnoy"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mds metadata", "who": "mds.np0005626465.drvnoy"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon).mds e17 all = 0 Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005626466.vaywlp"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mds metadata", "who": "mds.np0005626466.vaywlp"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon).mds e17 all = 0 Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005626463.qcthuc"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mds metadata", "who": "mds.np0005626463.qcthuc"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon).mds e17 all = 0 Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005626465.hlpkwo", "id": "np0005626465.hlpkwo"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mgr metadata", "who": "np0005626465.hlpkwo", "id": "np0005626465.hlpkwo"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005626463.wtksup", "id": "np0005626463.wtksup"} v 0) Feb 23 
04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mgr metadata", "who": "np0005626463.wtksup", "id": "np0005626463.wtksup"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005626466.nisqfq", "id": "np0005626466.nisqfq"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mgr metadata", "who": "np0005626466.nisqfq", "id": "np0005626466.nisqfq"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005626459.pmtxxl", "id": "np0005626459.pmtxxl"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mgr metadata", "who": "np0005626459.pmtxxl", "id": "np0005626459.pmtxxl"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005626461.lrfquh", "id": "np0005626461.lrfquh"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mgr metadata", "who": "np0005626461.lrfquh", "id": "np0005626461.lrfquh"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd metadata", "id": 0} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command 
mon_command({"prefix": "osd metadata", "id": 1} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd metadata", "id": 1} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd metadata", "id": 2} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd metadata", "id": 3} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd metadata", "id": 4} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd metadata", "id": 5} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mds metadata"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mds metadata"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: 
mon.np0005626465@3(peon).mds e17 all = 1 Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "osd metadata"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd metadata"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mon metadata"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon metadata"} : dispatch Feb 23 04:46:49 localhost ceph-mgr[285904]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:46:49 localhost ceph-mgr[285904]: mgr load Constructed class from module: balancer Feb 23 04:46:49 localhost ceph-mgr[285904]: [balancer INFO root] Starting Feb 23 04:46:49 localhost ceph-mgr[285904]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:46:49 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_09:46:49 Feb 23 04:46:49 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 04:46:49 localhost ceph-mgr[285904]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later Feb 23 04:46:49 localhost ceph-mgr[285904]: mgr load Constructed class from module: cephadm Feb 23 04:46:49 localhost ceph-mgr[285904]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:46:49 localhost ceph-mgr[285904]: mgr load Constructed class from module: crash Feb 23 04:46:49 localhost ceph-mgr[285904]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:46:49 localhost ceph-mgr[285904]: mgr load Constructed class from module: devicehealth Feb 23 04:46:49 localhost ceph-mgr[285904]: [iostat DEBUG root] setting log level 
based on debug_mgr: INFO (2/5) Feb 23 04:46:49 localhost ceph-mgr[285904]: mgr load Constructed class from module: iostat Feb 23 04:46:49 localhost ceph-mgr[285904]: [devicehealth INFO root] Starting Feb 23 04:46:49 localhost ceph-mgr[285904]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:46:49 localhost ceph-mgr[285904]: mgr load Constructed class from module: nfs Feb 23 04:46:49 localhost ceph-mgr[285904]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:46:49 localhost ceph-mgr[285904]: mgr load Constructed class from module: orchestrator Feb 23 04:46:49 localhost ceph-mgr[285904]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:46:49 localhost ceph-mgr[285904]: mgr load Constructed class from module: pg_autoscaler Feb 23 04:46:49 localhost systemd[1]: session-64.scope: Deactivated successfully. Feb 23 04:46:49 localhost systemd[1]: session-64.scope: Consumed 10.251s CPU time. Feb 23 04:46:49 localhost systemd-logind[759]: Session 64 logged out. Waiting for processes to exit. Feb 23 04:46:49 localhost systemd-logind[759]: Removed session 64. Feb 23 04:46:49 localhost ceph-mgr[285904]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:46:49 localhost ceph-mgr[285904]: mgr load Constructed class from module: progress Feb 23 04:46:49 localhost ceph-mgr[285904]: [progress INFO root] Loading... 
Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:46:49 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:46:49 localhost ceph-mgr[285904]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Feb 23 04:46:49 localhost ceph-mgr[285904]: [progress INFO root] Loaded OSDMap, ready. Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] recovery thread starting Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] starting setup Feb 23 04:46:49 localhost ceph-mgr[285904]: mgr load Constructed class from module: rbd_support Feb 23 04:46:49 localhost ceph-mgr[285904]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:46:49 localhost ceph-mgr[285904]: mgr load Constructed class from module: restful Feb 23 04:46:49 localhost ceph-mgr[285904]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:46:49 localhost ceph-mgr[285904]: mgr load Constructed class from module: status Feb 23 04:46:49 localhost ceph-mgr[285904]: [restful INFO root] server_addr: :: server_port: 8003 Feb 23 04:46:49 localhost ceph-mgr[285904]: [restful WARNING root] server not running: no certificate configured Feb 23 04:46:49 localhost ceph-mgr[285904]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:46:49 localhost ceph-mgr[285904]: mgr load Constructed class from module: telemetry Feb 23 04:46:49 localhost ceph-mgr[285904]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/np0005626465.hlpkwo/mirror_snapshot_schedule"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626465.hlpkwo/mirror_snapshot_schedule"} : dispatch Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:46:49 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 04:46:49 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 04:46:49 localhost ceph-mgr[285904]: mgr load Constructed class from module: volumes Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] PerfHandler: starting Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_task_task: vms, start_after= Feb 23 04:46:49 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-mgr[285904]: 
client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:46:49.122+0000 7f336b1b5640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:46:49.122+0000 7f336b1b5640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:46:49.122+0000 7f336b1b5640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:46:49.122+0000 7f336b1b5640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:46:49.122+0000 7f336b1b5640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:46:49.127+0000 7f33671ad640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:46:49.127+0000 7f33671ad640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 
04:46:49 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:46:49.127+0000 7f33671ad640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:46:49.127+0000 7f33671ad640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:46:49.127+0000 7f33671ad640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_task_task: volumes, start_after= Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_task_task: images, start_after= Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_task_task: backups, start_after= Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] TaskHandler: starting Feb 23 04:46:49 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626465.hlpkwo/trash_purge_schedule"} v 0) Feb 23 04:46:49 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626465.hlpkwo/trash_purge_schedule"} : dispatch Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO 
root] load_schedules: images, start_after= Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting Feb 23 04:46:49 localhost ceph-mgr[285904]: [rbd_support INFO root] setup complete Feb 23 04:46:49 localhost sshd[293122]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:46:49 localhost systemd-logind[759]: New session 65 of user ceph-admin. Feb 23 04:46:49 localhost systemd[1]: Started Session 65 of User ceph-admin. Feb 23 04:46:49 localhost ceph-mon[287329]: from='client.? 172.18.0.200:0/3611724471' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: Activating manager daemon np0005626465.hlpkwo Feb 23 04:46:49 localhost ceph-mon[287329]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 23 04:46:49 localhost ceph-mon[287329]: Manager daemon np0005626465.hlpkwo is now available Feb 23 04:46:49 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626465.hlpkwo/mirror_snapshot_schedule"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626465.hlpkwo/mirror_snapshot_schedule"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626465.hlpkwo/trash_purge_schedule"} : dispatch Feb 23 04:46:49 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/np0005626465.hlpkwo/trash_purge_schedule"} : dispatch Feb 23 04:46:50 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:46:50 localhost systemd[1]: tmp-crun.VHKHWx.mount: Deactivated successfully. Feb 23 04:46:50 localhost podman[293229]: 2026-02-23 09:46:50.334655726 +0000 UTC m=+0.089826235 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , RELEASE=main, name=rhceph, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git) Feb 23 04:46:50 localhost ceph-mgr[285904]: [cephadm INFO cherrypy.error] [23/Feb/2026:09:46:50] ENGINE Bus STARTING Feb 23 04:46:50 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : [23/Feb/2026:09:46:50] 
ENGINE Bus STARTING Feb 23 04:46:50 localhost podman[293229]: 2026-02-23 09:46:50.429295807 +0000 UTC m=+0.184466346 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, vcs-type=git, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph) Feb 23 04:46:50 localhost ceph-mgr[285904]: [cephadm INFO cherrypy.error] [23/Feb/2026:09:46:50] ENGINE Serving on https://172.18.0.107:7150 Feb 23 04:46:50 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : [23/Feb/2026:09:46:50] ENGINE Serving on https://172.18.0.107:7150 Feb 23 04:46:50 localhost ceph-mgr[285904]: [cephadm INFO cherrypy.error] [23/Feb/2026:09:46:50] ENGINE Client ('172.18.0.107', 34506) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) 
(_ssl.c:1147)') Feb 23 04:46:50 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : [23/Feb/2026:09:46:50] ENGINE Client ('172.18.0.107', 34506) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 23 04:46:50 localhost ceph-mgr[285904]: [cephadm INFO cherrypy.error] [23/Feb/2026:09:46:50] ENGINE Serving on http://172.18.0.107:8765 Feb 23 04:46:50 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : [23/Feb/2026:09:46:50] ENGINE Serving on http://172.18.0.107:8765 Feb 23 04:46:50 localhost ceph-mgr[285904]: [cephadm INFO cherrypy.error] [23/Feb/2026:09:46:50] ENGINE Bus STARTED Feb 23 04:46:50 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : [23/Feb/2026:09:46:50] ENGINE Bus STARTED Feb 23 04:46:50 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0) Feb 23 04:46:50 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0) Feb 23 04:46:50 localhost ceph-mon[287329]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0. 
Feb 23 04:46:50 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:50.973290) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 04:46:50 localhost ceph-mon[287329]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19 Feb 23 04:46:50 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840010973366, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1530, "num_deletes": 255, "total_data_size": 8169667, "memory_usage": 8638752, "flush_reason": "Manual Compaction"} Feb 23 04:46:50 localhost ceph-mon[287329]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840011002607, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 5001842, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13959, "largest_seqno": 15484, "table_properties": {"data_size": 4994876, "index_size": 3855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 17591, "raw_average_key_size": 21, "raw_value_size": 4979849, "raw_average_value_size": 6147, "num_data_blocks": 160, "num_entries": 810, "num_filter_entries": 810, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771839985, "oldest_key_time": 1771839985, "file_creation_time": 1771840010, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f63238e2-844d-4c49-b660-105bb635e407", "db_session_id": "YG0VANVTEI8CVHQGQH5D", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}} Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 29707 microseconds, and 8854 cpu microseconds. Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:51.003009) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 5001842 bytes OK Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:51.003136) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:51.005071) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:51.005093) EVENT_LOG_v1 {"time_micros": 1771840011005087, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:51.005118) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 8161654, prev total WAL file size 
8161654, number of live WAL files 2. Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:51.007495) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303231' seq:72057594037927935, type:22 .. '6B760031323734' seq:0, type:0; will stop at (end) Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(4884KB)], [18(12MB)] Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840011007535, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18024997, "oldest_snapshot_seqno": -1} Feb 23 04:46:51 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:46:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:46:51 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:46:51 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:51 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: 
[db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 10290 keys, 16994597 bytes, temperature: kUnknown Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840011152389, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 16994597, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16934585, "index_size": 33223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25733, "raw_key_size": 275303, "raw_average_key_size": 26, "raw_value_size": 16757019, "raw_average_value_size": 1628, "num_data_blocks": 1263, "num_entries": 10290, "num_filter_entries": 10290, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771839874, "oldest_key_time": 0, "file_creation_time": 1771840011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f63238e2-844d-4c49-b660-105bb635e407", "db_session_id": "YG0VANVTEI8CVHQGQH5D", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}} Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:51.152798) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 16994597 bytes Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:51.155862) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.3 rd, 117.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.8, 12.4 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(7.0) write-amplify(3.4) OK, records in: 10812, records dropped: 522 output_compression: NoCompression Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:51.155896) EVENT_LOG_v1 {"time_micros": 1771840011155882, "job": 8, "event": "compaction_finished", "compaction_time_micros": 145026, "compaction_time_cpu_micros": 48316, "output_level": 6, "num_output_files": 1, "total_output_size": 16994597, "num_input_records": 10812, "num_output_records": 10290, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840011156784, "job": 8, "event": "table_file_deletion", "file_number": 20} Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840011158931, "job": 
8, "event": "table_file_deletion", "file_number": 18} Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:51.007390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:51.159003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:51.159012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:51.159016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:51.159020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:46:51 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:46:51.159024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:46:51 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:46:51 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:46:51 localhost ceph-mgr[285904]: [devicehealth INFO root] Check health Feb 23 04:46:51 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:46:51 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:46:51 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:46:51 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:46:51 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:46:51 localhost systemd[1]: tmp-crun.maOb39.mount: Deactivated successfully. Feb 23 04:46:51 localhost podman[293418]: 2026-02-23 09:46:51.504723759 +0000 UTC m=+0.098303894 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260216, 
org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller) Feb 23 04:46:51 localhost podman[293418]: 2026-02-23 09:46:51.610839821 +0000 UTC m=+0.204419916 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true) Feb 23 04:46:51 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:46:52 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0) Feb 23 04:46:52 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:46:52 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:46:52 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:46:52 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:46:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:46:53 localhost ceph-mon[287329]: [23/Feb/2026:09:46:50] ENGINE Bus STARTING Feb 23 04:46:53 localhost ceph-mon[287329]: [23/Feb/2026:09:46:50] ENGINE Serving on https://172.18.0.107:7150 Feb 23 04:46:53 localhost ceph-mon[287329]: [23/Feb/2026:09:46:50] ENGINE Client ('172.18.0.107', 34506) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 23 04:46:53 localhost ceph-mon[287329]: [23/Feb/2026:09:46:50] ENGINE Serving on http://172.18.0.107:8765 Feb 23 04:46:53 localhost ceph-mon[287329]: [23/Feb/2026:09:46:50] ENGINE Bus STARTED Feb 23 04:46:53 localhost ceph-mon[287329]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Feb 23 04:46:53 localhost ceph-mon[287329]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed 
by cephadm) Feb 23 04:46:53 localhost ceph-mon[287329]: Cluster is now healthy Feb 23 04:46:53 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:53 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:53 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:53 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:53 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:53 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:53 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:53 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0) Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005626460", "name": "osd_memory_target"} v 0) Feb 23 04:46:53 localhost ceph-mon[287329]: log_channel(audit) log [INF] : 
from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd/host:np0005626460", "name": "osd_memory_target"} : dispatch Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Feb 23 04:46:53 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Feb 23 04:46:53 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Feb 23 04:46:53 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005626461", "name": "osd_memory_target"} v 0) Feb 23 04:46:53 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd/host:np0005626461", "name": "osd_memory_target"} : dispatch Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": 
"osd_memory_target"} v 0) Feb 23 04:46:53 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Feb 23 04:46:53 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Feb 23 04:46:53 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 23 04:46:53 localhost ceph-mgr[285904]: [cephadm INFO root] Adjusting osd_memory_target on np0005626465.localdomain to 836.6M Feb 23 04:46:53 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005626465.localdomain to 836.6M Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 23 04:46:53 localhost ceph-mgr[285904]: [cephadm INFO root] Adjusting osd_memory_target on np0005626466.localdomain to 836.6M Feb 23 04:46:53 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005626466.localdomain to 836.6M Feb 23 04:46:53 localhost ceph-mgr[285904]: [cephadm INFO root] Adjusting osd_memory_target on np0005626463.localdomain to 836.6M Feb 23 04:46:53 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Adjusting 
osd_memory_target on np0005626463.localdomain to 836.6M Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 23 04:46:53 localhost ceph-mgr[285904]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005626465.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 04:46:53 localhost ceph-mgr[285904]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005626465.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 04:46:53 localhost ceph-mgr[285904]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005626466.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 23 04:46:53 localhost ceph-mgr[285904]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005626466.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 23 04:46:53 localhost ceph-mgr[285904]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005626463.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 04:46:53 localhost ceph-mgr[285904]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005626463.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:53 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} 
: dispatch Feb 23 04:46:53 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 04:46:53 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:46:53 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626460.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:53 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626460.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:53 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626461.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:53 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626461.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:53 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:53 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:53 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:53 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:53 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:53 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:53 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mgr.np0005626460.fyrady 172.18.0.104:0/747321124; not ready for session (expect reconnect) Feb 23 04:46:54 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating 
np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:54 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:54 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626460.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:54 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626460.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:54 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:46:54 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:54 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:54 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:54 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:54 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:54 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:54 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mgr metadata", 
"who": "np0005626460.fyrady", "id": "np0005626460.fyrady"} v 0) Feb 23 04:46:54 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mgr metadata", "who": "np0005626460.fyrady", "id": "np0005626460.fyrady"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd/host:np0005626460", "name": "osd_memory_target"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd/host:np0005626460", "name": "osd_memory_target"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' 
cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd/host:np0005626461", "name": "osd_memory_target"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd/host:np0005626461", "name": "osd_memory_target"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": 
"osd.5", "name": "osd_memory_target"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 23 04:46:54 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:46:54 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626460.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:54 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626460.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:54 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626466.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:54 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626466.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:54 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626465.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:54 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626465.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:54 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626463.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:54 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626463.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:54 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626461.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:54 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating 
np0005626461.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:46:55 localhost ceph-mon[287329]: Adjusting osd_memory_target on np0005626465.localdomain to 836.6M Feb 23 04:46:55 localhost ceph-mon[287329]: Adjusting osd_memory_target on np0005626466.localdomain to 836.6M Feb 23 04:46:55 localhost ceph-mon[287329]: Adjusting osd_memory_target on np0005626463.localdomain to 836.6M Feb 23 04:46:55 localhost ceph-mon[287329]: Unable to set osd_memory_target on np0005626465.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 04:46:55 localhost ceph-mon[287329]: Unable to set osd_memory_target on np0005626466.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 23 04:46:55 localhost ceph-mon[287329]: Unable to set osd_memory_target on np0005626463.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 04:46:55 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:55 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:55 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:55 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:55 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:46:55 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:55 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:55 localhost ceph-mon[287329]: Updating 
np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:55 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:55 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:46:55 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626460.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:55 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626460.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:55 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:55 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:55 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:55 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:55 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:55 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:55 localhost ceph-mgr[285904]: [cephadm INFO 
cephadm.serve] Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:55 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:55 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0) Feb 23 04:46:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0) Feb 23 04:46:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:46:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.090 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:46:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:46:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 
23 04:46:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:46:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:46:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:46:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:46:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:46:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:46:56 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:56 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:56 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:56 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:56 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:46:56 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:56 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:56 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:56 localhost ceph-mon[287329]: 
from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:56 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:56 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:56 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 04:46:56 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev 82973125-2e54-42eb-b11e-24ab8de99eff (Updating node-proxy deployment (+5 -> 5)) Feb 23 04:46:56 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev 82973125-2e54-42eb-b11e-24ab8de99eff (Updating node-proxy deployment (+5 -> 5)) Feb 23 04:46:56 localhost ceph-mgr[285904]: [progress INFO root] Completed event 82973125-2e54-42eb-b11e-24ab8de99eff (Updating node-proxy deployment (+5 -> 5)) in 0 seconds Feb 23 04:46:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 04:46:56 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 04:46:56 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005626460 (monmap changed)... Feb 23 04:46:56 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005626460 (monmap changed)... 
Feb 23 04:46:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 23 04:46:56 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:46:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 23 04:46:56 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 23 04:46:56 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:56 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:56 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005626460 on np0005626460.localdomain Feb 23 04:46:56 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005626460 on np0005626460.localdomain Feb 23 04:46:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s Feb 23 04:46:57 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:57 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:57 localhost ceph-mon[287329]: Updating 
np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:57 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:57 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:46:57 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:57 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:57 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:57 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:57 localhost ceph-mon[287329]: Reconfiguring mon.np0005626460 (monmap changed)... Feb 23 04:46:57 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:46:57 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626460 on np0005626460.localdomain Feb 23 04:46:57 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0) Feb 23 04:46:57 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0) Feb 23 04:46:57 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005626461 (monmap changed)... Feb 23 04:46:57 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005626461 (monmap changed)... 
Feb 23 04:46:57 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 23 04:46:57 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:46:57 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 23 04:46:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 23 04:46:57 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:57 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:57 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005626461 on np0005626461.localdomain Feb 23 04:46:57 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005626461 on np0005626461.localdomain Feb 23 04:46:58 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:46:58 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:46:58 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005626463 (monmap changed)... 
Feb 23 04:46:58 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005626463 (monmap changed)... Feb 23 04:46:58 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 23 04:46:58 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:46:58 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:58 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:58 localhost ceph-mon[287329]: Reconfiguring mon.np0005626461 (monmap changed)... Feb 23 04:46:58 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:46:58 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626461 on np0005626461.localdomain Feb 23 04:46:58 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:58 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 23 04:46:58 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 23 04:46:58 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:46:58 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:46:58 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon 
mon.np0005626463 on np0005626463.localdomain Feb 23 04:46:58 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005626463 on np0005626463.localdomain Feb 23 04:46:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s Feb 23 04:46:59 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:46:59 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 04:46:59 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 04:46:59 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:46:59 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:46:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:46:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 04:47:00 localhost podman[294186]: 2026-02-23 09:47:00.00346978 +0000 UTC m=+0.073549307 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, 
org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:47:00 localhost podman[294186]: 2026-02-23 09:47:00.011835176 +0000 UTC m=+0.081914713 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 23 04:47:00 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:47:00 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:47:00 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:47:00 localhost systemd[1]: tmp-crun.QYYMer.mount: Deactivated successfully. Feb 23 04:47:00 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:00 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:00 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 04:47:00 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:47:00 localhost podman[294185]: 2026-02-23 09:47:00.103597199 +0000 UTC m=+0.173782380 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 23 04:47:00 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 04:47:00 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev 51dcb99c-3b7d-40af-8976-73c6abd996b8 (Updating node-proxy deployment (+5 -> 5)) Feb 23 04:47:00 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev 51dcb99c-3b7d-40af-8976-73c6abd996b8 (Updating node-proxy deployment (+5 -> 5)) Feb 23 04:47:00 localhost ceph-mgr[285904]: [progress INFO root] Completed event 51dcb99c-3b7d-40af-8976-73c6abd996b8 (Updating node-proxy 
deployment (+5 -> 5)) in 0 seconds Feb 23 04:47:00 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 04:47:00 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 04:47:00 localhost podman[294185]: 2026-02-23 09:47:00.136982919 +0000 UTC m=+0.207168130 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent) Feb 23 04:47:00 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:47:00 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.44159 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Feb 23 04:47:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Feb 23 04:47:01 localhost ceph-mon[287329]: Reconfiguring mon.np0005626463 (monmap changed)... 
Feb 23 04:47:01 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626463 on np0005626463.localdomain Feb 23 04:47:01 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:01 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:01 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:01 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:47:01 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:01 localhost openstack_network_exporter[243519]: ERROR 09:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:47:01 localhost openstack_network_exporter[243519]: Feb 23 04:47:01 localhost openstack_network_exporter[243519]: ERROR 09:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:47:01 localhost openstack_network_exporter[243519]: Feb 23 04:47:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Feb 23 04:47:03 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.44169 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005626460", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Feb 23 04:47:04 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:47:04 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1830357104' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:47:04 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:47:04 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1830357104' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:47:04 localhost ceph-mon[287329]: mon.np0005626465@3(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:47:04 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.26961 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005626460"], "force": true, "target": ["mon-mgr", ""]}]: dispatch Feb 23 04:47:04 localhost ceph-mgr[285904]: [cephadm INFO root] Remove daemons mon.np0005626460 Feb 23 04:47:04 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005626460 Feb 23 04:47:04 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "quorum_status"} v 0) Feb 23 04:47:04 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "quorum_status"} : dispatch Feb 23 04:47:04 localhost ceph-mgr[285904]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005626460: new quorum should be ['np0005626461', 'np0005626466', 'np0005626465', 'np0005626463'] (from ['np0005626461', 'np0005626466', 'np0005626465', 'np0005626463']) Feb 23 04:47:04 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005626460: new quorum should be ['np0005626461', 'np0005626466', 'np0005626465', 'np0005626463'] (from ['np0005626461', 'np0005626466', 'np0005626465', 'np0005626463']) 
Feb 23 04:47:04 localhost ceph-mgr[285904]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005626460 from monmap... Feb 23 04:47:04 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Removing monitor np0005626460 from monmap... Feb 23 04:47:04 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e9 handle_command mon_command({"prefix": "mon rm", "name": "np0005626460"} v 0) Feb 23 04:47:04 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon rm", "name": "np0005626460"} : dispatch Feb 23 04:47:04 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005626460 from np0005626460.localdomain -- ports [] Feb 23 04:47:04 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005626460 from np0005626460.localdomain -- ports [] Feb 23 04:47:04 localhost ceph-mon[287329]: mon.np0005626465@3(peon) e10 my rank is now 2 (was 3) Feb 23 04:47:04 localhost ceph-mgr[285904]: client.34313 ms_handle_reset on v2:172.18.0.107:3300/0 Feb 23 04:47:04 localhost ceph-mgr[285904]: client.34308 ms_handle_reset on v2:172.18.0.107:3300/0 Feb 23 04:47:04 localhost ceph-mgr[285904]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0 Feb 23 04:47:04 localhost ceph-mgr[285904]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0 Feb 23 04:47:04 localhost ceph-mon[287329]: mon.np0005626465@2(probing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626461"} v 0) Feb 23 04:47:04 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon metadata", "id": "np0005626461"} : dispatch Feb 23 04:47:04 localhost ceph-mon[287329]: mon.np0005626465@2(probing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626463"} v 0) Feb 23 04:47:04 localhost ceph-mon[287329]: log_channel(audit) log 
[DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon metadata", "id": "np0005626463"} : dispatch Feb 23 04:47:04 localhost ceph-mon[287329]: mon.np0005626465@2(probing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626465"} v 0) Feb 23 04:47:04 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon metadata", "id": "np0005626465"} : dispatch Feb 23 04:47:04 localhost ceph-mon[287329]: mon.np0005626465@2(probing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626466"} v 0) Feb 23 04:47:04 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon metadata", "id": "np0005626466"} : dispatch Feb 23 04:47:04 localhost ceph-mon[287329]: mon.np0005626465@2(probing) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:04 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:04 localhost ceph-mds[284726]: --2- [v2:172.18.0.107:6808/2939113664,v1:172.18.0.107:6809/2939113664] >> [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] conn(0x55568dec4400 0x55568ce31180 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request get_initial_auth_request returned -2 Feb 23 04:47:04 localhost ceph-mon[287329]: log_channel(cluster) log [INF] : mon.np0005626465 calling monitor election Feb 23 04:47:04 localhost ceph-mon[287329]: paxos.2).electionLogic(42) init, last seen epoch 42 Feb 23 04:47:04 localhost ceph-mgr[285904]: --2- 172.18.0.107:0/3660319406 >> [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] conn(0x55efa620f800 0x55efa621a580 unknown :-1 s=AUTH_CONNECTING 
pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request get_initial_auth_request returned -2 Feb 23 04:47:04 localhost ceph-mgr[285904]: client.34308 ms_handle_reset on v2:172.18.0.103:3300/0 Feb 23 04:47:04 localhost ceph-osd[31709]: --2- [v2:172.18.0.107:6800/2727625058,v1:172.18.0.107:6801/2727625058] >> [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] conn(0x559081170400 0x559083435b80 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request get_initial_auth_request returned -2 Feb 23 04:47:04 localhost ceph-mon[287329]: mon.np0005626465@2(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:47:04 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 04:47:04 localhost ceph-mon[287329]: mon.np0005626465@2(electing) e10 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 04:47:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 04:47:04 localhost podman[294240]: 2026-02-23 09:47:04.99727052 +0000 UTC m=+0.076116326 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:47:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Feb 23 04:47:05 localhost podman[294240]: 2026-02-23 09:47:05.032893719 +0000 UTC m=+0.111739525 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 04:47:05 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:47:06 localhost ceph-mon[287329]: mon.np0005626465@2(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:47:06 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:47:06 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 04:47:06 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:47:06 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626460.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:06 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626460.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:06 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626461.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:06 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626461.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:06 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:06 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating 
np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:06 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:06 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:06 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:06 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:06 localhost ceph-mon[287329]: Remove daemons mon.np0005626460 Feb 23 04:47:06 localhost ceph-mon[287329]: Safe to remove mon.np0005626460: new quorum should be ['np0005626461', 'np0005626466', 'np0005626465', 'np0005626463'] (from ['np0005626461', 'np0005626466', 'np0005626465', 'np0005626463']) Feb 23 04:47:06 localhost ceph-mon[287329]: Removing monitor np0005626460 from monmap... Feb 23 04:47:06 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon rm", "name": "np0005626460"} : dispatch Feb 23 04:47:06 localhost ceph-mon[287329]: Removing daemon mon.np0005626460 from np0005626460.localdomain -- ports [] Feb 23 04:47:06 localhost ceph-mon[287329]: mon.np0005626463 calling monitor election Feb 23 04:47:06 localhost ceph-mon[287329]: mon.np0005626466 calling monitor election Feb 23 04:47:06 localhost ceph-mon[287329]: mon.np0005626465 calling monitor election Feb 23 04:47:06 localhost ceph-mon[287329]: mon.np0005626461 calling monitor election Feb 23 04:47:06 localhost ceph-mon[287329]: mon.np0005626461 is new leader, mons np0005626461,np0005626466,np0005626465,np0005626463 in quorum (ranks 0,1,2,3) Feb 23 04:47:06 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:47:06 localhost ceph-mon[287329]: overall 
HEALTH_OK Feb 23 04:47:06 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Feb 23 04:47:07 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:07 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:07 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:07 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:07 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626460.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:07 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626460.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:07 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:07 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:07 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:07 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating 
np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:07 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.44187 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005626460.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch Feb 23 04:47:07 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 23 04:47:07 localhost ceph-mgr[285904]: [cephadm INFO root] Removed label mon from host np0005626460.localdomain Feb 23 04:47:07 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Removed label mon from host np0005626460.localdomain Feb 23 04:47:07 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:07 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:07 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:07 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:07 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:07 localhost ceph-mon[287329]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:07 localhost ceph-mon[287329]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:07 localhost ceph-mon[287329]: Updating np0005626460.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:07 localhost ceph-mon[287329]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:07 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:07 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 
handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:47:07 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:47:07 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0) Feb 23 04:47:07 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0) Feb 23 04:47:07 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:47:08 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:47:08 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:47:08 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:47:08 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:47:08 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:47:08 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 04:47:08 localhost ceph-mgr[285904]: [progress INFO root] update: 
starting ev 53182181-df29-402e-9769-4fcdb9908023 (Updating node-proxy deployment (+5 -> 5)) Feb 23 04:47:08 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev 53182181-df29-402e-9769-4fcdb9908023 (Updating node-proxy deployment (+5 -> 5)) Feb 23 04:47:08 localhost ceph-mgr[285904]: [progress INFO root] Completed event 53182181-df29-402e-9769-4fcdb9908023 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds Feb 23 04:47:08 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 04:47:08 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 04:47:08 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005626460 (monmap changed)... Feb 23 04:47:08 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005626460 (monmap changed)... 
Feb 23 04:47:08 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626460", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 23 04:47:08 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626460", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:47:08 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:08 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:08 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005626460 on np0005626460.localdomain Feb 23 04:47:08 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005626460 on np0005626460.localdomain Feb 23 04:47:08 localhost ceph-mon[287329]: Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:08 localhost ceph-mon[287329]: Removed label mon from host np0005626460.localdomain Feb 23 04:47:08 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:08 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:08 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:08 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:08 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:08 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 
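The audit-channel entries above all share one shape: `from='<who> <addr>' entity='<entity>' cmd={...json...} : dispatch`. A minimal sketch of extracting those fields, assuming only that fixed shape (the regex, the `parse_audit` name, and the field names are illustrative, not Ceph code; the sample line is a trimmed copy of an entry above):

```python
import re
import json

# Illustrative assumption: audit lines follow
#   from='<who>' entity='<entity>' [cmd={...} : <outcome>]
AUDIT_RE = re.compile(
    r"from='(?P<who>[^']*)'\s+entity='(?P<entity>[^']*)'"
    r"(?:\s+cmd=(?P<cmd>\{.*\})\s*:\s*(?P<outcome>\w+))?"
)

def parse_audit(line):
    """Return a dict of audit fields, or None if the line does not match."""
    m = AUDIT_RE.search(line)
    return m.groupdict() if m else None

# Trimmed sample taken from the log above.
sample = ("from='mgr.26597 172.18.0.107:0/4195960694' "
          "entity='mgr.np0005626465.hlpkwo' "
          'cmd={"prefix": "auth get-or-create", '
          '"entity": "client.crash.np0005626460"} : dispatch')
fields = parse_audit(sample)
cmd = json.loads(fields["cmd"])  # the cmd payload is itself JSON
```

Lines that carry only `from=` and `entity=` (as many above do) still match; their `cmd` and `outcome` groups come back as `None`.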
04:47:08 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:08 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:08 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:08 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:08 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:08 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626460", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:47:08 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626460", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:47:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:09 localhost ceph-mon[287329]: mon.np0005626465@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:47:09 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.26996 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005626460.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch Feb 23 04:47:09 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 23 04:47:09 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0) Feb 23 04:47:09 localhost ceph-mgr[285904]: [cephadm INFO root] Removed label mgr 
from host np0005626460.localdomain Feb 23 04:47:09 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Removed label mgr from host np0005626460.localdomain Feb 23 04:47:09 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0) Feb 23 04:47:09 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005626460.fyrady (monmap changed)... Feb 23 04:47:09 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005626460.fyrady (monmap changed)... Feb 23 04:47:09 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 23 04:47:09 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:47:09 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 23 04:47:09 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mgr services"} : dispatch Feb 23 04:47:09 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:09 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:09 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005626460.fyrady on np0005626460.localdomain Feb 23 04:47:09 
localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005626460.fyrady on np0005626460.localdomain Feb 23 04:47:09 localhost ceph-mon[287329]: Reconfiguring crash.np0005626460 (monmap changed)... Feb 23 04:47:09 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626460 on np0005626460.localdomain Feb 23 04:47:09 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:09 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:09 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:09 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:47:09 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626460.fyrady", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:47:10 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain.devices.0}] v 0) Feb 23 04:47:10 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626460.localdomain}] v 0) Feb 23 04:47:10 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005626461 (monmap changed)... Feb 23 04:47:10 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005626461 (monmap changed)... 
Feb 23 04:47:10 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 23 04:47:10 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:47:10 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 23 04:47:10 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 23 04:47:10 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:10 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:10 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005626461 on np0005626461.localdomain Feb 23 04:47:10 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005626461 on np0005626461.localdomain Feb 23 04:47:10 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.44207 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005626460.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch Feb 23 04:47:10 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 23 04:47:10 localhost ceph-mgr[285904]: [cephadm INFO root] Removed label _admin from host np0005626460.localdomain Feb 23 04:47:10 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : 
Removed label _admin from host np0005626460.localdomain Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0. Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:10.950970) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22 Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840030951030, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1163, "num_deletes": 260, "total_data_size": 2572107, "memory_usage": 2632544, "flush_reason": "Manual Compaction"} Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840030963658, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 1548054, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15489, "largest_seqno": 16647, "table_properties": {"data_size": 1542446, "index_size": 2823, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14455, "raw_average_key_size": 21, "raw_value_size": 1530230, "raw_average_value_size": 2294, "num_data_blocks": 119, "num_entries": 667, "num_filter_entries": 667, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840011, "oldest_key_time": 1771840011, "file_creation_time": 1771840030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f63238e2-844d-4c49-b660-105bb635e407", "db_session_id": "YG0VANVTEI8CVHQGQH5D", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}} Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 12734 microseconds, and 4402 cpu microseconds. Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:10.963704) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 1548054 bytes OK Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:10.963728) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:10.965888) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:10.965908) EVENT_LOG_v1 {"time_micros": 1771840030965901, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:10.965929) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max 
bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2565902, prev total WAL file size 2565902, number of live WAL files 2. Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:10.966769) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353133' seq:72057594037927935, type:22 .. '6C6F676D0033373639' seq:0, type:0; will stop at (end) Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(1511KB)], [21(16MB)] Feb 23 04:47:10 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840030966839, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 18542651, "oldest_snapshot_seqno": -1} Feb 23 04:47:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:11 localhost ceph-mon[287329]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 10402 keys, 18390883 bytes, temperature: kUnknown Feb 23 04:47:11 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840031056069, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 18390883, 
"file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18328406, "index_size": 35368, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26053, "raw_key_size": 279469, "raw_average_key_size": 26, "raw_value_size": 18147232, "raw_average_value_size": 1744, "num_data_blocks": 1353, "num_entries": 10402, "num_filter_entries": 10402, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771839874, "oldest_key_time": 0, "file_creation_time": 1771840030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f63238e2-844d-4c49-b660-105bb635e407", "db_session_id": "YG0VANVTEI8CVHQGQH5D", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} Feb 23 04:47:11 localhost ceph-mon[287329]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 04:47:11 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:11.056527) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 18390883 bytes Feb 23 04:47:11 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:11.058727) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.4 rd, 205.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 16.2 +0.0 blob) out(17.5 +0.0 blob), read-write-amplify(23.9) write-amplify(11.9) OK, records in: 10957, records dropped: 555 output_compression: NoCompression Feb 23 04:47:11 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:11.058760) EVENT_LOG_v1 {"time_micros": 1771840031058745, "job": 10, "event": "compaction_finished", "compaction_time_micros": 89410, "compaction_time_cpu_micros": 48396, "output_level": 6, "num_output_files": 1, "total_output_size": 18390883, "num_input_records": 10957, "num_output_records": 10402, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 04:47:11 localhost ceph-mon[287329]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:47:11 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840031059273, "job": 10, "event": "table_file_deletion", "file_number": 23} Feb 23 04:47:11 localhost ceph-mon[287329]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:47:11 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840031061822, 
"job": 10, "event": "table_file_deletion", "file_number": 21} Feb 23 04:47:11 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:10.966676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:47:11 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:11.061997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:47:11 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:11.062006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:47:11 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:11.062009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:47:11 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:11.062012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:47:11 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:11.062016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:47:11 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:47:11 localhost ceph-mon[287329]: Removed label mgr from host np0005626460.localdomain Feb 23 04:47:11 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626460.fyrady (monmap changed)... Feb 23 04:47:11 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626460.fyrady on np0005626460.localdomain Feb 23 04:47:11 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:11 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:11 localhost ceph-mon[287329]: Reconfiguring mon.np0005626461 (monmap changed)... 
Feb 23 04:47:11 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:47:11 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626461 on np0005626461.localdomain Feb 23 04:47:11 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:11 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:47:11 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005626461.lrfquh (monmap changed)... Feb 23 04:47:11 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005626461.lrfquh (monmap changed)... Feb 23 04:47:11 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 23 04:47:11 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:47:11 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 23 04:47:11 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mgr services"} : dispatch Feb 23 04:47:11 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:11 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' 
cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:11 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005626461.lrfquh on np0005626461.localdomain Feb 23 04:47:11 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005626461.lrfquh on np0005626461.localdomain Feb 23 04:47:11 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 04:47:11 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 04:47:11 localhost nova_compute[280321]: 2026-02-23 09:47:11.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:47:11 localhost nova_compute[280321]: 2026-02-23 09:47:11.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:47:11 localhost nova_compute[280321]: 2026-02-23 09:47:11.893 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:47:11 localhost nova_compute[280321]: 2026-02-23 09:47:11.905 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:47:11 localhost nova_compute[280321]: 2026-02-23 09:47:11.906 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:47:11 localhost nova_compute[280321]: 2026-02-23 09:47:11.924 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:47:11 localhost nova_compute[280321]: 2026-02-23 09:47:11.925 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:47:11 localhost nova_compute[280321]: 2026-02-23 09:47:11.925 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:47:11 localhost nova_compute[280321]: 2026-02-23 09:47:11.925 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:47:11 localhost nova_compute[280321]: 2026-02-23 
09:47:11.926 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:47:12 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:47:12 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:47:12 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005626461 (monmap changed)... Feb 23 04:47:12 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005626461 (monmap changed)... Feb 23 04:47:12 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 23 04:47:12 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:47:12 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:12 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:12 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005626461 on np0005626461.localdomain Feb 23 04:47:12 localhost ceph-mgr[285904]: 
log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005626461 on np0005626461.localdomain Feb 23 04:47:12 localhost ceph-mon[287329]: Removed label _admin from host np0005626460.localdomain Feb 23 04:47:12 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:12 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:12 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:47:12 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:47:12 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:12 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:12 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:12 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:47:12 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:47:12 localhost nova_compute[280321]: 2026-02-23 09:47:12.389 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:47:12 localhost nova_compute[280321]: 2026-02-23 09:47:12.589 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:47:12 localhost nova_compute[280321]: 2026-02-23 09:47:12.592 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=12343MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", 
"numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:47:12 localhost nova_compute[280321]: 2026-02-23 09:47:12.593 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:47:12 localhost nova_compute[280321]: 2026-02-23 09:47:12.593 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:47:12 localhost nova_compute[280321]: 2026-02-23 09:47:12.644 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:47:12 localhost nova_compute[280321]: 2026-02-23 09:47:12.644 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: 
name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:47:12 localhost nova_compute[280321]: 2026-02-23 09:47:12.661 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:47:12 localhost podman[241086]: time="2026-02-23T09:47:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:47:12 localhost podman[241086]: @ - - [23/Feb/2026:09:47:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 04:47:12 localhost podman[241086]: @ - - [23/Feb/2026:09:47:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17789 "" "Go-http-client/1.1" Feb 23 04:47:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:13 localhost nova_compute[280321]: 2026-02-23 09:47:13.076 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:47:13 localhost nova_compute[280321]: 2026-02-23 09:47:13.082 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:47:13 localhost nova_compute[280321]: 2026-02-23 09:47:13.100 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:47:13 localhost nova_compute[280321]: 2026-02-23 09:47:13.102 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:47:13 localhost nova_compute[280321]: 2026-02-23 09:47:13.103 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:47:13 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain.devices.0}] v 0) Feb 23 04:47:13 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626461.localdomain}] v 0) Feb 23 04:47:13 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring 
crash.np0005626463 (monmap changed)... Feb 23 04:47:13 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005626463 (monmap changed)... Feb 23 04:47:13 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 23 04:47:13 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:47:13 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:13 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:13 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005626463 on np0005626463.localdomain Feb 23 04:47:13 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005626463 on np0005626463.localdomain Feb 23 04:47:13 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626461.lrfquh (monmap changed)... Feb 23 04:47:13 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626461.lrfquh on np0005626461.localdomain Feb 23 04:47:13 localhost ceph-mon[287329]: Reconfiguring crash.np0005626461 (monmap changed)... 
Feb 23 04:47:13 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626461 on np0005626461.localdomain Feb 23 04:47:13 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:13 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:13 localhost ceph-mon[287329]: Reconfiguring crash.np0005626463 (monmap changed)... Feb 23 04:47:13 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:47:13 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:47:13 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626463 on np0005626463.localdomain Feb 23 04:47:14 localhost ceph-mon[287329]: mon.np0005626465@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:47:14 localhost nova_compute[280321]: 2026-02-23 09:47:14.090 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:47:14 localhost nova_compute[280321]: 2026-02-23 09:47:14.112 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:47:14 localhost nova_compute[280321]: 2026-02-23 09:47:14.112 280325 DEBUG oslo_service.periodic_task [None 
req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:47:14 localhost nova_compute[280321]: 2026-02-23 09:47:14.113 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:47:14 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:47:14 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:47:14 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)... Feb 23 04:47:14 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)... 
Feb 23 04:47:14 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) Feb 23 04:47:14 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 23 04:47:14 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:14 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:14 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005626463.localdomain Feb 23 04:47:14 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005626463.localdomain Feb 23 04:47:14 localhost nova_compute[280321]: 2026-02-23 09:47:14.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:47:14 localhost nova_compute[280321]: 2026-02-23 09:47:14.893 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:47:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:15 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:15 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 
04:47:15 localhost ceph-mon[287329]: Reconfiguring osd.2 (monmap changed)... Feb 23 04:47:15 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 23 04:47:15 localhost ceph-mon[287329]: Reconfiguring daemon osd.2 on np0005626463.localdomain Feb 23 04:47:15 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:47:15 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:47:15 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)... Feb 23 04:47:15 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)... Feb 23 04:47:15 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0) Feb 23 04:47:15 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 23 04:47:15 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:15 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:15 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005626463.localdomain Feb 23 04:47:15 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005626463.localdomain Feb 23 04:47:15 localhost nova_compute[280321]: 2026-02-23 09:47:15.891 280325 
DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:47:15 localhost nova_compute[280321]: 2026-02-23 09:47:15.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:47:15 localhost nova_compute[280321]: 2026-02-23 09:47:15.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:47:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:47:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:47:16 localhost podman[294644]: 2026-02-23 09:47:16.001526478 +0000 UTC m=+0.079633073 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:47:16 localhost podman[294644]: 2026-02-23 09:47:16.014906167 +0000 UTC m=+0.093012772 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', 
'--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:47:16 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:47:16 localhost podman[294645]: 2026-02-23 09:47:16.067656539 +0000 UTC m=+0.140701509 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, release=1770267347, container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, vcs-type=git, io.buildah.version=1.33.7, distribution-scope=public) Feb 23 04:47:16 localhost podman[294645]: 2026-02-23 09:47:16.110939441 +0000 UTC m=+0.183984441 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, architecture=x86_64, version=9.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2026-02-05T04:57:10Z, release=1770267347) Feb 23 04:47:16 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 04:47:16 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:47:16 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:47:16 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:16 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:16 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 23 04:47:16 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:16 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005626463.qcthuc (monmap changed)... Feb 23 04:47:16 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005626463.qcthuc (monmap changed)... 
Feb 23 04:47:16 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 23 04:47:16 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:47:16 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:16 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:16 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005626463.qcthuc on np0005626463.localdomain Feb 23 04:47:16 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005626463.qcthuc on np0005626463.localdomain Feb 23 04:47:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:17 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:47:17 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:47:17 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005626463.wtksup (monmap changed)... 
Feb 23 04:47:17 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005626463.wtksup (monmap changed)... Feb 23 04:47:17 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 23 04:47:17 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:47:17 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 23 04:47:17 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mgr services"} : dispatch Feb 23 04:47:17 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:17 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:17 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005626463.wtksup on np0005626463.localdomain Feb 23 04:47:17 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005626463.wtksup on np0005626463.localdomain Feb 23 04:47:17 localhost ceph-mon[287329]: Reconfiguring osd.5 (monmap changed)... 
Feb 23 04:47:17 localhost ceph-mon[287329]: Reconfiguring daemon osd.5 on np0005626463.localdomain Feb 23 04:47:17 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:17 localhost ceph-mon[287329]: Reconfiguring mds.mds.np0005626463.qcthuc (monmap changed)... Feb 23 04:47:17 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:47:17 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:47:17 localhost ceph-mon[287329]: Reconfiguring daemon mds.mds.np0005626463.qcthuc on np0005626463.localdomain Feb 23 04:47:17 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:17 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:17 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626463.wtksup (monmap changed)... 
Feb 23 04:47:17 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:47:17 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:47:17 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626463.wtksup on np0005626463.localdomain Feb 23 04:47:18 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:47:18 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:47:18 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005626463 (monmap changed)... Feb 23 04:47:18 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005626463 (monmap changed)... 
Feb 23 04:47:18 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb 23 04:47:18 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 23 04:47:18 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb 23 04:47:18 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 23 04:47:18 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:47:18 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:47:18 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005626463 on np0005626463.localdomain
Feb 23 04:47:18 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005626463 on np0005626463.localdomain
Feb 23 04:47:18 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:18 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:18 localhost ceph-mon[287329]: Reconfiguring mon.np0005626463 (monmap changed)...
Feb 23 04:47:18 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:47:18 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626463 on np0005626463.localdomain Feb 23 04:47:18 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 04:47:18 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 04:47:18 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005626465 (monmap changed)... Feb 23 04:47:18 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005626465 (monmap changed)... Feb 23 04:47:18 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 23 04:47:18 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:47:18 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:18 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:18 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005626465 on np0005626465.localdomain Feb 23 04:47:18 localhost ceph-mgr[285904]: 
log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005626465 on np0005626465.localdomain Feb 23 04:47:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:19 localhost ceph-mon[287329]: mon.np0005626465@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:47:19 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:47:19 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:47:19 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:47:19 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:47:19 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:47:19 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:47:19 localhost podman[294741]: Feb 23 04:47:19 localhost podman[294741]: 2026-02-23 09:47:19.611080644 +0000 UTC m=+0.073758705 container create f8181cdeec38d22633e70c3e525d9606c12f2a2bc20967db892479de36e72225 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_visvesvaraya, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, maintainer=Guillaume Abrioux , release=1770267347, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 23 04:47:19 localhost systemd[1]: Started libpod-conmon-f8181cdeec38d22633e70c3e525d9606c12f2a2bc20967db892479de36e72225.scope. Feb 23 04:47:19 localhost systemd[1]: Started libcrun container. Feb 23 04:47:19 localhost podman[294741]: 2026-02-23 09:47:19.579471547 +0000 UTC m=+0.042149608 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:47:19 localhost podman[294741]: 2026-02-23 09:47:19.686495918 +0000 UTC m=+0.149174009 container init f8181cdeec38d22633e70c3e525d9606c12f2a2bc20967db892479de36e72225 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_visvesvaraya, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.42.2, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-type=git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1770267347, GIT_CLEAN=True, io.openshift.expose-services=, version=7, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, ceph=True, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph) Feb 23 04:47:19 localhost systemd[1]: tmp-crun.7dfD37.mount: Deactivated successfully. Feb 23 04:47:19 localhost podman[294741]: 2026-02-23 09:47:19.700870107 +0000 UTC m=+0.163548168 container start f8181cdeec38d22633e70c3e525d9606c12f2a2bc20967db892479de36e72225 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_visvesvaraya, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, GIT_CLEAN=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, distribution-scope=public, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, RELEASE=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z) Feb 23 04:47:19 localhost romantic_visvesvaraya[294757]: 167 167 Feb 23 04:47:19 localhost podman[294741]: 2026-02-23 09:47:19.701379772 +0000 UTC m=+0.164057873 container attach 
f8181cdeec38d22633e70c3e525d9606c12f2a2bc20967db892479de36e72225 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_visvesvaraya, vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, name=rhceph, release=1770267347, io.buildah.version=1.42.2, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, com.redhat.component=rhceph-container) Feb 23 04:47:19 localhost systemd[1]: libpod-f8181cdeec38d22633e70c3e525d9606c12f2a2bc20967db892479de36e72225.scope: Deactivated successfully. 
Feb 23 04:47:19 localhost podman[294741]: 2026-02-23 09:47:19.704357743 +0000 UTC m=+0.167035834 container died f8181cdeec38d22633e70c3e525d9606c12f2a2bc20967db892479de36e72225 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_visvesvaraya, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, ceph=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.component=rhceph-container) Feb 23 04:47:19 localhost podman[294762]: 2026-02-23 09:47:19.803115419 +0000 UTC m=+0.091481034 container remove f8181cdeec38d22633e70c3e525d9606c12f2a2bc20967db892479de36e72225 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_visvesvaraya, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, release=1770267347, architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7) Feb 23 04:47:19 localhost systemd[1]: libpod-conmon-f8181cdeec38d22633e70c3e525d9606c12f2a2bc20967db892479de36e72225.scope: Deactivated successfully. Feb 23 04:47:19 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:47:19 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:47:19 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Feb 23 04:47:19 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... 
Feb 23 04:47:19 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb 23 04:47:19 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 23 04:47:19 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:47:19 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:47:19 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005626465.localdomain
Feb 23 04:47:19 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005626465.localdomain
Feb 23 04:47:19 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:19 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:19 localhost ceph-mon[287329]: Reconfiguring crash.np0005626465 (monmap changed)...
Feb 23 04:47:19 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:47:19 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:47:19 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626465 on np0005626465.localdomain Feb 23 04:47:19 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:19 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:19 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 23 04:47:20 localhost podman[294832]: Feb 23 04:47:20 localhost podman[294832]: 2026-02-23 09:47:20.443487692 +0000 UTC m=+0.055092384 container create 5a58534834ab2d0de5b8974192bf89a3cc9bde459059a7b40f0b28da56146b30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_yalow, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, distribution-scope=public, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., version=7, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, build-date=2026-02-09T10:25:24Z, release=1770267347, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, description=Red Hat Ceph Storage 7) Feb 23 04:47:20 localhost systemd[1]: Started libpod-conmon-5a58534834ab2d0de5b8974192bf89a3cc9bde459059a7b40f0b28da56146b30.scope. Feb 23 04:47:20 localhost systemd[1]: Started libcrun container. Feb 23 04:47:20 localhost podman[294832]: 2026-02-23 09:47:20.504808245 +0000 UTC m=+0.116412937 container init 5a58534834ab2d0de5b8974192bf89a3cc9bde459059a7b40f0b28da56146b30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_yalow, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, name=rhceph, release=1770267347, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph 
Storage 7) Feb 23 04:47:20 localhost podman[294832]: 2026-02-23 09:47:20.513780519 +0000 UTC m=+0.125385191 container start 5a58534834ab2d0de5b8974192bf89a3cc9bde459059a7b40f0b28da56146b30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_yalow, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, ceph=True, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347) Feb 23 04:47:20 localhost podman[294832]: 2026-02-23 09:47:20.514216832 +0000 UTC m=+0.125821514 container attach 5a58534834ab2d0de5b8974192bf89a3cc9bde459059a7b40f0b28da56146b30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_yalow, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, RELEASE=main, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, CEPH_POINT_RELEASE=) Feb 23 04:47:20 localhost epic_yalow[294847]: 167 167 Feb 23 04:47:20 localhost systemd[1]: libpod-5a58534834ab2d0de5b8974192bf89a3cc9bde459059a7b40f0b28da56146b30.scope: Deactivated successfully. 
Feb 23 04:47:20 localhost podman[294832]: 2026-02-23 09:47:20.517195393 +0000 UTC m=+0.128800095 container died 5a58534834ab2d0de5b8974192bf89a3cc9bde459059a7b40f0b28da56146b30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_yalow, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, maintainer=Guillaume Abrioux , name=rhceph, com.redhat.component=rhceph-container, architecture=x86_64, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:47:20 localhost podman[294832]: 2026-02-23 09:47:20.419120847 +0000 UTC m=+0.030725539 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:47:20 localhost podman[294852]: 2026-02-23 09:47:20.601940432 +0000 UTC m=+0.075062913 container remove 5a58534834ab2d0de5b8974192bf89a3cc9bde459059a7b40f0b28da56146b30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_yalow, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, 
url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, ceph=True, distribution-scope=public, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, vcs-type=git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, RELEASE=main, name=rhceph, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=) Feb 23 04:47:20 localhost systemd[1]: libpod-conmon-5a58534834ab2d0de5b8974192bf89a3cc9bde459059a7b40f0b28da56146b30.scope: Deactivated successfully. Feb 23 04:47:20 localhost systemd[1]: var-lib-containers-storage-overlay-1f52613f7ac509d37ad7ef2eb1bac1791f29ee7fc60454aac92447218ea5bdac-merged.mount: Deactivated successfully. Feb 23 04:47:20 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:47:20 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:47:20 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... Feb 23 04:47:20 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... 
Feb 23 04:47:20 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Feb 23 04:47:20 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 23 04:47:20 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:47:20 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:47:20 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005626465.localdomain
Feb 23 04:47:20 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005626465.localdomain
Feb 23 04:47:21 localhost ceph-mon[287329]: Reconfiguring osd.0 (monmap changed)...
Feb 23 04:47:21 localhost ceph-mon[287329]: Reconfiguring daemon osd.0 on np0005626465.localdomain Feb 23 04:47:21 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:21 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:21 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 23 04:47:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:21 localhost podman[294929]: Feb 23 04:47:21 localhost podman[294929]: 2026-02-23 09:47:21.380593388 +0000 UTC m=+0.073094934 container create 687b24828bc30c42352c0218a4b58daebbdea04119b590ba2c5f6a624abc8dbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_volhard, io.buildah.version=1.42.2, RELEASE=main, maintainer=Guillaume Abrioux , name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, version=7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, release=1770267347, description=Red Hat Ceph Storage 7, 
build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 23 04:47:21 localhost systemd[1]: Started libpod-conmon-687b24828bc30c42352c0218a4b58daebbdea04119b590ba2c5f6a624abc8dbe.scope. Feb 23 04:47:21 localhost systemd[1]: Started libcrun container. Feb 23 04:47:21 localhost podman[294929]: 2026-02-23 09:47:21.445863083 +0000 UTC m=+0.138364639 container init 687b24828bc30c42352c0218a4b58daebbdea04119b590ba2c5f6a624abc8dbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_volhard, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, release=1770267347, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, distribution-scope=public, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, com.redhat.component=rhceph-container) Feb 23 04:47:21 localhost podman[294929]: 2026-02-23 09:47:21.351990815 +0000 UTC m=+0.044492381 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:47:21 localhost podman[294929]: 2026-02-23 09:47:21.455712653 +0000 UTC m=+0.148214209 container start 
687b24828bc30c42352c0218a4b58daebbdea04119b590ba2c5f6a624abc8dbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_volhard, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_CLEAN=True, io.buildah.version=1.42.2, architecture=x86_64, release=1770267347, version=7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z) Feb 23 04:47:21 localhost podman[294929]: 2026-02-23 09:47:21.45592999 +0000 UTC m=+0.148431536 container attach 687b24828bc30c42352c0218a4b58daebbdea04119b590ba2c5f6a624abc8dbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_volhard, io.openshift.expose-services=, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , version=7, release=1770267347, build-date=2026-02-09T10:25:24Z, name=rhceph, architecture=x86_64, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, ceph=True) Feb 23 04:47:21 localhost nifty_volhard[294944]: 167 167 Feb 23 04:47:21 localhost systemd[1]: libpod-687b24828bc30c42352c0218a4b58daebbdea04119b590ba2c5f6a624abc8dbe.scope: Deactivated successfully. Feb 23 04:47:21 localhost podman[294929]: 2026-02-23 09:47:21.459612433 +0000 UTC m=+0.152113999 container died 687b24828bc30c42352c0218a4b58daebbdea04119b590ba2c5f6a624abc8dbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_volhard, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, maintainer=Guillaume Abrioux , GIT_BRANCH=main, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, 
io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph) Feb 23 04:47:21 localhost podman[294949]: 2026-02-23 09:47:21.556324137 +0000 UTC m=+0.083619646 container remove 687b24828bc30c42352c0218a4b58daebbdea04119b590ba2c5f6a624abc8dbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_volhard, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:47:21 localhost systemd[1]: libpod-conmon-687b24828bc30c42352c0218a4b58daebbdea04119b590ba2c5f6a624abc8dbe.scope: Deactivated successfully. 
Feb 23 04:47:21 localhost systemd[1]: var-lib-containers-storage-overlay-cc06c5db4c1ce1ae927a62fb7770f912dcadf971cf6f658788ac333ed280784d-merged.mount: Deactivated successfully. Feb 23 04:47:21 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:47:21 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:47:21 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005626465.drvnoy (monmap changed)... Feb 23 04:47:21 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005626465.drvnoy (monmap changed)... Feb 23 04:47:21 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 23 04:47:21 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:47:21 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:21 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:21 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005626465.drvnoy on np0005626465.localdomain Feb 23 04:47:21 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring 
daemon mds.mds.np0005626465.drvnoy on np0005626465.localdomain Feb 23 04:47:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:47:21 localhost podman[294991]: 2026-02-23 09:47:21.950028344 +0000 UTC m=+0.086383350 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0) Feb 23 04:47:21 localhost podman[294991]: 2026-02-23 09:47:21.98689695 +0000 UTC m=+0.123251956 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible) Feb 23 04:47:22 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:47:22 localhost ceph-mon[287329]: Reconfiguring osd.3 (monmap changed)... 
Feb 23 04:47:22 localhost ceph-mon[287329]: Reconfiguring daemon osd.3 on np0005626465.localdomain Feb 23 04:47:22 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:22 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:22 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:47:22 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:47:22 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.27040 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005626460.localdomain", "target": ["mon-mgr", ""]}]: dispatch Feb 23 04:47:22 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 23 04:47:22 localhost ceph-mgr[285904]: [cephadm INFO root] Added label _no_schedule to host np0005626460.localdomain Feb 23 04:47:22 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Added label _no_schedule to host np0005626460.localdomain Feb 23 04:47:22 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. 
Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:22.138478) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840042138516, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 723, "num_deletes": 251, "total_data_size": 1044496, "memory_usage": 1057824, "flush_reason": "Manual Compaction"} Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840042144648, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 614951, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16652, "largest_seqno": 17370, "table_properties": {"data_size": 611089, "index_size": 1589, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10138, "raw_average_key_size": 21, "raw_value_size": 603016, "raw_average_value_size": 1288, "num_data_blocks": 66, "num_entries": 468, "num_filter_entries": 468, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840030, "oldest_key_time": 1771840030, "file_creation_time": 1771840042, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f63238e2-844d-4c49-b660-105bb635e407", "db_session_id": "YG0VANVTEI8CVHQGQH5D", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 6247 microseconds, and 2590 cpu microseconds. Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:22.144727) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 614951 bytes OK Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:22.144747) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:22.147044) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:22.147065) EVENT_LOG_v1 {"time_micros": 1771840042147059, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:22.147085) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1040324, prev total WAL file 
size 1040324, number of live WAL files 2. Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:22.147897) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end) Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(600KB)], [24(17MB)] Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840042147978, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 19005834, "oldest_snapshot_seqno": -1} Feb 23 04:47:22 localhost ceph-mgr[285904]: [cephadm INFO root] Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005626460.localdomain Feb 23 04:47:22 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005626460.localdomain Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 10346 keys, 15653780 bytes, temperature: kUnknown Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840042226307, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 15653780, "file_checksum": "", "file_checksum_func_name": "Unknown", 
"smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15593192, "index_size": 33607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25925, "raw_key_size": 279211, "raw_average_key_size": 26, "raw_value_size": 15414502, "raw_average_value_size": 1489, "num_data_blocks": 1277, "num_entries": 10346, "num_filter_entries": 10346, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771839874, "oldest_key_time": 0, "file_creation_time": 1771840042, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f63238e2-844d-4c49-b660-105bb635e407", "db_session_id": "YG0VANVTEI8CVHQGQH5D", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:22.226637) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 15653780 bytes Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:22.228488) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 242.4 rd, 199.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 17.5 +0.0 blob) out(14.9 +0.0 blob), read-write-amplify(56.4) write-amplify(25.5) OK, records in: 10870, records dropped: 524 output_compression: NoCompression Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:22.228524) EVENT_LOG_v1 {"time_micros": 1771840042228509, "job": 12, "event": "compaction_finished", "compaction_time_micros": 78418, "compaction_time_cpu_micros": 43826, "output_level": 6, "num_output_files": 1, "total_output_size": 15653780, "num_input_records": 10870, "num_output_records": 10346, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840042228777, "job": 12, "event": "table_file_deletion", "file_number": 26} Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840042231321, 
"job": 12, "event": "table_file_deletion", "file_number": 24} Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:22.147621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:22.231392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:22.231397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:22.231399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:22.231400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:47:22 localhost ceph-mon[287329]: rocksdb: (Original Log Time 2026/02/23-09:47:22.231402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:47:22 localhost podman[295052]: Feb 23 04:47:22 localhost podman[295052]: 2026-02-23 09:47:22.43113915 +0000 UTC m=+0.072487674 container create 7d14534189c851ba2f0f7bdaeec3be510fefaa01cb58496f5571839f423dfe42 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_cerf, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., name=rhceph, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, distribution-scope=public, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux ) Feb 23 04:47:22 localhost systemd[1]: Started libpod-conmon-7d14534189c851ba2f0f7bdaeec3be510fefaa01cb58496f5571839f423dfe42.scope. Feb 23 04:47:22 localhost systemd[1]: Started libcrun container. Feb 23 04:47:22 localhost podman[295052]: 2026-02-23 09:47:22.401199976 +0000 UTC m=+0.042548500 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:47:22 localhost podman[295052]: 2026-02-23 09:47:22.505368488 +0000 UTC m=+0.146717012 container init 7d14534189c851ba2f0f7bdaeec3be510fefaa01cb58496f5571839f423dfe42 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_cerf, description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=rhceph-container, release=1770267347, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.k8s.display-name=Red Hat 
Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vendor=Red Hat, Inc.) Feb 23 04:47:22 localhost podman[295052]: 2026-02-23 09:47:22.520918213 +0000 UTC m=+0.162266727 container start 7d14534189c851ba2f0f7bdaeec3be510fefaa01cb58496f5571839f423dfe42 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_cerf, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, build-date=2026-02-09T10:25:24Z, ceph=True, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.expose-services=, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 23 04:47:22 localhost podman[295052]: 2026-02-23 09:47:22.521218683 +0000 UTC m=+0.162567207 container attach 7d14534189c851ba2f0f7bdaeec3be510fefaa01cb58496f5571839f423dfe42 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=brave_cerf, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, release=1770267347, distribution-scope=public, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, io.openshift.expose-services=, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 23 04:47:22 localhost brave_cerf[295067]: 167 167 Feb 23 04:47:22 localhost systemd[1]: libpod-7d14534189c851ba2f0f7bdaeec3be510fefaa01cb58496f5571839f423dfe42.scope: Deactivated successfully. 
Feb 23 04:47:22 localhost podman[295052]: 2026-02-23 09:47:22.525254575 +0000 UTC m=+0.166603119 container died 7d14534189c851ba2f0f7bdaeec3be510fefaa01cb58496f5571839f423dfe42 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_cerf, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, release=1770267347, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, maintainer=Guillaume Abrioux , architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7) Feb 23 04:47:22 localhost systemd[1]: var-lib-containers-storage-overlay-eda3fbc96604c0a9dbfa59da7acf0be2f94d7e62cc76bbc44cec4e839a9c0570-merged.mount: Deactivated successfully. 
Feb 23 04:47:22 localhost podman[295072]: 2026-02-23 09:47:22.624639981 +0000 UTC m=+0.086867295 container remove 7d14534189c851ba2f0f7bdaeec3be510fefaa01cb58496f5571839f423dfe42 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_cerf, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_BRANCH=main, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, release=1770267347, distribution-scope=public, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7) Feb 23 04:47:22 localhost systemd[1]: libpod-conmon-7d14534189c851ba2f0f7bdaeec3be510fefaa01cb58496f5571839f423dfe42.scope: Deactivated successfully. 
Feb 23 04:47:22 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:47:22 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:47:22 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005626465.hlpkwo (monmap changed)... Feb 23 04:47:22 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005626465.hlpkwo (monmap changed)... Feb 23 04:47:22 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 23 04:47:22 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:47:22 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 23 04:47:22 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mgr services"} : dispatch Feb 23 04:47:22 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:22 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:22 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005626465.hlpkwo on 
np0005626465.localdomain Feb 23 04:47:22 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005626465.hlpkwo on np0005626465.localdomain Feb 23 04:47:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:23 localhost ceph-mon[287329]: Reconfiguring mds.mds.np0005626465.drvnoy (monmap changed)... Feb 23 04:47:23 localhost ceph-mon[287329]: Reconfiguring daemon mds.mds.np0005626465.drvnoy on np0005626465.localdomain Feb 23 04:47:23 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:23 localhost ceph-mon[287329]: Added label _no_schedule to host np0005626460.localdomain Feb 23 04:47:23 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:23 localhost ceph-mon[287329]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005626460.localdomain Feb 23 04:47:23 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:23 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:23 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:47:23 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:47:23 localhost podman[295140]: Feb 23 04:47:23 localhost podman[295140]: 2026-02-23 09:47:23.365235725 +0000 UTC m=+0.077018274 container create df76cd20a0a70bb1a817a1453c75c5404fca15589b3e9e76e6125bd25ad65f5e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_hertz, 
description=Red Hat Ceph Storage 7, vcs-type=git, version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, build-date=2026-02-09T10:25:24Z, distribution-scope=public, io.buildah.version=1.42.2, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, name=rhceph) Feb 23 04:47:23 localhost systemd[1]: Started libpod-conmon-df76cd20a0a70bb1a817a1453c75c5404fca15589b3e9e76e6125bd25ad65f5e.scope. Feb 23 04:47:23 localhost systemd[1]: Started libcrun container. 
Feb 23 04:47:23 localhost podman[295140]: 2026-02-23 09:47:23.334037032 +0000 UTC m=+0.045819601 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:47:23 localhost podman[295140]: 2026-02-23 09:47:23.438021789 +0000 UTC m=+0.149804348 container init df76cd20a0a70bb1a817a1453c75c5404fca15589b3e9e76e6125bd25ad65f5e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_hertz, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_BRANCH=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, version=7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 23 04:47:23 localhost podman[295140]: 2026-02-23 09:47:23.446348963 +0000 UTC m=+0.158131482 container start df76cd20a0a70bb1a817a1453c75c5404fca15589b3e9e76e6125bd25ad65f5e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_hertz, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, ceph=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, release=1770267347, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, distribution-scope=public) Feb 23 04:47:23 localhost laughing_hertz[295155]: 167 167 Feb 23 04:47:23 localhost podman[295140]: 2026-02-23 09:47:23.44656566 +0000 UTC m=+0.158348209 container attach df76cd20a0a70bb1a817a1453c75c5404fca15589b3e9e76e6125bd25ad65f5e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_hertz, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, 
url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_BRANCH=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347) Feb 23 04:47:23 localhost systemd[1]: libpod-df76cd20a0a70bb1a817a1453c75c5404fca15589b3e9e76e6125bd25ad65f5e.scope: Deactivated successfully. 
Feb 23 04:47:23 localhost podman[295140]: 2026-02-23 09:47:23.451679796 +0000 UTC m=+0.163462375 container died df76cd20a0a70bb1a817a1453c75c5404fca15589b3e9e76e6125bd25ad65f5e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_hertz, io.openshift.expose-services=, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, vcs-type=git, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, ceph=True, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True) Feb 23 04:47:23 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.44247 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005626460.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Feb 23 04:47:23 localhost podman[295160]: 2026-02-23 09:47:23.534718912 +0000 UTC m=+0.071439053 container remove df76cd20a0a70bb1a817a1453c75c5404fca15589b3e9e76e6125bd25ad65f5e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_hertz, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, GIT_CLEAN=True, name=rhceph, RELEASE=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, ceph=True) Feb 23 04:47:23 localhost systemd[1]: libpod-conmon-df76cd20a0a70bb1a817a1453c75c5404fca15589b3e9e76e6125bd25ad65f5e.scope: Deactivated successfully. Feb 23 04:47:23 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:47:23 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:47:23 localhost systemd[1]: var-lib-containers-storage-overlay-2431f74edacc1b436150a4cd061c6363477061550b2fc7d5abb4cfc24086a76c-merged.mount: Deactivated successfully. Feb 23 04:47:23 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005626465 (monmap changed)... 
Feb 23 04:47:23 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005626465 (monmap changed)... Feb 23 04:47:23 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 23 04:47:23 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:47:23 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 23 04:47:23 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 23 04:47:23 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:23 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:23 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005626465 on np0005626465.localdomain Feb 23 04:47:23 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005626465 on np0005626465.localdomain Feb 23 04:47:23 localhost sshd[295212]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:47:24 localhost ceph-mon[287329]: mon.np0005626465@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:47:24 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626465.hlpkwo (monmap changed)... 
Feb 23 04:47:24 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626465.hlpkwo on np0005626465.localdomain Feb 23 04:47:24 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:24 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:24 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:47:24 localhost podman[295232]: Feb 23 04:47:24 localhost podman[295232]: 2026-02-23 09:47:24.206114332 +0000 UTC m=+0.068098391 container create afbbfb32bce712f41227406ef7a56e959c5b9c5ec546b455da9137f4ff9cdaa0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_chaplygin, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, build-date=2026-02-09T10:25:24Z, ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, release=1770267347, version=7, vcs-type=git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 23 04:47:24 localhost systemd[1]: Started 
libpod-conmon-afbbfb32bce712f41227406ef7a56e959c5b9c5ec546b455da9137f4ff9cdaa0.scope. Feb 23 04:47:24 localhost systemd[1]: Started libcrun container. Feb 23 04:47:24 localhost podman[295232]: 2026-02-23 09:47:24.275794261 +0000 UTC m=+0.137778320 container init afbbfb32bce712f41227406ef7a56e959c5b9c5ec546b455da9137f4ff9cdaa0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_chaplygin, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, distribution-scope=public, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.buildah.version=1.42.2, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 04:47:24 localhost podman[295232]: 2026-02-23 09:47:24.180399387 +0000 UTC m=+0.042383476 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:47:24 localhost podman[295232]: 2026-02-23 09:47:24.283632551 +0000 UTC m=+0.145616620 container start afbbfb32bce712f41227406ef7a56e959c5b9c5ec546b455da9137f4ff9cdaa0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_chaplygin, io.openshift.tags=rhceph ceph, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, release=1770267347, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7) Feb 23 04:47:24 localhost podman[295232]: 2026-02-23 09:47:24.283842927 +0000 UTC m=+0.145826996 container attach afbbfb32bce712f41227406ef7a56e959c5b9c5ec546b455da9137f4ff9cdaa0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_chaplygin, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , RELEASE=main, name=rhceph, vendor=Red Hat, Inc., version=7, ceph=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 23 04:47:24 localhost systemd[1]: libpod-afbbfb32bce712f41227406ef7a56e959c5b9c5ec546b455da9137f4ff9cdaa0.scope: Deactivated successfully. Feb 23 04:47:24 localhost vigilant_chaplygin[295247]: 167 167 Feb 23 04:47:24 localhost podman[295232]: 2026-02-23 09:47:24.285805426 +0000 UTC m=+0.147789485 container died afbbfb32bce712f41227406ef7a56e959c5b9c5ec546b455da9137f4ff9cdaa0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_chaplygin, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, ceph=True, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64) Feb 23 04:47:24 localhost podman[295252]: 2026-02-23 09:47:24.378034794 +0000 UTC m=+0.079220941 container remove afbbfb32bce712f41227406ef7a56e959c5b9c5ec546b455da9137f4ff9cdaa0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_chaplygin, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, version=7, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, vcs-type=git, name=rhceph, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 23 04:47:24 localhost systemd[1]: libpod-conmon-afbbfb32bce712f41227406ef7a56e959c5b9c5ec546b455da9137f4ff9cdaa0.scope: Deactivated successfully. 
Feb 23 04:47:24 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 04:47:24 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 04:47:24 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005626466 (monmap changed)... Feb 23 04:47:24 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005626466 (monmap changed)... Feb 23 04:47:24 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 23 04:47:24 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:47:24 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:24 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:24 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005626466 on np0005626466.localdomain Feb 23 04:47:24 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005626466 on np0005626466.localdomain Feb 23 04:47:24 localhost systemd[1]: var-lib-containers-storage-overlay-9ec80f2013980596cde439238b8b84ce0d83b46829a84b2d043055d6ece22977-merged.mount: Deactivated successfully. 
Feb 23 04:47:24 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.34354 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005626460.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch Feb 23 04:47:24 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 23 04:47:24 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005626460.localdomain"} v 0) Feb 23 04:47:24 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626460.localdomain"} : dispatch Feb 23 04:47:24 localhost ceph-mgr[285904]: [cephadm INFO root] Removed host np0005626460.localdomain Feb 23 04:47:24 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Removed host np0005626460.localdomain Feb 23 04:47:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:25 localhost ceph-mon[287329]: Reconfiguring mon.np0005626465 (monmap changed)... 
Feb 23 04:47:25 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626465 on np0005626465.localdomain Feb 23 04:47:25 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:25 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:25 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:47:25 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:47:25 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:25 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626460.localdomain"} : dispatch Feb 23 04:47:25 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626460.localdomain"} : dispatch Feb 23 04:47:25 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005626460.localdomain"}]': finished Feb 23 04:47:25 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:47:25 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:47:25 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)... 
Feb 23 04:47:25 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)... Feb 23 04:47:25 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) Feb 23 04:47:25 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 23 04:47:25 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:25 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:25 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005626466.localdomain Feb 23 04:47:25 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005626466.localdomain Feb 23 04:47:26 localhost ceph-mon[287329]: Reconfiguring crash.np0005626466 (monmap changed)... 
Feb 23 04:47:26 localhost ceph-mon[287329]: Reconfiguring daemon crash.np0005626466 on np0005626466.localdomain Feb 23 04:47:26 localhost ceph-mon[287329]: Removed host np0005626460.localdomain Feb 23 04:47:26 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:26 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:26 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 23 04:47:26 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 04:47:26 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 04:47:26 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... Feb 23 04:47:26 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... 
Feb 23 04:47:26 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Feb 23 04:47:26 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 23 04:47:26 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:47:26 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:47:26 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005626466.localdomain Feb 23 04:47:26 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005626466.localdomain Feb 23 04:47:26 localhost sshd[295269]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:47:26 localhost systemd-logind[759]: New session 66 of user tripleo-admin. Feb 23 04:47:26 localhost systemd[1]: Created slice User Slice of UID 1003. Feb 23 04:47:26 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Feb 23 04:47:26 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Feb 23 04:47:26 localhost systemd[1]: Starting User Manager for UID 1003... Feb 23 04:47:26 localhost systemd[295273]: Queued start job for default target Main User Target. Feb 23 04:47:26 localhost systemd[295273]: Created slice User Application Slice. Feb 23 04:47:26 localhost systemd[295273]: Started Mark boot as successful after the user session has run 2 minutes. Feb 23 04:47:26 localhost systemd[295273]: Started Daily Cleanup of User's Temporary Directories. Feb 23 04:47:26 localhost systemd[295273]: Reached target Paths. Feb 23 04:47:26 localhost systemd[295273]: Reached target Timers. 
Feb 23 04:47:26 localhost systemd[295273]: Starting D-Bus User Message Bus Socket...
Feb 23 04:47:26 localhost systemd[295273]: Starting Create User's Volatile Files and Directories...
Feb 23 04:47:26 localhost systemd[295273]: Listening on D-Bus User Message Bus Socket.
Feb 23 04:47:26 localhost systemd[295273]: Finished Create User's Volatile Files and Directories.
Feb 23 04:47:26 localhost systemd[295273]: Reached target Sockets.
Feb 23 04:47:26 localhost systemd[295273]: Reached target Basic System.
Feb 23 04:47:26 localhost systemd[295273]: Reached target Main User Target.
Feb 23 04:47:26 localhost systemd[295273]: Startup finished in 149ms.
Feb 23 04:47:26 localhost systemd[1]: Started User Manager for UID 1003.
Feb 23 04:47:26 localhost systemd[1]: Started Session 66 of User tripleo-admin.
Feb 23 04:47:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:47:27 localhost ceph-mon[287329]: Reconfiguring osd.1 (monmap changed)...
Feb 23 04:47:27 localhost ceph-mon[287329]: Reconfiguring daemon osd.1 on np0005626466.localdomain
Feb 23 04:47:27 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:27 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:27 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 23 04:47:27 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0)
Feb 23 04:47:27 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0)
Feb 23 04:47:27 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005626466.vaywlp (monmap changed)...
Feb 23 04:47:27 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005626466.vaywlp (monmap changed)...
Feb 23 04:47:27 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 23 04:47:27 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 23 04:47:27 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:47:27 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:47:27 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005626466.vaywlp on np0005626466.localdomain
Feb 23 04:47:27 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005626466.vaywlp on np0005626466.localdomain
Feb 23 04:47:27 localhost python3[295416]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line= - ip_netmask: 172.18.0.104/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 23 04:47:28 localhost ceph-mon[287329]: Reconfiguring osd.4 (monmap changed)...
Feb 23 04:47:28 localhost ceph-mon[287329]: Reconfiguring daemon osd.4 on np0005626466.localdomain
Feb 23 04:47:28 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:28 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:28 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 23 04:47:28 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 23 04:47:28 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0)
Feb 23 04:47:28 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0)
Feb 23 04:47:28 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005626466.nisqfq (monmap changed)...
Feb 23 04:47:28 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005626466.nisqfq (monmap changed)...
Feb 23 04:47:28 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 23 04:47:28 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:47:28 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 23 04:47:28 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mgr services"} : dispatch
Feb 23 04:47:28 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:47:28 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:47:28 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005626466.nisqfq on np0005626466.localdomain
Feb 23 04:47:28 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005626466.nisqfq on np0005626466.localdomain
Feb 23 04:47:28 localhost python3[295562]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.104/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:47:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:47:29 localhost ceph-mon[287329]: mon.np0005626465@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 23 04:47:29 localhost python3[295707]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.104 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 04:47:29 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0)
Feb 23 04:47:29 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0)
Feb 23 04:47:29 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005626466 (monmap changed)...
Feb 23 04:47:29 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005626466 (monmap changed)...
Feb 23 04:47:29 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb 23 04:47:29 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 23 04:47:29 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb 23 04:47:29 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 23 04:47:29 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:47:29 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:47:29 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005626466 on np0005626466.localdomain
Feb 23 04:47:29 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005626466 on np0005626466.localdomain
Feb 23 04:47:29 localhost ceph-mon[287329]: Reconfiguring mds.mds.np0005626466.vaywlp (monmap changed)...
Feb 23 04:47:29 localhost ceph-mon[287329]: Reconfiguring daemon mds.mds.np0005626466.vaywlp on np0005626466.localdomain
Feb 23 04:47:29 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:29 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:29 localhost ceph-mon[287329]: Reconfiguring mgr.np0005626466.nisqfq (monmap changed)...
Feb 23 04:47:29 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:47:29 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:47:29 localhost ceph-mon[287329]: Reconfiguring daemon mgr.np0005626466.nisqfq on np0005626466.localdomain
Feb 23 04:47:29 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:29 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:29 localhost ceph-mon[287329]: Reconfiguring mon.np0005626466 (monmap changed)...
Feb 23 04:47:29 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 23 04:47:29 localhost ceph-mon[287329]: Reconfiguring daemon mon.np0005626466 on np0005626466.localdomain
Feb 23 04:47:29 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0)
Feb 23 04:47:29 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0)
Feb 23 04:47:29 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:47:29 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:47:29 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 23 04:47:29 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:47:29 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 23 04:47:30 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev a4c84d0b-5aff-449c-816c-f003ad75561a (Updating node-proxy deployment (+4 -> 4))
Feb 23 04:47:30 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev a4c84d0b-5aff-449c-816c-f003ad75561a (Updating node-proxy deployment (+4 -> 4))
Feb 23 04:47:30 localhost ceph-mgr[285904]: [progress INFO root] Completed event a4c84d0b-5aff-449c-816c-f003ad75561a (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Feb 23 04:47:30 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 23 04:47:30 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 23 04:47:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.
Feb 23 04:47:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.
Feb 23 04:47:30 localhost podman[295728]: 2026-02-23 09:47:30.322453015 +0000 UTC m=+0.062541271 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 23 04:47:30 localhost podman[295727]: 2026-02-23 09:47:30.38645426 +0000 UTC m=+0.125525306 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 23 04:47:30 localhost podman[295727]: 2026-02-23 09:47:30.398288761 +0000 UTC m=+0.137359897 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0)
Feb 23 04:47:30 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully.
Feb 23 04:47:30 localhost podman[295728]: 2026-02-23 09:47:30.413032242 +0000 UTC m=+0.153120538 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_compute)
Feb 23 04:47:30 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully.
Feb 23 04:47:30 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:30 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:30 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:47:30 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:47:31 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events
Feb 23 04:47:31 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 23 04:47:31 localhost openstack_network_exporter[243519]: ERROR 09:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 23 04:47:31 localhost openstack_network_exporter[243519]:
Feb 23 04:47:31 localhost openstack_network_exporter[243519]: ERROR 09:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 23 04:47:31 localhost openstack_network_exporter[243519]:
Feb 23 04:47:32 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:32 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.34385 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 23 04:47:32 localhost ceph-mgr[285904]: [cephadm INFO root] Saving service mon spec with placement label:mon
Feb 23 04:47:32 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon
Feb 23 04:47:32 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 23 04:47:32 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:47:32 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:47:32 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 23 04:47:32 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:47:32 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 23 04:47:32 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev 1332cc31-6329-4bbe-9220-09de9cca6717 (Updating node-proxy deployment (+4 -> 4))
Feb 23 04:47:32 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev 1332cc31-6329-4bbe-9220-09de9cca6717 (Updating node-proxy deployment (+4 -> 4))
Feb 23 04:47:32 localhost ceph-mgr[285904]: [progress INFO root] Completed event 1332cc31-6329-4bbe-9220-09de9cca6717 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Feb 23 04:47:32 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 23 04:47:32 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 23 04:47:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:47:33 localhost ceph-mon[287329]: Saving service mon spec with placement label:mon
Feb 23 04:47:33 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:33 localhost ceph-mon[287329]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:47:33 localhost ceph-mon[287329]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:33 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.34362 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005626465", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 23 04:47:34 localhost ceph-mon[287329]: mon.np0005626465@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 23 04:47:34 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.34397 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005626465"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 23 04:47:34 localhost ceph-mgr[285904]: [cephadm INFO root] Remove daemons mon.np0005626465
Feb 23 04:47:34 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005626465
Feb 23 04:47:34 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "quorum_status"} v 0)
Feb 23 04:47:34 localhost ceph-mon[287329]: log_channel(audit) log [DBG] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "quorum_status"} : dispatch
Feb 23 04:47:34 localhost ceph-mgr[285904]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005626465: new quorum should be ['np0005626461', 'np0005626466', 'np0005626463'] (from ['np0005626461', 'np0005626466', 'np0005626463'])
Feb 23 04:47:34 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005626465: new quorum should be ['np0005626461', 'np0005626466', 'np0005626463'] (from ['np0005626461', 'np0005626466', 'np0005626463'])
Feb 23 04:47:34 localhost ceph-mgr[285904]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005626465 from monmap...
Feb 23 04:47:34 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Removing monitor np0005626465 from monmap...
Feb 23 04:47:34 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e10 handle_command mon_command({"prefix": "mon rm", "name": "np0005626465"} v 0)
Feb 23 04:47:34 localhost ceph-mon[287329]: log_channel(audit) log [INF] : from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon rm", "name": "np0005626465"} : dispatch
Feb 23 04:47:34 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005626465 from np0005626465.localdomain -- ports []
Feb 23 04:47:34 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005626465 from np0005626465.localdomain -- ports []
Feb 23 04:47:34 localhost ceph-mon[287329]: mon.np0005626465@2(peon) e11 removed from monmap, suicide.
Feb 23 04:47:34 localhost ceph-mgr[285904]: client.34308 ms_handle_reset on v2:172.18.0.107:3300/0
Feb 23 04:47:34 localhost podman[295816]: 2026-02-23 09:47:34.963917241 +0000 UTC m=+0.058900090 container died 287dcf2f52ac7b0ed8c97be7c6f99602c1d420eb44e0956a4e6532f5eecf9db9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mon-np0005626465, io.openshift.tags=rhceph ceph, distribution-scope=public, ceph=True, com.redhat.component=rhceph-container, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, vendor=Red Hat, Inc., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, CEPH_POINT_RELEASE=, architecture=x86_64)
Feb 23 04:47:34 localhost systemd[1]: var-lib-containers-storage-overlay-a62f72e87ac8094e8f004778b4c585735df79e1ee74bc56fd93f47429037219a-merged.mount: Deactivated successfully.
Feb 23 04:47:35 localhost podman[295816]: 2026-02-23 09:47:35.004159431 +0000 UTC m=+0.099142220 container remove 287dcf2f52ac7b0ed8c97be7c6f99602c1d420eb44e0956a4e6532f5eecf9db9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mon-np0005626465, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.tags=rhceph ceph, architecture=x86_64, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vendor=Red Hat, Inc., name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, distribution-scope=public)
Feb 23 04:47:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:47:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.
Feb 23 04:47:35 localhost podman[295917]: 2026-02-23 09:47:35.592328067 +0000 UTC m=+0.085188773 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:47:35 localhost podman[295917]: 2026-02-23 09:47:35.601945092 +0000 UTC m=+0.094805808 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:47:35 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:47:35 localhost systemd[1]: ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46@mon.np0005626465.service: Deactivated successfully. Feb 23 04:47:35 localhost systemd[1]: Stopped Ceph mon.np0005626465 for f1fea371-cb69-578d-a3d0-b5c472a84b46. Feb 23 04:47:35 localhost systemd[1]: ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46@mon.np0005626465.service: Consumed 8.418s CPU time. Feb 23 04:47:35 localhost systemd[1]: Reloading. Feb 23 04:47:36 localhost systemd-rc-local-generator[295995]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 23 04:47:36 localhost systemd-sysv-generator[295999]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:47:36 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:36 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:36 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:36 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 23 04:47:36 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:36 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:36 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:36 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:36 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 04:47:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:39 localhost ceph-mds[284726]: mds.beacon.mds.np0005626465.drvnoy missed beacon ack from the monitors Feb 23 04:47:39 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626461.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:39 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626461.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:39 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:39 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:39 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:39 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:39 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating 
np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:39 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:47:40 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:40 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:40 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:40 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:40 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:40 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:40 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:40 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:47:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:41 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev 48ad1fb6-c804-405f-8a14-5792366de5b8 (Updating node-proxy deployment (+4 -> 4)) Feb 23 04:47:41 localhost ceph-mgr[285904]: [progress INFO root] complete: 
finished ev 48ad1fb6-c804-405f-8a14-5792366de5b8 (Updating node-proxy deployment (+4 -> 4)) Feb 23 04:47:41 localhost ceph-mgr[285904]: [progress INFO root] Completed event 48ad1fb6-c804-405f-8a14-5792366de5b8 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Feb 23 04:47:41 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005626461.lrfquh (monmap changed)... Feb 23 04:47:41 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005626461.lrfquh (monmap changed)... Feb 23 04:47:41 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005626461.lrfquh on np0005626461.localdomain Feb 23 04:47:41 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005626461.lrfquh on np0005626461.localdomain Feb 23 04:47:42 localhost podman[241086]: time="2026-02-23T09:47:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:47:42 localhost podman[241086]: @ - - [23/Feb/2026:09:47:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 151747 "" "Go-http-client/1.1" Feb 23 04:47:42 localhost podman[241086]: @ - - [23/Feb/2026:09:47:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17310 "" "Go-http-client/1.1" Feb 23 04:47:42 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005626461 (monmap changed)... Feb 23 04:47:42 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005626461 (monmap changed)... 
Feb 23 04:47:42 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005626461 on np0005626461.localdomain Feb 23 04:47:42 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005626461 on np0005626461.localdomain Feb 23 04:47:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:43 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005626463 (monmap changed)... Feb 23 04:47:43 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005626463 (monmap changed)... Feb 23 04:47:43 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005626463 on np0005626463.localdomain Feb 23 04:47:43 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005626463 on np0005626463.localdomain Feb 23 04:47:44 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)... Feb 23 04:47:44 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)... Feb 23 04:47:44 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005626463.localdomain Feb 23 04:47:44 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005626463.localdomain Feb 23 04:47:44 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 04:47:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:45 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)... Feb 23 04:47:45 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)... 
Feb 23 04:47:45 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005626463.localdomain Feb 23 04:47:45 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005626463.localdomain Feb 23 04:47:46 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005626463.qcthuc (monmap changed)... Feb 23 04:47:46 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005626463.qcthuc (monmap changed)... Feb 23 04:47:46 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005626463.qcthuc on np0005626463.localdomain Feb 23 04:47:46 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005626463.qcthuc on np0005626463.localdomain Feb 23 04:47:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:47:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:47:47 localhost podman[296344]: 2026-02-23 09:47:47.011837062 +0000 UTC m=+0.086535495 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:47:47 localhost podman[296344]: 2026-02-23 09:47:47.024936852 +0000 UTC m=+0.099635255 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:47:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:47 localhost systemd[1]: tmp-crun.yu3zgd.mount: Deactivated successfully. 
Feb 23 04:47:47 localhost podman[296345]: 2026-02-23 09:47:47.065775849 +0000 UTC m=+0.137617635 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.7, vcs-type=git, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9/ubi-minimal, release=1770267347, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 
'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 23 04:47:47 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:47:47 localhost podman[296345]: 2026-02-23 09:47:47.100640995 +0000 UTC m=+0.172482801 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, distribution-scope=public, architecture=x86_64, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9) Feb 23 04:47:47 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:47:47 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005626463.wtksup (monmap changed)... Feb 23 04:47:47 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005626463.wtksup (monmap changed)... Feb 23 04:47:47 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.34402 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005626465.localdomain:172.18.0.104", "target": ["mon-mgr", ""]}]: dispatch Feb 23 04:47:47 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005626463.wtksup on np0005626463.localdomain Feb 23 04:47:47 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005626463.wtksup on np0005626463.localdomain Feb 23 04:47:47 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Deploying daemon mon.np0005626465 on np0005626465.localdomain Feb 23 04:47:47 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Deploying daemon mon.np0005626465 on np0005626465.localdomain Feb 23 04:47:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:47:48.305 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:47:48 localhost 
ovn_metadata_agent[161837]: 2026-02-23 09:47:48.306 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:47:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:47:48.306 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:47:48 localhost podman[296466]: Feb 23 04:47:48 localhost podman[296466]: 2026-02-23 09:47:48.45574479 +0000 UTC m=+0.078737997 container create ef1b4c5d62342c7e0498869b90890ee50cce5c4aeb544b8be905bb4d704639a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_wilson, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., release=1770267347, description=Red Hat Ceph Storage 7, 
io.openshift.expose-services=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 23 04:47:48 localhost systemd[1]: Started libpod-conmon-ef1b4c5d62342c7e0498869b90890ee50cce5c4aeb544b8be905bb4d704639a4.scope. Feb 23 04:47:48 localhost systemd[1]: Started libcrun container. Feb 23 04:47:48 localhost podman[296466]: 2026-02-23 09:47:48.422910067 +0000 UTC m=+0.045903274 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:47:48 localhost podman[296466]: 2026-02-23 09:47:48.52808504 +0000 UTC m=+0.151078237 container init ef1b4c5d62342c7e0498869b90890ee50cce5c4aeb544b8be905bb4d704639a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_wilson, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, RELEASE=main, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7) Feb 23 04:47:48 localhost systemd[1]: tmp-crun.zEZ6sg.mount: Deactivated successfully. 
Feb 23 04:47:48 localhost podman[296466]: 2026-02-23 09:47:48.541409777 +0000 UTC m=+0.164402974 container start ef1b4c5d62342c7e0498869b90890ee50cce5c4aeb544b8be905bb4d704639a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_wilson, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, ceph=True, release=1770267347, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 04:47:48 localhost podman[296466]: 2026-02-23 09:47:48.542868231 +0000 UTC m=+0.165861478 container attach ef1b4c5d62342c7e0498869b90890ee50cce5c4aeb544b8be905bb4d704639a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_wilson, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_CLEAN=True, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, distribution-scope=public, 
architecture=x86_64, CEPH_POINT_RELEASE=, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, version=7) Feb 23 04:47:48 localhost boring_wilson[296482]: 167 167 Feb 23 04:47:48 localhost systemd[1]: libpod-ef1b4c5d62342c7e0498869b90890ee50cce5c4aeb544b8be905bb4d704639a4.scope: Deactivated successfully. Feb 23 04:47:48 localhost podman[296466]: 2026-02-23 09:47:48.546779561 +0000 UTC m=+0.169772788 container died ef1b4c5d62342c7e0498869b90890ee50cce5c4aeb544b8be905bb4d704639a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_wilson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, name=rhceph, version=7) Feb 23 04:47:48 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005626465 (monmap changed)... Feb 23 04:47:48 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005626465 (monmap changed)... Feb 23 04:47:48 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005626465 on np0005626465.localdomain Feb 23 04:47:48 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005626465 on np0005626465.localdomain Feb 23 04:47:48 localhost podman[296487]: 2026-02-23 09:47:48.648364194 +0000 UTC m=+0.093661952 container remove ef1b4c5d62342c7e0498869b90890ee50cce5c4aeb544b8be905bb4d704639a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_wilson, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, version=7, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public) Feb 23 04:47:48 localhost systemd[1]: libpod-conmon-ef1b4c5d62342c7e0498869b90890ee50cce5c4aeb544b8be905bb4d704639a4.scope: Deactivated successfully. Feb 23 04:47:48 localhost podman[296519]: Feb 23 04:47:48 localhost podman[296519]: 2026-02-23 09:47:48.762559063 +0000 UTC m=+0.077554171 container create 2efac6dc92e3574dce3023d6b4c1e388e2215e88abd6b5b5437b18ab92ec6022 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_kapitsa, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, io.buildah.version=1.42.2, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, architecture=x86_64) Feb 23 04:47:48 localhost 
systemd[1]: Started libpod-conmon-2efac6dc92e3574dce3023d6b4c1e388e2215e88abd6b5b5437b18ab92ec6022.scope. Feb 23 04:47:48 localhost systemd[1]: Started libcrun container. Feb 23 04:47:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c5bd22bb8daa848dbc7b5729a7a5cda9df59449337d190f7dc064f2ea6e783/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Feb 23 04:47:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c5bd22bb8daa848dbc7b5729a7a5cda9df59449337d190f7dc064f2ea6e783/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Feb 23 04:47:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c5bd22bb8daa848dbc7b5729a7a5cda9df59449337d190f7dc064f2ea6e783/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 23 04:47:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c5bd22bb8daa848dbc7b5729a7a5cda9df59449337d190f7dc064f2ea6e783/merged/var/lib/ceph/mon/ceph-np0005626465 supports timestamps until 2038 (0x7fffffff) Feb 23 04:47:48 localhost podman[296519]: 2026-02-23 09:47:48.823621318 +0000 UTC m=+0.138616436 container init 2efac6dc92e3574dce3023d6b4c1e388e2215e88abd6b5b5437b18ab92ec6022 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_kapitsa, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1770267347, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, version=7, GIT_BRANCH=main, CEPH_POINT_RELEASE=) Feb 23 04:47:48 localhost podman[296519]: 2026-02-23 09:47:48.731978788 +0000 UTC m=+0.046973946 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:47:48 localhost podman[296519]: 2026-02-23 09:47:48.833832519 +0000 UTC m=+0.148827627 container start 2efac6dc92e3574dce3023d6b4c1e388e2215e88abd6b5b5437b18ab92ec6022 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_kapitsa, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.42.2, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, GIT_BRANCH=main, distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:47:48 localhost podman[296519]: 2026-02-23 09:47:48.834215501 +0000 UTC m=+0.149210609 container attach 2efac6dc92e3574dce3023d6b4c1e388e2215e88abd6b5b5437b18ab92ec6022 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_kapitsa, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, vcs-type=git, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, version=7, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container) Feb 23 04:47:48 localhost systemd[1]: libpod-2efac6dc92e3574dce3023d6b4c1e388e2215e88abd6b5b5437b18ab92ec6022.scope: Deactivated successfully. 
Feb 23 04:47:48 localhost podman[296519]: 2026-02-23 09:47:48.928753369 +0000 UTC m=+0.243748487 container died 2efac6dc92e3574dce3023d6b4c1e388e2215e88abd6b5b5437b18ab92ec6022 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_kapitsa, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True) Feb 23 04:47:49 localhost podman[296579]: 2026-02-23 09:47:49.020578334 +0000 UTC m=+0.081299744 container remove 2efac6dc92e3574dce3023d6b4c1e388e2215e88abd6b5b5437b18ab92ec6022 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_kapitsa, release=1770267347, version=7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.component=rhceph-container, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
vcs-type=git, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, ceph=True, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7) Feb 23 04:47:49 localhost systemd[1]: libpod-conmon-2efac6dc92e3574dce3023d6b4c1e388e2215e88abd6b5b5437b18ab92ec6022.scope: Deactivated successfully. Feb 23 04:47:49 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_09:47:49 Feb 23 04:47:49 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 04:47:49 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap Feb 23 04:47:49 localhost ceph-mgr[285904]: [balancer INFO root] pools ['backups', 'manila_data', '.mgr', 'manila_metadata', 'images', 'vms', 'volumes'] Feb 23 04:47:49 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes Feb 23 04:47:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:49 localhost systemd[1]: Reloading. 
Feb 23 04:47:49 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 04:47:49 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:47:49 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 23 04:47:49 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:47:49 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32) Feb 23 04:47:49 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:47:49 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:47:49 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:47:49 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32) Feb 23 04:47:49 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:47:49 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:47:49 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:47:49 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:47:49 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 
Feb 23 04:47:49 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.1810441094360693e-06 of space, bias 4.0, pg target 0.001741927228736274 quantized to 16 (current 16) Feb 23 04:47:49 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:47:49 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:47:49 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:47:49 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:47:49 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:47:49 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:47:49 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 04:47:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:47:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:47:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:47:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:47:49 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 04:47:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:47:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:47:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:47:49 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:47:49 localhost systemd-rc-local-generator[296631]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 23 04:47:49 localhost systemd-sysv-generator[296635]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:49 localhost systemd[1]: var-lib-containers-storage-overlay-6bd32a17008de90ac1055ca7ca7786be14d4e81b5a35b1451ce0cacaa18f3983-merged.mount: Deactivated successfully. Feb 23 04:47:49 localhost systemd[1]: Reloading. Feb 23 04:47:49 localhost systemd-rc-local-generator[296670]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 23 04:47:49 localhost systemd-sysv-generator[296675]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 23 04:47:49 localhost systemd[1]: Starting Ceph mon.np0005626465 for f1fea371-cb69-578d-a3d0-b5c472a84b46... 
Feb 23 04:47:50 localhost podman[296737]: Feb 23 04:47:50 localhost podman[296737]: 2026-02-23 09:47:50.147964323 +0000 UTC m=+0.077411995 container create 352d16d9927e2199e037102170ac017c36500110c1060a7dfd80df2a8ada899f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mon-np0005626465, maintainer=Guillaume Abrioux , io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, architecture=x86_64, distribution-scope=public, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347) Feb 23 04:47:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9a37e8b57a50a192b1a222b047fa04d019da7c8474197841368f6f4e249e28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 23 04:47:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9a37e8b57a50a192b1a222b047fa04d019da7c8474197841368f6f4e249e28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 23 04:47:50 localhost kernel: xfs 
filesystem being remounted at /var/lib/containers/storage/overlay/ae9a37e8b57a50a192b1a222b047fa04d019da7c8474197841368f6f4e249e28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 23 04:47:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9a37e8b57a50a192b1a222b047fa04d019da7c8474197841368f6f4e249e28/merged/var/lib/ceph/mon/ceph-np0005626465 supports timestamps until 2038 (0x7fffffff) Feb 23 04:47:50 localhost podman[296737]: 2026-02-23 09:47:50.202034066 +0000 UTC m=+0.131481748 container init 352d16d9927e2199e037102170ac017c36500110c1060a7dfd80df2a8ada899f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mon-np0005626465, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph) Feb 23 04:47:50 localhost podman[296737]: 2026-02-23 09:47:50.21233783 +0000 UTC m=+0.141785482 container 
start 352d16d9927e2199e037102170ac017c36500110c1060a7dfd80df2a8ada899f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mon-np0005626465, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, architecture=x86_64, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux ) Feb 23 04:47:50 localhost bash[296737]: 352d16d9927e2199e037102170ac017c36500110c1060a7dfd80df2a8ada899f Feb 23 04:47:50 localhost podman[296737]: 2026-02-23 09:47:50.115248675 +0000 UTC m=+0.044696357 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:47:50 localhost systemd[1]: Started Ceph mon.np0005626465 for f1fea371-cb69-578d-a3d0-b5c472a84b46. 
Feb 23 04:47:50 localhost ceph-mon[296755]: set uid:gid to 167:167 (ceph:ceph) Feb 23 04:47:50 localhost ceph-mon[296755]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-mon, pid 2 Feb 23 04:47:50 localhost ceph-mon[296755]: pidfile_write: ignore empty --pid-file Feb 23 04:47:50 localhost ceph-mon[296755]: load: jerasure load: lrc Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: RocksDB version: 7.9.2 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Git sha 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Compile date 2026-02-06 00:00:00 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: DB SUMMARY Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: DB Session ID: Q93LGWE7XWLY0N7QX9GA Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: CURRENT file: CURRENT Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: IDENTITY file: IDENTITY Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005626465/store.db dir, Total Num: 0, files: Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005626465/store.db: 000004.log size: 761 ; Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.error_if_exists: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.create_if_missing: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.paranoid_checks: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.flush_verify_memtable_count: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.env: 0x564eaf13ea20 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.fs: PosixFileSystem Feb 23 
04:47:50 localhost ceph-mon[296755]: rocksdb: Options.info_log: 0x564eb1554d20 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_file_opening_threads: 16 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.statistics: (nil) Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.use_fsync: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_log_file_size: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_manifest_file_size: 1073741824 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.log_file_time_to_roll: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.keep_log_file_num: 1000 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.recycle_log_file_num: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.allow_fallocate: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.allow_mmap_reads: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.allow_mmap_writes: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.use_direct_reads: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.create_missing_column_families: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.db_log_dir: Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.wal_dir: Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.table_cache_numshardbits: 6 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.WAL_ttl_seconds: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.WAL_size_limit_MB: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.manifest_preallocation_size: 4194304 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.is_fd_close_on_exec: 1 Feb 23 04:47:50 localhost 
ceph-mon[296755]: rocksdb: Options.advise_random_on_open: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.db_write_buffer_size: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.write_buffer_manager: 0x564eb1565540 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.access_hint_on_compaction_start: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.random_access_max_buffer_size: 1048576 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.use_adaptive_mutex: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.rate_limiter: (nil) Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.wal_recovery_mode: 2 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.enable_thread_tracking: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.enable_pipelined_write: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.unordered_write: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.allow_concurrent_memtable_write: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.write_thread_max_yield_usec: 100 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.write_thread_slow_yield_usec: 3 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.row_cache: None Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.wal_filter: None Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.avoid_flush_during_recovery: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.allow_ingest_behind: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.two_write_queues: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.manual_wal_flush: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.wal_compression: 0 Feb 
23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.atomic_flush: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.persist_stats_to_disk: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.write_dbid_to_manifest: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.log_readahead_size: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.file_checksum_gen_factory: Unknown Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.best_efforts_recovery: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.allow_data_in_errors: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.db_host_id: __hostname__ Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.enforce_single_del_contracts: true Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_background_jobs: 2 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_background_compactions: -1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_subcompactions: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.avoid_flush_during_shutdown: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.delayed_write_rate : 16777216 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_total_wal_size: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.stats_dump_period_sec: 600 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.stats_persist_period_sec: 600 Feb 23 04:47:50 
localhost ceph-mon[296755]: rocksdb: Options.stats_history_buffer_size: 1048576 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_open_files: -1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.bytes_per_sync: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.wal_bytes_per_sync: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.strict_bytes_per_sync: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compaction_readahead_size: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_background_flushes: -1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Compression algorithms supported: Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: #011kZSTD supported: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: #011kXpressCompression supported: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: #011kBZip2Compression supported: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: #011kLZ4Compression supported: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: #011kZlibCompression supported: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: #011kLZ4HCCompression supported: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: #011kSnappyCompression supported: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Fast CRC32 supported: Supported on x86 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: DMutex implementation: pthread_mutex_t Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005626465/store.db/MANIFEST-000005 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 23 04:47:50 localhost 
ceph-mon[296755]: rocksdb: Options.merge_operator: Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compaction_filter: None Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compaction_filter_factory: None Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.sst_partitioner_factory: None Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.memtable_factory: SkipListFactory Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.table_factory: BlockBasedTable Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564eb1554980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564eb1551350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.write_buffer_size: 33554432 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_write_buffer_number: 2 Feb 23 04:47:50 localhost 
ceph-mon[296755]: rocksdb: Options.compression: NoCompression Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.bottommost_compression: Disabled Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.prefix_extractor: nullptr Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.num_levels: 7 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compression_opts.window_bits: -14 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compression_opts.level: 32767 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compression_opts.strategy: 0 Feb 23 04:47:50 
localhost ceph-mon[296755]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compression_opts.enabled: false Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.target_file_size_base: 67108864 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.target_file_size_multiplier: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_bytes_for_level_base: 268435456 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 23 04:47:50 
localhost ceph-mon[296755]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.arena_block_size: 1048576 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.disable_auto_compactions: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: 
Options.table_properties_collectors: Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.inplace_update_support: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.memtable_huge_page_size: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.bloom_locality: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.max_successive_merges: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.paranoid_file_checks: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.force_consistency_checks: 1 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.report_bg_io_stats: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.ttl: 2592000 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.enable_blob_files: false Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.min_blob_size: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.blob_file_size: 268435456 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.blob_compression_type: NoCompression Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.enable_blob_garbage_collection: false Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.blob_file_starting_level: 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005626465/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 99129c6f-4568-4fd0-9cbc-0028c2eeda30 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840070271613, "job": 1, "event": "recovery_started", "wal_files": [4]} Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840070274239, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": 
"bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840070274369, "job": 1, "event": "recovery_finished"} Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564eb1578e00 Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: DB pointer 0x564eb166e000 Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626465 does not exist in monmap, will attempt to join an existing cluster Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 04:47:50 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 
00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.84 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Sum 1/0 1.84 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.14 MB/s 
write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564eb1551350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)#012#012** File Read Latency Histogram By Level [default] ** Feb 23 04:47:50 localhost ceph-mon[296755]: using public_addr v2:172.18.0.104:0/0 -> [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] Feb 23 04:47:50 localhost ceph-mon[296755]: starting mon.np0005626465 rank -1 at public addrs [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] at bind addrs [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005626465 fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626465@-1(???) 
e0 preinit fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626465@-1(synchronizing) e11 sync_obtain_latest_monmap Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626465@-1(synchronizing) e11 sync_obtain_latest_monmap obtained monmap e11 Feb 23 04:47:50 localhost podman[296781]: Feb 23 04:47:50 localhost podman[296781]: 2026-02-23 09:47:50.363552759 +0000 UTC m=+0.078016464 container create 334daab0b9f68a641014dbe133d2ba528cf6ca60ce3cdd8214a98720d3e1f27c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_colden, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 23 04:47:50 localhost systemd[1]: Started libpod-conmon-334daab0b9f68a641014dbe133d2ba528cf6ca60ce3cdd8214a98720d3e1f27c.scope. Feb 23 04:47:50 localhost systemd[1]: Started libcrun container. 
Feb 23 04:47:50 localhost podman[296781]: 2026-02-23 09:47:50.333981176 +0000 UTC m=+0.048444911 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:47:50 localhost podman[296781]: 2026-02-23 09:47:50.442838272 +0000 UTC m=+0.157301977 container init 334daab0b9f68a641014dbe133d2ba528cf6ca60ce3cdd8214a98720d3e1f27c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_colden, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_CLEAN=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, vendor=Red Hat, Inc., release=1770267347, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True) Feb 23 04:47:50 localhost podman[296781]: 2026-02-23 09:47:50.454358994 +0000 UTC m=+0.168822729 container start 334daab0b9f68a641014dbe133d2ba528cf6ca60ce3cdd8214a98720d3e1f27c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_colden, GIT_CLEAN=True, vcs-type=git, ceph=True, build-date=2026-02-09T10:25:24Z, RELEASE=main, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, 
io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vendor=Red Hat, Inc., release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , distribution-scope=public) Feb 23 04:47:50 localhost podman[296781]: 2026-02-23 09:47:50.455493628 +0000 UTC m=+0.169957373 container attach 334daab0b9f68a641014dbe133d2ba528cf6ca60ce3cdd8214a98720d3e1f27c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_colden, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, architecture=x86_64, GIT_BRANCH=main, io.openshift.expose-services=, release=1770267347, version=7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 23 04:47:50 localhost quirky_colden[296813]: 167 167 Feb 23 04:47:50 localhost systemd[1]: libpod-334daab0b9f68a641014dbe133d2ba528cf6ca60ce3cdd8214a98720d3e1f27c.scope: Deactivated successfully. Feb 23 04:47:50 localhost podman[296781]: 2026-02-23 09:47:50.458582362 +0000 UTC m=+0.173046087 container died 334daab0b9f68a641014dbe133d2ba528cf6ca60ce3cdd8214a98720d3e1f27c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_colden, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git) Feb 23 04:47:50 localhost systemd[1]: 
var-lib-containers-storage-overlay-b3c32d1d132bd6483d6981a20946f64122166e8768816fdcd784349c752c980d-merged.mount: Deactivated successfully. Feb 23 04:47:50 localhost podman[296819]: 2026-02-23 09:47:50.566691135 +0000 UTC m=+0.093875999 container remove 334daab0b9f68a641014dbe133d2ba528cf6ca60ce3cdd8214a98720d3e1f27c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_colden, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, vcs-type=git, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7) Feb 23 04:47:50 localhost systemd[1]: libpod-conmon-334daab0b9f68a641014dbe133d2ba528cf6ca60ce3cdd8214a98720d3e1f27c.scope: Deactivated successfully. Feb 23 04:47:50 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Feb 23 04:47:50 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... 
Feb 23 04:47:50 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005626465.localdomain Feb 23 04:47:50 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005626465.localdomain Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626465@-1(synchronizing).mds e17 new map Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626465@-1(synchronizing).mds e17 print_map#012e17#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01116#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-23T07:57:46.097663+0000#012modified#0112026-02-23T09:43:29.529267+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01179#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26518}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26518 members: 26518#012[mds.mds.np0005626463.qcthuc{0:26518} state up:active seq 13 addr [v2:172.18.0.106:6808/2515508693,v1:172.18.0.106:6809/2515508693] compat {c=[1],r=[1],i=[17ff]}]#012 #012 
#012Standby daemons:#012 #012[mds.mds.np0005626465.drvnoy{-1:26498} state up:standby seq 1 addr [v2:172.18.0.107:6808/2939113664,v1:172.18.0.107:6809/2939113664] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005626466.vaywlp{-1:26506} state up:standby seq 1 addr [v2:172.18.0.108:6808/2035422599,v1:172.18.0.108:6809/2035422599] compat {c=[1],r=[1],i=[17ff]}] Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626465@-1(synchronizing).osd e83 crush map has features 3314933000854323200, adjusting msgr requires Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626465@-1(synchronizing).osd e83 crush map has features 432629239337189376, adjusting msgr requires Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626465@-1(synchronizing).osd e83 crush map has features 432629239337189376, adjusting msgr requires Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626465@-1(synchronizing).osd e83 crush map has features 432629239337189376, adjusting msgr requires Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring osd.4 (monmap changed)... 
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring daemon osd.4 on np0005626466.localdomain Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring mds.mds.np0005626466.vaywlp (monmap changed)... Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring daemon mds.mds.np0005626466.vaywlp on np0005626466.localdomain Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring mgr.np0005626466.nisqfq (monmap changed)... 
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring daemon mgr.np0005626466.nisqfq on np0005626466.localdomain
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring mon.np0005626466 (monmap changed)...
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring daemon mon.np0005626466 on np0005626466.localdomain
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: Saving service mon spec with placement label:mon
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: Remove daemons mon.np0005626465
Feb 23 04:47:50 localhost ceph-mon[296755]: Safe to remove mon.np0005626465: new quorum should be ['np0005626461', 'np0005626466', 'np0005626463'] (from ['np0005626461', 'np0005626466', 'np0005626463'])
Feb 23 04:47:50 localhost ceph-mon[296755]: Removing monitor np0005626465 from monmap...
Feb 23 04:47:50 localhost ceph-mon[296755]: Removing daemon mon.np0005626465 from np0005626465.localdomain -- ports []
Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626461 calling monitor election
Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626466 calling monitor election
Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626463 calling monitor election
Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626461 is new leader, mons np0005626461,np0005626466 in quorum (ranks 0,1)
Feb 23 04:47:50 localhost ceph-mon[296755]: overall HEALTH_OK
Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626461 calling monitor election
Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626466 calling monitor election
Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626461 is new leader, mons np0005626461,np0005626466,np0005626463 in quorum (ranks 0,1,2)
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: overall HEALTH_OK
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: Updating np0005626461.localdomain:/etc/ceph/ceph.conf
Feb 23 04:47:50 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/etc/ceph/ceph.conf
Feb 23 04:47:50 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/etc/ceph/ceph.conf
Feb 23 04:47:50 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/etc/ceph/ceph.conf
Feb 23 04:47:50 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:47:50 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:47:50 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:47:50 localhost ceph-mon[296755]: Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring mgr.np0005626461.lrfquh (monmap changed)...
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring daemon mgr.np0005626461.lrfquh on np0005626461.localdomain
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring crash.np0005626461 (monmap changed)...
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring daemon crash.np0005626461 on np0005626461.localdomain
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring crash.np0005626463 (monmap changed)...
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring daemon crash.np0005626463 on np0005626463.localdomain
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring osd.2 (monmap changed)...
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring daemon osd.2 on np0005626463.localdomain
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring osd.5 (monmap changed)...
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring daemon osd.5 on np0005626463.localdomain
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring mds.mds.np0005626463.qcthuc (monmap changed)...
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring daemon mds.mds.np0005626463.qcthuc on np0005626463.localdomain
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring mgr.np0005626463.wtksup (monmap changed)...
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring daemon mgr.np0005626463.wtksup on np0005626463.localdomain
Feb 23 04:47:50 localhost ceph-mon[296755]: Deploying daemon mon.np0005626465 on np0005626465.localdomain
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring crash.np0005626465 (monmap changed)...
Feb 23 04:47:50 localhost ceph-mon[296755]: Reconfiguring daemon crash.np0005626465 on np0005626465.localdomain Feb 23 04:47:50 localhost ceph-mon[296755]: mon.np0005626465@-1(synchronizing).paxosservice(auth 1..38) refresh upgraded, format 0 -> 3 Feb 23 04:47:50 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect) Feb 23 04:47:50 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory Feb 23 04:47:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:47:51 localhost podman[296889]: Feb 23 04:47:51 localhost podman[296889]: 2026-02-23 09:47:51.320121761 +0000 UTC m=+0.074202778 container create 18cc7b4a83057960d6e22ea1d5563cfa5c137b73d9a92938628a5d7f3ee4a214 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_stonebraker, CEPH_POINT_RELEASE=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, io.buildah.version=1.42.2, release=1770267347, com.redhat.component=rhceph-container, distribution-scope=public, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 23 04:47:51 localhost systemd[1]: Started libpod-conmon-18cc7b4a83057960d6e22ea1d5563cfa5c137b73d9a92938628a5d7f3ee4a214.scope. Feb 23 04:47:51 localhost systemd[1]: Started libcrun container. Feb 23 04:47:51 localhost podman[296889]: 2026-02-23 09:47:51.380807944 +0000 UTC m=+0.134888981 container init 18cc7b4a83057960d6e22ea1d5563cfa5c137b73d9a92938628a5d7f3ee4a214 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_stonebraker, RELEASE=main, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, maintainer=Guillaume Abrioux , distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, architecture=x86_64, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:47:51 localhost podman[296889]: 2026-02-23 09:47:51.289061172 +0000 UTC m=+0.043142269 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:47:51 localhost 
podman[296889]: 2026-02-23 09:47:51.390746178 +0000 UTC m=+0.144827225 container start 18cc7b4a83057960d6e22ea1d5563cfa5c137b73d9a92938628a5d7f3ee4a214 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_stonebraker, release=1770267347, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, RELEASE=main, vcs-type=git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, io.openshift.expose-services=, name=rhceph, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public) Feb 23 04:47:51 localhost blissful_stonebraker[296904]: 167 167 Feb 23 04:47:51 localhost systemd[1]: libpod-18cc7b4a83057960d6e22ea1d5563cfa5c137b73d9a92938628a5d7f3ee4a214.scope: Deactivated successfully. 
Feb 23 04:47:51 localhost podman[296889]: 2026-02-23 09:47:51.391197772 +0000 UTC m=+0.145278809 container attach 18cc7b4a83057960d6e22ea1d5563cfa5c137b73d9a92938628a5d7f3ee4a214 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_stonebraker, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.buildah.version=1.42.2, architecture=x86_64) Feb 23 04:47:51 localhost podman[296889]: 2026-02-23 09:47:51.398098973 +0000 UTC m=+0.152180010 container died 18cc7b4a83057960d6e22ea1d5563cfa5c137b73d9a92938628a5d7f3ee4a214 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_stonebraker, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, maintainer=Guillaume Abrioux , version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7) Feb 23 04:47:51 localhost systemd[1]: var-lib-containers-storage-overlay-0d81ae389391352f16a486cf7c9ba48d9bd57eadb0a9adccda297ac25e17163d-merged.mount: Deactivated successfully. Feb 23 04:47:51 localhost podman[296909]: 2026-02-23 09:47:51.487263236 +0000 UTC m=+0.078374315 container remove 18cc7b4a83057960d6e22ea1d5563cfa5c137b73d9a92938628a5d7f3ee4a214 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_stonebraker, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, release=1770267347, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_BRANCH=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.) Feb 23 04:47:51 localhost systemd[1]: libpod-conmon-18cc7b4a83057960d6e22ea1d5563cfa5c137b73d9a92938628a5d7f3ee4a214.scope: Deactivated successfully. Feb 23 04:47:51 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... Feb 23 04:47:51 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... Feb 23 04:47:51 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005626465.localdomain Feb 23 04:47:51 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005626465.localdomain Feb 23 04:47:51 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect) Feb 23 04:47:52 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory Feb 23 04:47:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:47:52 localhost podman[296950]: 2026-02-23 09:47:52.511656059 +0000 UTC m=+0.081308574 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.43.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260216) Feb 23 04:47:52 localhost podman[296950]: 2026-02-23 09:47:52.581956027 +0000 UTC m=+0.151608542 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:47:52 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:47:52 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect) Feb 23 04:47:52 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory Feb 23 04:47:52 localhost podman[297008]: Feb 23 04:47:52 localhost podman[297008]: 2026-02-23 09:47:52.911958088 +0000 UTC m=+0.073157856 container create f4a60533d7ecec6feb0053bdc8565e3cd3459b29ae90e1e26ae04c2106b87f32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_spence, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vcs-type=git, ceph=True, GIT_CLEAN=True, name=rhceph, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 23 04:47:52 localhost systemd[1]: Started libpod-conmon-f4a60533d7ecec6feb0053bdc8565e3cd3459b29ae90e1e26ae04c2106b87f32.scope. Feb 23 04:47:52 localhost systemd[1]: Started libcrun container. 
Feb 23 04:47:52 localhost podman[297008]: 2026-02-23 09:47:52.978252233 +0000 UTC m=+0.139452011 container init f4a60533d7ecec6feb0053bdc8565e3cd3459b29ae90e1e26ae04c2106b87f32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_spence, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , release=1770267347, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, ceph=True)
Feb 23 04:47:52 localhost podman[297008]: 2026-02-23 09:47:52.882286361 +0000 UTC m=+0.043486159 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:47:52 localhost podman[297008]: 2026-02-23 09:47:52.988476035 +0000 UTC m=+0.149675813 container start f4a60533d7ecec6feb0053bdc8565e3cd3459b29ae90e1e26ae04c2106b87f32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_spence, io.buildah.version=1.42.2, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, maintainer=Guillaume Abrioux , release=1770267347, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 23 04:47:52 localhost podman[297008]: 2026-02-23 09:47:52.988900278 +0000 UTC m=+0.150100056 container attach f4a60533d7ecec6feb0053bdc8565e3cd3459b29ae90e1e26ae04c2106b87f32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_spence, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, build-date=2026-02-09T10:25:24Z, version=7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, distribution-scope=public, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347)
Feb 23 04:47:52 localhost loving_spence[297024]: 167 167
Feb 23 04:47:52 localhost systemd[1]: libpod-f4a60533d7ecec6feb0053bdc8565e3cd3459b29ae90e1e26ae04c2106b87f32.scope: Deactivated successfully.
Feb 23 04:47:52 localhost podman[297008]: 2026-02-23 09:47:52.991985703 +0000 UTC m=+0.153185501 container died f4a60533d7ecec6feb0053bdc8565e3cd3459b29ae90e1e26ae04c2106b87f32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_spence, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., vcs-type=git, release=1770267347, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, architecture=x86_64, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, CEPH_POINT_RELEASE=, ceph=True)
Feb 23 04:47:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:47:53 localhost podman[297029]: 2026-02-23 09:47:53.080880658 +0000 UTC m=+0.081360607 container remove f4a60533d7ecec6feb0053bdc8565e3cd3459b29ae90e1e26ae04c2106b87f32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_spence, release=1770267347, RELEASE=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, distribution-scope=public, GIT_BRANCH=main, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 23 04:47:53 localhost systemd[1]: libpod-conmon-f4a60533d7ecec6feb0053bdc8565e3cd3459b29ae90e1e26ae04c2106b87f32.scope: Deactivated successfully.
Feb 23 04:47:53 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005626465.drvnoy (monmap changed)...
Feb 23 04:47:53 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005626465.drvnoy (monmap changed)...
Feb 23 04:47:53 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005626465.drvnoy on np0005626465.localdomain
Feb 23 04:47:53 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005626465.drvnoy on np0005626465.localdomain
Feb 23 04:47:53 localhost systemd[1]: var-lib-containers-storage-overlay-bb7a4fed79eba253623a7b20c16b06e4b235533de094efec1a5071740ee2f542-merged.mount: Deactivated successfully.
Feb 23 04:47:53 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect)
Feb 23 04:47:53 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory
Feb 23 04:47:53 localhost podman[297107]:
Feb 23 04:47:54 localhost podman[297107]: 2026-02-23 09:47:54.003320457 +0000 UTC m=+0.072523196 container create 98aa671da96cf18a26ce3a36a836262b4e3e7260dfbddc2af5fd5c2d0cb7a0af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_galileo, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, release=1770267347, name=rhceph, distribution-scope=public, RELEASE=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7)
Feb 23 04:47:54 localhost systemd[1]: Started libpod-conmon-98aa671da96cf18a26ce3a36a836262b4e3e7260dfbddc2af5fd5c2d0cb7a0af.scope.
Feb 23 04:47:54 localhost systemd[1]: Started libcrun container.
Feb 23 04:47:54 localhost podman[297107]: 2026-02-23 09:47:54.068103706 +0000 UTC m=+0.137306475 container init 98aa671da96cf18a26ce3a36a836262b4e3e7260dfbddc2af5fd5c2d0cb7a0af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_galileo, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, version=7, io.buildah.version=1.42.2, release=1770267347, vcs-type=git, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux , architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7)
Feb 23 04:47:54 localhost podman[297107]: 2026-02-23 09:47:53.974404543 +0000 UTC m=+0.043607362 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:47:54 localhost podman[297107]: 2026-02-23 09:47:54.076600816 +0000 UTC m=+0.145803555 container start 98aa671da96cf18a26ce3a36a836262b4e3e7260dfbddc2af5fd5c2d0cb7a0af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_galileo, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , GIT_CLEAN=True, vcs-type=git, distribution-scope=public, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, release=1770267347, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 23 04:47:54 localhost podman[297107]: 2026-02-23 09:47:54.076765251 +0000 UTC m=+0.145968070 container attach 98aa671da96cf18a26ce3a36a836262b4e3e7260dfbddc2af5fd5c2d0cb7a0af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_galileo, name=rhceph, distribution-scope=public, GIT_BRANCH=main, ceph=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, vendor=Red Hat, Inc., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 23 04:47:54 localhost compassionate_galileo[297122]: 167 167
Feb 23 04:47:54 localhost systemd[1]: libpod-98aa671da96cf18a26ce3a36a836262b4e3e7260dfbddc2af5fd5c2d0cb7a0af.scope: Deactivated successfully.
Feb 23 04:47:54 localhost podman[297107]: 2026-02-23 09:47:54.079509764 +0000 UTC m=+0.148712503 container died 98aa671da96cf18a26ce3a36a836262b4e3e7260dfbddc2af5fd5c2d0cb7a0af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_galileo, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_CLEAN=True, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container)
Feb 23 04:47:54 localhost podman[297127]: 2026-02-23 09:47:54.171594688 +0000 UTC m=+0.079565203 container remove 98aa671da96cf18a26ce3a36a836262b4e3e7260dfbddc2af5fd5c2d0cb7a0af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_galileo, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z)
Feb 23 04:47:54 localhost systemd[1]: libpod-conmon-98aa671da96cf18a26ce3a36a836262b4e3e7260dfbddc2af5fd5c2d0cb7a0af.scope: Deactivated successfully.
Feb 23 04:47:54 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005626465.hlpkwo (monmap changed)...
Feb 23 04:47:54 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005626465.hlpkwo (monmap changed)...
Feb 23 04:47:54 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005626465.hlpkwo on np0005626465.localdomain
Feb 23 04:47:54 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005626465.hlpkwo on np0005626465.localdomain
Feb 23 04:47:54 localhost systemd[1]: var-lib-containers-storage-overlay-67d8642837f9f4e31497fed391955ab05f5496aa9c19651ec29098101efacc0e-merged.mount: Deactivated successfully.
Feb 23 04:47:54 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect)
Feb 23 04:47:54 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory
Feb 23 04:47:54 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:54 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:54 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:54 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:54 localhost ceph-mon[296755]: Reconfiguring osd.0 (monmap changed)...
Feb 23 04:47:54 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 23 04:47:54 localhost ceph-mon[296755]: Reconfiguring daemon osd.0 on np0005626465.localdomain
Feb 23 04:47:54 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:54 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:54 localhost ceph-mon[296755]: Reconfiguring osd.3 (monmap changed)...
Feb 23 04:47:54 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 23 04:47:54 localhost ceph-mon[296755]: Reconfiguring daemon osd.3 on np0005626465.localdomain
Feb 23 04:47:54 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:54 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:54 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 23 04:47:54 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 23 04:47:54 localhost podman[297198]:
Feb 23 04:47:54 localhost podman[297198]: 2026-02-23 09:47:54.851660881 +0000 UTC m=+0.085109600 container create 483a5d0f541aca2abbcd8d762e9a0e2d2c4735d0101fe105e5d6988948b6d61d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_goldwasser, RELEASE=main, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, release=1770267347, io.buildah.version=1.42.2, ceph=True, io.openshift.expose-services=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main)
Feb 23 04:47:54 localhost systemd[1]: Started libpod-conmon-483a5d0f541aca2abbcd8d762e9a0e2d2c4735d0101fe105e5d6988948b6d61d.scope.
Feb 23 04:47:54 localhost systemd[1]: Started libcrun container.
Feb 23 04:47:54 localhost podman[297198]: 2026-02-23 09:47:54.9098675 +0000 UTC m=+0.143316249 container init 483a5d0f541aca2abbcd8d762e9a0e2d2c4735d0101fe105e5d6988948b6d61d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_goldwasser, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, distribution-scope=public, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, RELEASE=main, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux )
Feb 23 04:47:54 localhost podman[297198]: 2026-02-23 09:47:54.918948997 +0000 UTC m=+0.152397746 container start 483a5d0f541aca2abbcd8d762e9a0e2d2c4735d0101fe105e5d6988948b6d61d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_goldwasser, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, RELEASE=main, vcs-type=git, CEPH_POINT_RELEASE=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., architecture=x86_64, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2)
Feb 23 04:47:54 localhost podman[297198]: 2026-02-23 09:47:54.91936758 +0000 UTC m=+0.152816349 container attach 483a5d0f541aca2abbcd8d762e9a0e2d2c4735d0101fe105e5d6988948b6d61d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_goldwasser, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , vcs-type=git, version=7, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container)
Feb 23 04:47:54 localhost wonderful_goldwasser[297214]: 167 167
Feb 23 04:47:54 localhost systemd[1]: libpod-483a5d0f541aca2abbcd8d762e9a0e2d2c4735d0101fe105e5d6988948b6d61d.scope: Deactivated successfully.
Feb 23 04:47:54 localhost podman[297198]: 2026-02-23 09:47:54.921985189 +0000 UTC m=+0.155433958 container died 483a5d0f541aca2abbcd8d762e9a0e2d2c4735d0101fe105e5d6988948b6d61d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_goldwasser, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, RELEASE=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.buildah.version=1.42.2, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., ceph=True, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 23 04:47:54 localhost podman[297198]: 2026-02-23 09:47:54.824005656 +0000 UTC m=+0.057454395 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:47:55 localhost podman[297219]: 2026-02-23 09:47:55.014544678 +0000 UTC m=+0.079777519 container remove 483a5d0f541aca2abbcd8d762e9a0e2d2c4735d0101fe105e5d6988948b6d61d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_goldwasser, GIT_CLEAN=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7)
Feb 23 04:47:55 localhost systemd[1]: libpod-conmon-483a5d0f541aca2abbcd8d762e9a0e2d2c4735d0101fe105e5d6988948b6d61d.scope: Deactivated successfully.
Feb 23 04:47:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:47:55 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005626466 (monmap changed)...
Feb 23 04:47:55 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005626466 (monmap changed)...
Feb 23 04:47:55 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005626466 on np0005626466.localdomain
Feb 23 04:47:55 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005626466 on np0005626466.localdomain
Feb 23 04:47:55 localhost systemd[1]: var-lib-containers-storage-overlay-47fbfffa269bef4a0b296f73d1703d93111b4ec7edb7fb1fe9190cffe59bfdcc-merged.mount: Deactivated successfully.
Feb 23 04:47:55 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect)
Feb 23 04:47:55 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory
Feb 23 04:47:56 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Feb 23 04:47:56 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Feb 23 04:47:56 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005626466.localdomain
Feb 23 04:47:56 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005626466.localdomain
Feb 23 04:47:56 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect)
Feb 23 04:47:56 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory
Feb 23 04:47:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:47:57 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Feb 23 04:47:57 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Feb 23 04:47:57 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005626466.localdomain
Feb 23 04:47:57 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005626466.localdomain
Feb 23 04:47:57 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect)
Feb 23 04:47:57 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory
Feb 23 04:47:58 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005626466.vaywlp (monmap changed)...
Feb 23 04:47:58 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005626466.vaywlp (monmap changed)...
Feb 23 04:47:58 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005626466.vaywlp on np0005626466.localdomain
Feb 23 04:47:58 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005626466.vaywlp on np0005626466.localdomain
Feb 23 04:47:58 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect)
Feb 23 04:47:58 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory
Feb 23 04:47:58 localhost ceph-mon[296755]: Reconfiguring mds.mds.np0005626465.drvnoy (monmap changed)...
Feb 23 04:47:58 localhost ceph-mon[296755]: Reconfiguring daemon mds.mds.np0005626465.drvnoy on np0005626465.localdomain
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:58 localhost ceph-mon[296755]: Reconfiguring mgr.np0005626465.hlpkwo (monmap changed)...
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 23 04:47:58 localhost ceph-mon[296755]: Reconfiguring daemon mgr.np0005626465.hlpkwo on np0005626465.localdomain
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:47:58 localhost ceph-mon[296755]: Reconfiguring crash.np0005626466 (monmap changed)...
Feb 23 04:47:58 localhost ceph-mon[296755]: Reconfiguring daemon crash.np0005626466 on np0005626466.localdomain
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 23 04:47:58 localhost ceph-mon[296755]: Reconfiguring osd.1 (monmap changed)...
Feb 23 04:47:58 localhost ceph-mon[296755]: Reconfiguring daemon osd.1 on np0005626466.localdomain
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 23 04:47:58 localhost ceph-mon[296755]: Reconfiguring osd.4 (monmap changed)...
Feb 23 04:47:58 localhost ceph-mon[296755]: Reconfiguring daemon osd.4 on np0005626466.localdomain
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 23 04:47:58 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 23 04:47:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:47:59 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005626466.nisqfq (monmap changed)...
Feb 23 04:47:59 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005626466.nisqfq (monmap changed)...
Feb 23 04:47:59 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005626466.nisqfq on np0005626466.localdomain
Feb 23 04:47:59 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005626466.nisqfq on np0005626466.localdomain
Feb 23 04:47:59 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect)
Feb 23 04:47:59 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory
Feb 23 04:48:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.
Feb 23 04:48:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.
Feb 23 04:48:00 localhost podman[297255]: 2026-02-23 09:48:00.672460464 +0000 UTC m=+0.088114772 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user':
'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute) Feb 23 04:48:00 localhost podman[297255]: 2026-02-23 09:48:00.688871246 +0000 UTC m=+0.104525574 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck 
compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS) Feb 23 04:48:00 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:48:00 localhost systemd[1]: tmp-crun.py5npw.mount: Deactivated successfully. 
Feb 23 04:48:00 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect) Feb 23 04:48:00 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory Feb 23 04:48:00 localhost podman[297254]: 2026-02-23 09:48:00.78918787 +0000 UTC m=+0.205166918 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, 
org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 23 04:48:00 localhost podman[297254]: 2026-02-23 09:48:00.825035446 +0000 UTC m=+0.241014424 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 23 04:48:00 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:48:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:48:01 localhost systemd[1]: tmp-crun.tln1IG.mount: Deactivated successfully. Feb 23 04:48:01 localhost podman[297381]: 2026-02-23 09:48:01.458681223 +0000 UTC m=+0.089988960 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=rhceph-container, RELEASE=main, version=7, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, vcs-type=git, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph 
ceph, io.openshift.expose-services=, ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True) Feb 23 04:48:01 localhost podman[297381]: 2026-02-23 09:48:01.56597771 +0000 UTC m=+0.197285447 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, ceph=True, architecture=x86_64, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_CLEAN=True) Feb 23 04:48:01 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect) Feb 23 04:48:01 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory Feb 23 04:48:01 localhost openstack_network_exporter[243519]: ERROR 09:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 
23 04:48:01 localhost openstack_network_exporter[243519]: Feb 23 04:48:01 localhost openstack_network_exporter[243519]: ERROR 09:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:48:01 localhost openstack_network_exporter[243519]: Feb 23 04:48:02 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect) Feb 23 04:48:02 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory Feb 23 04:48:02 localhost ceph-mon[296755]: Reconfiguring mds.mds.np0005626466.vaywlp (monmap changed)... Feb 23 04:48:02 localhost ceph-mon[296755]: Reconfiguring daemon mds.mds.np0005626466.vaywlp on np0005626466.localdomain Feb 23 04:48:02 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:02 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:02 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:48:02 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:48:02 localhost ceph-mon[296755]: Reconfiguring mgr.np0005626466.nisqfq (monmap changed)... 
Feb 23 04:48:02 localhost ceph-mon[296755]: Reconfiguring daemon mgr.np0005626466.nisqfq on np0005626466.localdomain Feb 23 04:48:02 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:02 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:02 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:02 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:03 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.34411 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch Feb 23 04:48:03 localhost ceph-mgr[285904]: [cephadm INFO root] Reconfig service osd.default_drive_group Feb 23 04:48:03 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfig service osd.default_drive_group Feb 23 04:48:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:48:03 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev d55252be-78a9-4e66-bd1a-51ec02ce751c (Updating node-proxy deployment (+4 -> 4)) Feb 23 04:48:03 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev d55252be-78a9-4e66-bd1a-51ec02ce751c (Updating node-proxy deployment (+4 -> 4)) Feb 23 04:48:03 localhost ceph-mgr[285904]: [progress INFO root] Completed event d55252be-78a9-4e66-bd1a-51ec02ce751c (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Feb 23 04:48:03 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005626463.localdomain Feb 23 04:48:03 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005626463.localdomain Feb 23 04:48:03 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 
172.18.0.107:0/438899899; not ready for session (expect reconnect) Feb 23 04:48:03 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory Feb 23 04:48:04 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005626463.localdomain Feb 23 04:48:04 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005626463.localdomain Feb 23 04:48:04 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect) Feb 23 04:48:04 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory Feb 23 04:48:04 localhost ceph-mon[296755]: Reconfig service osd.default_drive_group Feb 23 04:48:04 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:04 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:04 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:04 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:04 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:48:04 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:04 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:04 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:04 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:04 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:04 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 
04:48:04 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:04 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:04 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:04 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 23 04:48:04 localhost ceph-mon[296755]: Reconfiguring daemon osd.2 on np0005626463.localdomain Feb 23 04:48:04 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:04 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 04:48:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:48:05 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005626465.localdomain Feb 23 04:48:05 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005626465.localdomain Feb 23 04:48:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:48:05 localhost systemd[1]: tmp-crun.QNXmQn.mount: Deactivated successfully. 
Feb 23 04:48:05 localhost podman[297606]: 2026-02-23 09:48:05.754738118 +0000 UTC m=+0.092392903 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:48:05 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect) Feb 23 04:48:05 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory Feb 23 04:48:05 localhost podman[297606]: 2026-02-23 09:48:05.791842952 +0000 UTC m=+0.129497777 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck 
podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 23 04:48:05 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:48:06 localhost podman[297661]: Feb 23 04:48:06 localhost podman[297661]: 2026-02-23 09:48:06.165851167 +0000 UTC m=+0.069258727 container create 95c1f3c96eec15193f032f6480052e02d58c9fc51dfaa915a1923fee4ebb3a6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_roentgen, ceph=True, release=1770267347, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, maintainer=Guillaume Abrioux , architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, 
url=https://catalog.redhat.com/en/search?searchType=containers) Feb 23 04:48:06 localhost systemd[1]: Started libpod-conmon-95c1f3c96eec15193f032f6480052e02d58c9fc51dfaa915a1923fee4ebb3a6e.scope. Feb 23 04:48:06 localhost systemd[1]: Started libcrun container. Feb 23 04:48:06 localhost podman[297661]: 2026-02-23 09:48:06.230899104 +0000 UTC m=+0.134306664 container init 95c1f3c96eec15193f032f6480052e02d58c9fc51dfaa915a1923fee4ebb3a6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_roentgen, CEPH_POINT_RELEASE=, name=rhceph, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2026-02-09T10:25:24Z, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, version=7, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, vcs-type=git, distribution-scope=public, com.redhat.component=rhceph-container, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.) 
Feb 23 04:48:06 localhost podman[297661]: 2026-02-23 09:48:06.13353733 +0000 UTC m=+0.036944930 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:48:06 localhost podman[297661]: 2026-02-23 09:48:06.241965472 +0000 UTC m=+0.145373032 container start 95c1f3c96eec15193f032f6480052e02d58c9fc51dfaa915a1923fee4ebb3a6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_roentgen, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , version=7, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347) Feb 23 04:48:06 localhost podman[297661]: 2026-02-23 09:48:06.242273661 +0000 UTC m=+0.145681221 container attach 95c1f3c96eec15193f032f6480052e02d58c9fc51dfaa915a1923fee4ebb3a6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_roentgen, name=rhceph, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, version=7, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, ceph=True, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., architecture=x86_64, GIT_BRANCH=main, release=1770267347, RELEASE=main) Feb 23 04:48:06 localhost exciting_roentgen[297676]: 167 167 Feb 23 04:48:06 localhost systemd[1]: libpod-95c1f3c96eec15193f032f6480052e02d58c9fc51dfaa915a1923fee4ebb3a6e.scope: Deactivated successfully. 
Feb 23 04:48:06 localhost podman[297661]: 2026-02-23 09:48:06.244765227 +0000 UTC m=+0.148172817 container died 95c1f3c96eec15193f032f6480052e02d58c9fc51dfaa915a1923fee4ebb3a6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_roentgen, version=7, maintainer=Guillaume Abrioux , vcs-type=git, name=rhceph, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., release=1770267347, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main) Feb 23 04:48:06 localhost podman[297681]: 2026-02-23 09:48:06.337630734 +0000 UTC m=+0.080341585 container remove 95c1f3c96eec15193f032f6480052e02d58c9fc51dfaa915a1923fee4ebb3a6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_roentgen, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, name=rhceph, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=) Feb 23 04:48:06 localhost systemd[1]: libpod-conmon-95c1f3c96eec15193f032f6480052e02d58c9fc51dfaa915a1923fee4ebb3a6e.scope: Deactivated successfully. Feb 23 04:48:06 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005626465.localdomain Feb 23 04:48:06 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005626465.localdomain Feb 23 04:48:06 localhost systemd[1]: var-lib-containers-storage-overlay-78bb246b3abec542ffd71a17c6a50163f129c3954964c87dcd6d8e2aa78d1b1c-merged.mount: Deactivated successfully. 
Feb 23 04:48:06 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect) Feb 23 04:48:06 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory Feb 23 04:48:06 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:06 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:06 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:06 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 23 04:48:06 localhost ceph-mon[296755]: Reconfiguring daemon osd.5 on np0005626463.localdomain Feb 23 04:48:06 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:06 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:06 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:06 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:06 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:06 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 23 04:48:06 localhost ceph-mon[296755]: Reconfiguring daemon osd.0 on np0005626465.localdomain Feb 23 04:48:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 23 04:48:07 localhost podman[297757]: Feb 23 04:48:07 localhost podman[297757]: 2026-02-23 09:48:07.200825434 +0000 UTC m=+0.074466046 container create 
a433f486e3f7668921e68c5c92949421133382389ee496e4d5ba2a7489d9daf2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_volhard, name=rhceph, io.openshift.tags=rhceph ceph, release=1770267347, CEPH_POINT_RELEASE=, ceph=True, GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 23 04:48:07 localhost systemd[1]: Started libpod-conmon-a433f486e3f7668921e68c5c92949421133382389ee496e4d5ba2a7489d9daf2.scope. Feb 23 04:48:07 localhost systemd[1]: Started libcrun container. 
Feb 23 04:48:07 localhost podman[297757]: 2026-02-23 09:48:07.26552536 +0000 UTC m=+0.139165962 container init a433f486e3f7668921e68c5c92949421133382389ee496e4d5ba2a7489d9daf2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_volhard, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_CLEAN=True, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, release=1770267347, version=7, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=) Feb 23 04:48:07 localhost podman[297757]: 2026-02-23 09:48:07.17093586 +0000 UTC m=+0.044576502 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:48:07 localhost podman[297757]: 2026-02-23 09:48:07.28059621 +0000 UTC m=+0.154236812 container start a433f486e3f7668921e68c5c92949421133382389ee496e4d5ba2a7489d9daf2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_volhard, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, 
io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, RELEASE=main, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True, architecture=x86_64, ceph=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z) Feb 23 04:48:07 localhost podman[297757]: 2026-02-23 09:48:07.281019623 +0000 UTC m=+0.154660275 container attach a433f486e3f7668921e68c5c92949421133382389ee496e4d5ba2a7489d9daf2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_volhard, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, vcs-type=git, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, CEPH_POINT_RELEASE=) Feb 23 04:48:07 localhost pensive_volhard[297774]: 167 167 Feb 23 04:48:07 localhost systemd[1]: libpod-a433f486e3f7668921e68c5c92949421133382389ee496e4d5ba2a7489d9daf2.scope: Deactivated successfully. Feb 23 04:48:07 localhost podman[297757]: 2026-02-23 09:48:07.28420327 +0000 UTC m=+0.157843882 container died a433f486e3f7668921e68c5c92949421133382389ee496e4d5ba2a7489d9daf2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_volhard, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, release=1770267347, GIT_BRANCH=main, version=7, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, architecture=x86_64, name=rhceph, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:48:07 localhost podman[297779]: 2026-02-23 09:48:07.374556771 +0000 UTC m=+0.082232754 container remove a433f486e3f7668921e68c5c92949421133382389ee496e4d5ba2a7489d9daf2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_volhard, vcs-type=git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=1770267347, version=7, maintainer=Guillaume Abrioux , architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 04:48:07 localhost systemd[1]: libpod-conmon-a433f486e3f7668921e68c5c92949421133382389ee496e4d5ba2a7489d9daf2.scope: Deactivated successfully. Feb 23 04:48:07 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005626466.localdomain Feb 23 04:48:07 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005626466.localdomain Feb 23 04:48:07 localhost systemd[1]: tmp-crun.ICJ8nE.mount: Deactivated successfully. 
Feb 23 04:48:07 localhost systemd[1]: var-lib-containers-storage-overlay-18b0155c92431fd05b1880f6a3d92771968d1556555adfab7ba1259b3f2e30dc-merged.mount: Deactivated successfully. Feb 23 04:48:07 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mon.np0005626465 172.18.0.107:0/438899899; not ready for session (expect reconnect) Feb 23 04:48:07 localhost ceph-mgr[285904]: mgr finish mon failed to return metadata for mon.np0005626465: (2) No such file or directory Feb 23 04:48:07 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:07.862+0000 7f33f359e640 -1 mgr handle_mgr_map I was active but no longer am Feb 23 04:48:07 localhost systemd[1]: session-65.scope: Deactivated successfully. Feb 23 04:48:07 localhost systemd[1]: session-65.scope: Consumed 23.478s CPU time. Feb 23 04:48:07 localhost systemd-logind[759]: Session 65 logged out. Waiting for processes to exit. Feb 23 04:48:07 localhost systemd-logind[759]: Removed session 65. 
Feb 23 04:48:07 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: ignoring --setuser ceph since I am not root Feb 23 04:48:07 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: ignoring --setgroup ceph since I am not root Feb 23 04:48:07 localhost ceph-mon[296755]: mon.np0005626465@-1(probing) e11 handle_auth_request failed to assign global_id Feb 23 04:48:07 localhost ceph-mgr[285904]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-mgr, pid 2 Feb 23 04:48:07 localhost ceph-mgr[285904]: pidfile_write: ignore empty --pid-file Feb 23 04:48:07 localhost ceph-mgr[285904]: mgr[py] Loading python module 'alerts' Feb 23 04:48:08 localhost ceph-mgr[285904]: mgr[py] Module alerts has missing NOTIFY_TYPES member Feb 23 04:48:08 localhost ceph-mgr[285904]: mgr[py] Loading python module 'balancer' Feb 23 04:48:08 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:08.043+0000 7fc44c910140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Feb 23 04:48:08 localhost ceph-mgr[285904]: mgr[py] Module balancer has missing NOTIFY_TYPES member Feb 23 04:48:08 localhost ceph-mgr[285904]: mgr[py] Loading python module 'cephadm' Feb 23 04:48:08 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:08.112+0000 7fc44c910140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Feb 23 04:48:08 localhost sshd[297827]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:48:08 localhost systemd-logind[759]: New session 68 of user ceph-admin. Feb 23 04:48:08 localhost systemd[1]: Started Session 68 of User ceph-admin. 
Feb 23 04:48:08 localhost ceph-mgr[285904]: mgr[py] Loading python module 'crash' Feb 23 04:48:08 localhost ceph-mgr[285904]: mgr[py] Module crash has missing NOTIFY_TYPES member Feb 23 04:48:08 localhost ceph-mgr[285904]: mgr[py] Loading python module 'dashboard' Feb 23 04:48:08 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:08.726+0000 7fc44c910140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Feb 23 04:48:08 localhost ceph-mon[296755]: mon.np0005626465@-1(synchronizing).osd e83 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Feb 23 04:48:08 localhost ceph-mon[296755]: mon.np0005626465@-1(synchronizing).osd e83 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Feb 23 04:48:08 localhost ceph-mon[296755]: mon.np0005626465@-1(synchronizing).osd e84 e84: 6 total, 6 up, 6 in Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 23 04:48:08 localhost ceph-mon[296755]: Reconfiguring daemon osd.3 on np0005626465.localdomain Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26597 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:48:08 localhost 
ceph-mon[296755]: from='mgr.26597 172.18.0.107:0/4195960694' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 23 04:48:08 localhost ceph-mon[296755]: from='client.? 172.18.0.200:0/3009308721' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 23 04:48:08 localhost ceph-mon[296755]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 23 04:48:08 localhost ceph-mon[296755]: Activating manager daemon np0005626463.wtksup Feb 23 04:48:08 localhost ceph-mon[296755]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 23 04:48:08 localhost ceph-mon[296755]: Manager daemon np0005626463.wtksup is now available Feb 23 04:48:08 localhost ceph-mon[296755]: removing stray HostCache host record np0005626460.localdomain.devices.0 Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626460.localdomain.devices.0"} : dispatch Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626460.localdomain.devices.0"} : dispatch Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005626460.localdomain.devices.0"}]': finished Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626460.localdomain.devices.0"} : dispatch Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626460.localdomain.devices.0"} : dispatch Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' 
cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005626460.localdomain.devices.0"}]': finished Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626463.wtksup/mirror_snapshot_schedule"} : dispatch Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626463.wtksup/mirror_snapshot_schedule"} : dispatch Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626463.wtksup/trash_purge_schedule"} : dispatch Feb 23 04:48:08 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626463.wtksup/trash_purge_schedule"} : dispatch Feb 23 04:48:09 localhost ceph-mgr[285904]: mgr[py] Loading python module 'devicehealth' Feb 23 04:48:09 localhost ceph-mgr[285904]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Feb 23 04:48:09 localhost ceph-mgr[285904]: mgr[py] Loading python module 'diskprediction_local' Feb 23 04:48:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:09.270+0000 7fc44c910140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Feb 23 04:48:09 localhost podman[297939]: 2026-02-23 09:48:09.386512191 +0000 UTC m=+0.080431458 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, version=7, 
io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, ceph=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:48:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Feb 23 04:48:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
Feb 23 04:48:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: from numpy import show_config as show_numpy_config Feb 23 04:48:09 localhost ceph-mgr[285904]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Feb 23 04:48:09 localhost ceph-mgr[285904]: mgr[py] Loading python module 'influx' Feb 23 04:48:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:09.406+0000 7fc44c910140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Feb 23 04:48:09 localhost ceph-mgr[285904]: mgr[py] Module influx has missing NOTIFY_TYPES member Feb 23 04:48:09 localhost ceph-mgr[285904]: mgr[py] Loading python module 'insights' Feb 23 04:48:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:09.463+0000 7fc44c910140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Feb 23 04:48:09 localhost podman[297939]: 2026-02-23 09:48:09.484703531 +0000 UTC m=+0.178622798 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, GIT_CLEAN=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, release=1770267347, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph 
Storage 7, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z) Feb 23 04:48:09 localhost ceph-mgr[285904]: mgr[py] Loading python module 'iostat' Feb 23 04:48:09 localhost ceph-mgr[285904]: mgr[py] Module iostat has missing NOTIFY_TYPES member Feb 23 04:48:09 localhost ceph-mgr[285904]: mgr[py] Loading python module 'k8sevents' Feb 23 04:48:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:09.576+0000 7fc44c910140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Feb 23 04:48:09 localhost ceph-mgr[285904]: mgr[py] Loading python module 'localpool' Feb 23 04:48:09 localhost ceph-mgr[285904]: mgr[py] Loading python module 'mds_autoscaler' Feb 23 04:48:10 localhost ceph-mgr[285904]: mgr[py] Loading python module 'mirroring' Feb 23 04:48:10 localhost ceph-mgr[285904]: mgr[py] Loading python module 'nfs' Feb 23 04:48:10 localhost ceph-mgr[285904]: mgr[py] Module nfs has missing NOTIFY_TYPES member Feb 23 04:48:10 localhost ceph-mgr[285904]: mgr[py] Loading python module 'orchestrator' Feb 23 04:48:10 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:10.281+0000 7fc44c910140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Feb 23 04:48:10 localhost ceph-mgr[285904]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Feb 23 04:48:10 localhost ceph-mgr[285904]: mgr[py] Loading python module 'osd_perf_query' Feb 23 04:48:10 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:10.428+0000 7fc44c910140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Feb 23 04:48:10 
localhost ceph-mgr[285904]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Feb 23 04:48:10 localhost ceph-mgr[285904]: mgr[py] Loading python module 'osd_support' Feb 23 04:48:10 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:10.490+0000 7fc44c910140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Feb 23 04:48:10 localhost ceph-mgr[285904]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Feb 23 04:48:10 localhost ceph-mgr[285904]: mgr[py] Loading python module 'pg_autoscaler' Feb 23 04:48:10 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:10.544+0000 7fc44c910140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Feb 23 04:48:10 localhost ceph-mgr[285904]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Feb 23 04:48:10 localhost ceph-mgr[285904]: mgr[py] Loading python module 'progress' Feb 23 04:48:10 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:10.611+0000 7fc44c910140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Feb 23 04:48:10 localhost ceph-mgr[285904]: mgr[py] Module progress has missing NOTIFY_TYPES member Feb 23 04:48:10 localhost ceph-mgr[285904]: mgr[py] Loading python module 'prometheus' Feb 23 04:48:10 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:10.668+0000 7fc44c910140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Feb 23 04:48:10 localhost ceph-mgr[285904]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Feb 23 04:48:10 localhost ceph-mgr[285904]: mgr[py] Loading python module 'rbd_support' Feb 23 04:48:10 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:10.963+0000 7fc44c910140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Feb 23 04:48:11 localhost 
ceph-mon[296755]: [23/Feb/2026:09:48:09] ENGINE Bus STARTING Feb 23 04:48:11 localhost ceph-mon[296755]: [23/Feb/2026:09:48:09] ENGINE Serving on http://172.18.0.106:8765 Feb 23 04:48:11 localhost ceph-mon[296755]: [23/Feb/2026:09:48:09] ENGINE Serving on https://172.18.0.106:7150 Feb 23 04:48:11 localhost ceph-mon[296755]: [23/Feb/2026:09:48:09] ENGINE Bus STARTED Feb 23 04:48:11 localhost ceph-mon[296755]: [23/Feb/2026:09:48:09] ENGINE Client ('172.18.0.106', 50730) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 23 04:48:11 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:11 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:11 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:11 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:11 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:11 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:11 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:11 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:11 localhost ceph-mgr[285904]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Feb 23 04:48:11 localhost ceph-mgr[285904]: mgr[py] Loading python module 'restful' Feb 23 04:48:11 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:11.045+0000 7fc44c910140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Feb 23 04:48:11 localhost ceph-mgr[285904]: mgr[py] Loading python module 'rgw' Feb 23 04:48:11 localhost ceph-mgr[285904]: mgr[py] Module rgw has missing NOTIFY_TYPES member Feb 23 04:48:11 localhost ceph-mgr[285904]: mgr[py] Loading python 
module 'rook' Feb 23 04:48:11 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:11.365+0000 7fc44c910140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Feb 23 04:48:11 localhost ceph-mgr[285904]: mgr[py] Module rook has missing NOTIFY_TYPES member Feb 23 04:48:11 localhost ceph-mgr[285904]: mgr[py] Loading python module 'selftest' Feb 23 04:48:11 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:11.781+0000 7fc44c910140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Feb 23 04:48:11 localhost ceph-mgr[285904]: mgr[py] Module selftest has missing NOTIFY_TYPES member Feb 23 04:48:11 localhost ceph-mgr[285904]: mgr[py] Loading python module 'snap_schedule' Feb 23 04:48:11 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:11.844+0000 7fc44c910140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Feb 23 04:48:11 localhost ceph-mgr[285904]: mgr[py] Loading python module 'stats' Feb 23 04:48:11 localhost ceph-mgr[285904]: mgr[py] Loading python module 'status' Feb 23 04:48:12 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:12.042+0000 7fc44c910140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Feb 23 04:48:12 localhost ceph-mgr[285904]: mgr[py] Module status has missing NOTIFY_TYPES member Feb 23 04:48:12 localhost ceph-mgr[285904]: mgr[py] Loading python module 'telegraf' Feb 23 04:48:12 localhost ceph-mgr[285904]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Feb 23 04:48:12 localhost ceph-mgr[285904]: mgr[py] Loading python module 'telemetry' Feb 23 04:48:12 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:12.110+0000 7fc44c910140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Feb 23 04:48:12 localhost ceph-mgr[285904]: mgr[py] Module telemetry has missing 
NOTIFY_TYPES member Feb 23 04:48:12 localhost ceph-mgr[285904]: mgr[py] Loading python module 'test_orchestrator' Feb 23 04:48:12 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:12.257+0000 7fc44c910140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Feb 23 04:48:12 localhost ceph-mgr[285904]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Feb 23 04:48:12 localhost ceph-mgr[285904]: mgr[py] Loading python module 'volumes' Feb 23 04:48:12 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:12.405+0000 7fc44c910140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Feb 23 04:48:12 localhost ceph-mgr[285904]: mgr[py] Module volumes has missing NOTIFY_TYPES member Feb 23 04:48:12 localhost ceph-mgr[285904]: mgr[py] Loading python module 'zabbix' Feb 23 04:48:12 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:12.590+0000 7fc44c910140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Feb 23 04:48:12 localhost ceph-mgr[285904]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Feb 23 04:48:12 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:48:12.648+0000 7fc44c910140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Feb 23 04:48:12 localhost ceph-mgr[285904]: ms_deliver_dispatch: unhandled message 0x5583b17791e0 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0 Feb 23 04:48:12 localhost ceph-mgr[285904]: client.0 ms_handle_reset on v2:172.18.0.106:6810/1055095676 Feb 23 04:48:12 localhost podman[241086]: time="2026-02-23T09:48:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:48:12 localhost podman[241086]: @ - - [23/Feb/2026:09:48:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 
200 154070 "" "Go-http-client/1.1" Feb 23 04:48:12 localhost podman[241086]: @ - - [23/Feb/2026:09:48:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17784 "" "Go-http-client/1.1" Feb 23 04:48:12 localhost nova_compute[280321]: 2026-02-23 09:48:12.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:48:12 localhost nova_compute[280321]: 2026-02-23 09:48:12.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:48:12 localhost nova_compute[280321]: 2026-02-23 09:48:12.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:48:12 localhost nova_compute[280321]: 2026-02-23 09:48:12.916 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:48:12 localhost nova_compute[280321]: 2026-02-23 09:48:12.916 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:48:12 localhost nova_compute[280321]: 
2026-02-23 09:48:12.916 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:48:12 localhost nova_compute[280321]: 2026-02-23 09:48:12.917 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:48:12 localhost nova_compute[280321]: 2026-02-23 09:48:12.917 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "config rm", "who": "osd.4", "name": 
"osd_memory_target"} : dispatch Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "config rm", "who": "osd/host:np0005626461", "name": "osd_memory_target"} : dispatch Feb 23 04:48:13 localhost 
ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "config rm", "who": "osd/host:np0005626461", "name": "osd_memory_target"} : dispatch Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 23 04:48:13 localhost ceph-mon[296755]: mon.np0005626465@-1(synchronizing) e11 handle_auth_request failed to assign global_id Feb 23 04:48:13 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:48:13 localhost sshd[298557]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:48:13 localhost nova_compute[280321]: 2026-02-23 09:48:13.382 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:48:13 localhost nova_compute[280321]: 2026-02-23 09:48:13.600 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:48:13 localhost nova_compute[280321]: 2026-02-23 09:48:13.602 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=12368MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:48:13 localhost nova_compute[280321]: 2026-02-23 09:48:13.603 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:48:13 localhost nova_compute[280321]: 2026-02-23 09:48:13.603 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:48:13 localhost nova_compute[280321]: 2026-02-23 09:48:13.670 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:48:13 localhost nova_compute[280321]: 2026-02-23 09:48:13.671 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:48:13 localhost nova_compute[280321]: 2026-02-23 09:48:13.689 280325 DEBUG 
oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:48:13 localhost ceph-mon[296755]: mon.np0005626465@-1(probing) e11 handle_auth_request failed to assign global_id Feb 23 04:48:14 localhost nova_compute[280321]: 2026-02-23 09:48:14.162 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:48:14 localhost nova_compute[280321]: 2026-02-23 09:48:14.168 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:48:14 localhost nova_compute[280321]: 2026-02-23 09:48:14.207 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:48:14 localhost nova_compute[280321]: 2026-02-23 09:48:14.209 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] 
Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:48:14 localhost nova_compute[280321]: 2026-02-23 09:48:14.209 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:48:15 localhost ceph-mon[296755]: Adjusting osd_memory_target on np0005626466.localdomain to 836.6M Feb 23 04:48:15 localhost ceph-mon[296755]: Unable to set osd_memory_target on np0005626466.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 23 04:48:15 localhost ceph-mon[296755]: Adjusting osd_memory_target on np0005626463.localdomain to 836.6M Feb 23 04:48:15 localhost ceph-mon[296755]: Unable to set osd_memory_target on np0005626463.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 04:48:15 localhost ceph-mon[296755]: Adjusting osd_memory_target on np0005626465.localdomain to 836.6M Feb 23 04:48:15 localhost ceph-mon[296755]: Unable to set osd_memory_target on np0005626465.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 04:48:15 localhost ceph-mon[296755]: Updating np0005626461.localdomain:/etc/ceph/ceph.conf Feb 23 04:48:15 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:48:15 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:48:15 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:48:15 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 
04:48:15 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:48:15 localhost ceph-mon[296755]: Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:48:15 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:48:15 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:48:15 localhost ceph-mon[296755]: Updating np0005626461.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:48:15 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:48:15 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:48:15 localhost nova_compute[280321]: 2026-02-23 09:48:15.211 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:48:15 localhost nova_compute[280321]: 2026-02-23 09:48:15.211 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:48:15 localhost nova_compute[280321]: 2026-02-23 09:48:15.212 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:48:15 localhost nova_compute[280321]: 2026-02-23 09:48:15.233 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any 
instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:48:15 localhost nova_compute[280321]: 2026-02-23 09:48:15.234 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:48:15 localhost sshd[298884]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:48:15 localhost nova_compute[280321]: 2026-02-23 09:48:15.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:48:15 localhost nova_compute[280321]: 2026-02-23 09:48:15.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:48:15 localhost nova_compute[280321]: 2026-02-23 09:48:15.893 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:48:15 localhost nova_compute[280321]: 2026-02-23 09:48:15.893 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:48:15 localhost nova_compute[280321]: 2026-02-23 09:48:15.893 280325 DEBUG nova.compute.manager [None 
req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:48:17 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:48:17 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:48:17 localhost ceph-mon[296755]: Updating np0005626461.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:48:17 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:48:17 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:17 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:17 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:17 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:17 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:17 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:17 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:17 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:17 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:17 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 23 04:48:17 localhost ceph-mon[296755]: Reconfiguring daemon osd.1 on np0005626466.localdomain Feb 23 04:48:17 
localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:17 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:17 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:17 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:17 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 23 04:48:17 localhost ceph-mon[296755]: Reconfiguring daemon osd.4 on np0005626466.localdomain Feb 23 04:48:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:48:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:48:17 localhost podman[298905]: 2026-02-23 09:48:17.325528092 +0000 UTC m=+0.085878145 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vcs-type=git, build-date=2026-02-05T04:57:10Z, distribution-scope=public, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.7, architecture=x86_64, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 23 04:48:17 localhost systemd[1]: tmp-crun.qCeCKl.mount: Deactivated successfully. 
Feb 23 04:48:17 localhost podman[298904]: 2026-02-23 09:48:17.378627144 +0000 UTC m=+0.139396979 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:48:17 localhost podman[298904]: 2026-02-23 09:48:17.389926239 +0000 UTC m=+0.150696064 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, 
config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:48:17 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:48:17 localhost podman[298905]: 2026-02-23 09:48:17.44559828 +0000 UTC m=+0.205948323 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, build-date=2026-02-05T04:57:10Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., 
config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64) Feb 23 04:48:17 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:48:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:48:22 localhost podman[298966]: 2026-02-23 09:48:22.979822438 +0000 UTC m=+0.058311613 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, 
io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:48:23 localhost podman[298966]: 2026-02-23 09:48:23.063843075 +0000 UTC m=+0.142332300 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:48:23 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:48:23 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:23 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:23 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:23 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:23 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:48:23 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:23 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:23 localhost ceph-mon[296755]: Saving service mon spec with placement label:mon Feb 23 04:48:23 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:23 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:48:23 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:23 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:27 localhost ceph-mon[296755]: Reconfiguring mon.np0005626461 (monmap changed)... Feb 23 04:48:27 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:48:27 localhost ceph-mon[296755]: Reconfiguring daemon mon.np0005626461 on np0005626461.localdomain Feb 23 04:48:27 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. 
Feb 23 04:48:27 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:27 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:27 localhost ceph-mon[296755]: Reconfiguring mon.np0005626463 (monmap changed)... Feb 23 04:48:27 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:48:27 localhost ceph-mon[296755]: Reconfiguring daemon mon.np0005626463 on np0005626463.localdomain Feb 23 04:48:27 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:27 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:27 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:27.145079) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 04:48:27 localhost ceph-mon[296755]: Reconfiguring mon.np0005626466 (monmap changed)... 
Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Feb 23 04:48:27 localhost ceph-mon[296755]: Reconfiguring daemon mon.np0005626466 on np0005626466.localdomain Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840107145201, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 12045, "num_deletes": 294, "total_data_size": 21175896, "memory_usage": 21988496, "flush_reason": "Manual Compaction"} Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Feb 23 04:48:27 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:27 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840107211482, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 18527224, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 12050, "table_properties": {"data_size": 18460146, "index_size": 37468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28549, "raw_key_size": 303207, "raw_average_key_size": 26, "raw_value_size": 18263751, "raw_average_value_size": 1602, "num_data_blocks": 1443, "num_entries": 11400, "num_filter_entries": 11400, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 1771840070, "file_creation_time": 1771840107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 66547 microseconds, and 34179 cpu microseconds. Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:27.211633) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 18527224 bytes OK Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:27.211690) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:27.213862) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:27.213889) EVENT_LOG_v1 {"time_micros": 1771840107213880, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:27.213912) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 21092007, prev total WAL file size 21092007, number of live WAL files 2. 
Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:27.217531) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end) Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(17MB) 8(1887B)] Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840107217660, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 18529111, "oldest_snapshot_seqno": -1} Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 11149 keys, 18523940 bytes, temperature: kUnknown Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840107300915, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 18523940, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18457519, "index_size": 37456, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27909, "raw_key_size": 298351, "raw_average_key_size": 26, "raw_value_size": 18264402, "raw_average_value_size": 1638, "num_data_blocks": 1442, 
"num_entries": 11149, "num_filter_entries": 11149, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771840107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:27.301342) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 18523940 bytes Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:27.303777) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 222.3 rd, 222.2 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(17.7, 0.0 +0.0 blob) out(17.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 11405, records dropped: 256 output_compression: NoCompression Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:27.303813) EVENT_LOG_v1 {"time_micros": 1771840107303798, "job": 4, "event": "compaction_finished", "compaction_time_micros": 83359, "compaction_time_cpu_micros": 45310, "output_level": 6, "num_output_files": 1, "total_output_size": 18523940, "num_input_records": 11405, "num_output_records": 11149, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840107306903, "job": 4, "event": "table_file_deletion", "file_number": 14} Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840107306984, "job": 4, 
"event": "table_file_deletion", "file_number": 8} Feb 23 04:48:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:27.217396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:48:30 localhost systemd[1]: session-66.scope: Deactivated successfully. Feb 23 04:48:30 localhost systemd[1]: session-66.scope: Consumed 1.709s CPU time. Feb 23 04:48:30 localhost systemd-logind[759]: Session 66 logged out. Waiting for processes to exit. Feb 23 04:48:30 localhost systemd-logind[759]: Removed session 66. Feb 23 04:48:30 localhost ceph-mon[296755]: mon.np0005626465@-1(probing) e11 handle_auth_request failed to assign global_id Feb 23 04:48:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:48:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:48:30 localhost ceph-mon[296755]: mon.np0005626465@-1(probing) e11 handle_auth_request failed to assign global_id Feb 23 04:48:31 localhost podman[298994]: 2026-02-23 09:48:31.011370284 +0000 UTC m=+0.080807120 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 23 04:48:31 localhost podman[298994]: 2026-02-23 09:48:31.044753604 +0000 UTC m=+0.114190360 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS) Feb 23 04:48:31 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. 
Feb 23 04:48:31 localhost podman[298995]: 2026-02-23 09:48:31.056489842 +0000 UTC m=+0.121705578 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS) Feb 23 04:48:31 localhost podman[298995]: 2026-02-23 09:48:31.069776638 +0000 UTC m=+0.134992434 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, 
org.label-schema.build-date=20260216) Feb 23 04:48:31 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:48:31 localhost ceph-mon[296755]: mon.np0005626465@-1(probing) e11 handle_auth_request failed to assign global_id Feb 23 04:48:31 localhost openstack_network_exporter[243519]: ERROR 09:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:48:31 localhost openstack_network_exporter[243519]: Feb 23 04:48:31 localhost openstack_network_exporter[243519]: ERROR 09:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:48:31 localhost openstack_network_exporter[243519]: Feb 23 04:48:32 localhost ceph-mon[296755]: mon.np0005626465@-1(probing) e11 handle_auth_request failed to assign global_id Feb 23 04:48:35 localhost ceph-mgr[285904]: ms_deliver_dispatch: unhandled message 0x5583b17791e0 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0 Feb 23 04:48:35 localhost ceph-mgr[285904]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Feb 23 04:48:35 localhost ceph-mgr[285904]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Feb 23 04:48:35 localhost ceph-mgr[285904]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Feb 23 04:48:35 localhost ceph-mgr[285904]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Feb 23 04:48:35 localhost ceph-mgr[285904]: ms_deliver_dispatch: unhandled message 0x5583b1779080 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0 Feb 23 04:48:35 localhost ceph-osd[32652]: --2- [v2:172.18.0.107:6804/574373088,v1:172.18.0.107:6805/574373088] >> [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] conn(0x562d2967ec00 0x562d254b4100 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request get_initial_auth_request returned -2 Feb 23 04:48:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:48:36 localhost podman[299033]: 2026-02-23 09:48:36.003627146 +0000 UTC m=+0.078959543 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:48:36 localhost podman[299033]: 2026-02-23 09:48:36.012066614 +0000 UTC m=+0.087398991 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:48:36 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:48:39 localhost ceph-mon[296755]: Remove daemons mon.np0005626461 Feb 23 04:48:39 localhost ceph-mon[296755]: Safe to remove mon.np0005626461: new quorum should be ['np0005626466', 'np0005626463'] (from ['np0005626466', 'np0005626463']) Feb 23 04:48:39 localhost ceph-mon[296755]: Removing monitor np0005626461 from monmap... Feb 23 04:48:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "mon rm", "name": "np0005626461"} : dispatch Feb 23 04:48:39 localhost ceph-mon[296755]: Removing daemon mon.np0005626461 from np0005626461.localdomain -- ports [] Feb 23 04:48:39 localhost ceph-mon[296755]: mon.np0005626463 calling monitor election Feb 23 04:48:39 localhost ceph-mon[296755]: mon.np0005626466 calling monitor election Feb 23 04:48:39 localhost ceph-mon[296755]: mon.np0005626466 is new leader, mons np0005626466,np0005626463 in quorum (ranks 0,1) Feb 23 04:48:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:48:39 localhost ceph-mon[296755]: overall HEALTH_OK Feb 23 04:48:39 localhost ceph-mon[296755]: Updating np0005626461.localdomain:/etc/ceph/ceph.conf Feb 23 04:48:39 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:48:39 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:48:39 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:48:39 localhost ceph-mgr[285904]: 
ms_deliver_dispatch: unhandled message 0x5583b17791e0 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0 Feb 23 04:48:40 localhost systemd[1]: Stopping User Manager for UID 1003... Feb 23 04:48:40 localhost systemd[295273]: Activating special unit Exit the Session... Feb 23 04:48:40 localhost systemd[295273]: Stopped target Main User Target. Feb 23 04:48:40 localhost systemd[295273]: Stopped target Basic System. Feb 23 04:48:40 localhost systemd[295273]: Stopped target Paths. Feb 23 04:48:40 localhost systemd[295273]: Stopped target Sockets. Feb 23 04:48:40 localhost systemd[295273]: Stopped target Timers. Feb 23 04:48:40 localhost systemd[295273]: Stopped Mark boot as successful after the user session has run 2 minutes. Feb 23 04:48:40 localhost systemd[295273]: Stopped Daily Cleanup of User's Temporary Directories. Feb 23 04:48:40 localhost systemd[295273]: Closed D-Bus User Message Bus Socket. Feb 23 04:48:40 localhost systemd[295273]: Stopped Create User's Volatile Files and Directories. Feb 23 04:48:40 localhost systemd[295273]: Removed slice User Application Slice. Feb 23 04:48:40 localhost systemd[295273]: Reached target Shutdown. Feb 23 04:48:40 localhost systemd[295273]: Finished Exit the Session. Feb 23 04:48:40 localhost systemd[295273]: Reached target Exit the Session. Feb 23 04:48:40 localhost systemd[1]: user@1003.service: Deactivated successfully. Feb 23 04:48:40 localhost systemd[1]: Stopped User Manager for UID 1003. Feb 23 04:48:40 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Feb 23 04:48:40 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Feb 23 04:48:40 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Feb 23 04:48:40 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Feb 23 04:48:40 localhost systemd[1]: Removed slice User Slice of UID 1003. Feb 23 04:48:40 localhost systemd[1]: user-1003.slice: Consumed 2.191s CPU time. 
Feb 23 04:48:41 localhost ceph-mon[296755]: mon.np0005626465@-1(probing) e13 my rank is now 2 (was -1) Feb 23 04:48:41 localhost ceph-mon[296755]: log_channel(cluster) log [INF] : mon.np0005626465 calling monitor election Feb 23 04:48:41 localhost ceph-mon[296755]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 Feb 23 04:48:41 localhost ceph-mon[296755]: mon.np0005626465@2(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:48:42 localhost podman[241086]: time="2026-02-23T09:48:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:48:42 localhost podman[241086]: @ - - [23/Feb/2026:09:48:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 04:48:42 localhost podman[241086]: @ - - [23/Feb/2026:09:48:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17786 "" "Go-http-client/1.1" Feb 23 04:48:43 localhost ceph-mds[284726]: mds.beacon.mds.np0005626465.drvnoy missed beacon ack from the monitors Feb 23 04:48:44 localhost ceph-mon[296755]: mon.np0005626465@2(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:48:44 localhost ceph-mon[296755]: mon.np0005626465@2(peon) e13 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Feb 23 04:48:44 localhost ceph-mon[296755]: mon.np0005626465@2(peon) e13 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Feb 23 
04:48:44 localhost ceph-mon[296755]: mon.np0005626465@2(peon) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:48:44 localhost ceph-mon[296755]: mgrc update_daemon_metadata mon.np0005626465 metadata {addrs=[v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable),ceph_version_short=18.2.1-381.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005626465.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005626465.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Feb 23 04:48:44 localhost ceph-mon[296755]: mon.np0005626465@2(peon) e13 handle_auth_request failed to assign global_id Feb 23 04:48:44 localhost ceph-mon[296755]: mon.np0005626466 calling monitor election Feb 23 04:48:44 localhost ceph-mon[296755]: mon.np0005626463 calling monitor election Feb 23 04:48:44 localhost ceph-mon[296755]: Reconfiguring mgr.np0005626461.lrfquh (monmap changed)... 
Feb 23 04:48:44 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:48:44 localhost ceph-mon[296755]: mon.np0005626465 calling monitor election Feb 23 04:48:44 localhost ceph-mon[296755]: mon.np0005626466 is new leader, mons np0005626466,np0005626463,np0005626465 in quorum (ranks 0,1,2) Feb 23 04:48:44 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626461.lrfquh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:48:44 localhost ceph-mon[296755]: overall HEALTH_OK Feb 23 04:48:44 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:45 localhost ceph-mon[296755]: mon.np0005626465@2(peon).osd e84 _set_new_cache_sizes cache_size:1019659280 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:48:45 localhost ceph-mon[296755]: Reconfiguring daemon mgr.np0005626461.lrfquh on np0005626461.localdomain Feb 23 04:48:45 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:45 localhost ceph-mon[296755]: Removed label mgr from host np0005626461.localdomain Feb 23 04:48:45 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:45 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:45 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:48:45 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": 
"client.crash.np0005626461", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:46.567125) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840126567167, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 593, "num_deletes": 257, "total_data_size": 562174, "memory_usage": 575056, "flush_reason": "Manual Compaction"} Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840126572082, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 457385, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12051, "largest_seqno": 12643, "table_properties": {"data_size": 453822, "index_size": 1290, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9343, "raw_average_key_size": 19, "raw_value_size": 445882, "raw_average_value_size": 952, "num_data_blocks": 50, "num_entries": 468, "num_filter_entries": 468, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", 
"column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840115, "oldest_key_time": 1771840115, "file_creation_time": 1771840126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}} Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 5080 microseconds, and 2166 cpu microseconds. Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:46.572199) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 457385 bytes OK Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:46.572257) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:46.574158) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:46.574182) EVENT_LOG_v1 {"time_micros": 1771840126574175, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:46.574201) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 558270, prev total WAL file size 558270, number of live WAL files 2. Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:46.575001) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323733' seq:72057594037927935, type:22 .. 
'6B760031353235' seq:0, type:0; will stop at (end) Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(446KB)], [15(17MB)] Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840126575070, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 18981325, "oldest_snapshot_seqno": -1} Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 11080 keys, 17918494 bytes, temperature: kUnknown Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840126662462, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 17918494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17853632, "index_size": 36089, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27717, "raw_key_size": 298868, "raw_average_key_size": 26, "raw_value_size": 17662525, "raw_average_value_size": 1594, "num_data_blocks": 1367, "num_entries": 11080, "num_filter_entries": 11080, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771840126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:46.662769) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 17918494 bytes Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:46.664561) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.0 rd, 204.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 17.7 +0.0 blob) out(17.1 +0.0 blob), read-write-amplify(80.7) write-amplify(39.2) OK, records in: 11617, records dropped: 537 output_compression: NoCompression Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:46.664591) EVENT_LOG_v1 {"time_micros": 1771840126664578, "job": 6, "event": "compaction_finished", "compaction_time_micros": 87474, "compaction_time_cpu_micros": 47000, "output_level": 6, "num_output_files": 1, "total_output_size": 17918494, "num_input_records": 11617, "num_output_records": 11080, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005626465/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840126664792, "job": 6, "event": "table_file_deletion", "file_number": 17} Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840126667269, "job": 6, "event": "table_file_deletion", "file_number": 15} Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:46.574898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:46.667330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:46.667337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:46.667340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:46.667343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:48:46 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:48:46.667346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:48:46 localhost ceph-mon[296755]: Reconfiguring crash.np0005626461 (monmap changed)... 
Feb 23 04:48:46 localhost ceph-mon[296755]: Reconfiguring daemon crash.np0005626461 on np0005626461.localdomain Feb 23 04:48:46 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:46 localhost ceph-mon[296755]: Removed label _admin from host np0005626461.localdomain Feb 23 04:48:46 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:46 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:46 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:48:46 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:48:47 localhost ceph-mon[296755]: Reconfiguring crash.np0005626463 (monmap changed)... Feb 23 04:48:47 localhost ceph-mon[296755]: Reconfiguring daemon crash.np0005626463 on np0005626463.localdomain Feb 23 04:48:47 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:47 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:47 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 23 04:48:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:48:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:48:48 localhost podman[299398]: 2026-02-23 09:48:48.008649366 +0000 UTC m=+0.082675147 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:48:48 localhost podman[299398]: 2026-02-23 09:48:48.018508637 +0000 UTC m=+0.092534398 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:48:48 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:48:48 localhost podman[299399]: 2026-02-23 09:48:48.106240677 +0000 UTC m=+0.176888145 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, vcs-type=git, version=9.7, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 04:48:48 localhost podman[299399]: 2026-02-23 09:48:48.143142815 +0000 UTC m=+0.213790313 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., container_name=openstack_network_exporter, version=9.7, name=ubi9/ubi-minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, release=1770267347) Feb 23 04:48:48 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 04:48:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:48:48.306 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:48:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:48:48.306 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:48:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:48:48.306 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:48:48 localhost ceph-mon[296755]: Reconfiguring osd.2 (monmap changed)... Feb 23 04:48:48 localhost ceph-mon[296755]: Reconfiguring daemon osd.2 on np0005626463.localdomain Feb 23 04:48:48 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:48 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:48 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 23 04:48:50 localhost ceph-mon[296755]: Reconfiguring osd.5 (monmap changed)... 
Feb 23 04:48:50 localhost ceph-mon[296755]: Reconfiguring daemon osd.5 on np0005626463.localdomain Feb 23 04:48:50 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:50 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:50 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:48:50 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:48:50 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:50 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:50 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:48:50 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:48:50 localhost ceph-mon[296755]: mon.np0005626465@2(peon).osd e84 _set_new_cache_sizes cache_size:1020048219 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:48:51 localhost ceph-mon[296755]: Reconfiguring mds.mds.np0005626463.qcthuc (monmap changed)... 
Feb 23 04:48:51 localhost ceph-mon[296755]: Reconfiguring daemon mds.mds.np0005626463.qcthuc on np0005626463.localdomain Feb 23 04:48:51 localhost ceph-mon[296755]: Reconfiguring mgr.np0005626463.wtksup (monmap changed)... Feb 23 04:48:51 localhost ceph-mon[296755]: Reconfiguring daemon mgr.np0005626463.wtksup on np0005626463.localdomain Feb 23 04:48:51 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:51 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:51 localhost ceph-mon[296755]: Reconfiguring mon.np0005626463 (monmap changed)... Feb 23 04:48:51 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:48:51 localhost ceph-mon[296755]: Reconfiguring daemon mon.np0005626463 on np0005626463.localdomain Feb 23 04:48:52 localhost podman[299492]: Feb 23 04:48:52 localhost podman[299492]: 2026-02-23 09:48:52.338014729 +0000 UTC m=+0.072426913 container create 193cd3abc468dac38a95534cadfd34c789da6de264e38f4907e4bd688b51426e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_jepsen, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully 
featured and supported base image., GIT_BRANCH=main, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, release=1770267347, version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public) Feb 23 04:48:52 localhost systemd[1]: Started libpod-conmon-193cd3abc468dac38a95534cadfd34c789da6de264e38f4907e4bd688b51426e.scope. Feb 23 04:48:52 localhost systemd[1]: Started libcrun container. Feb 23 04:48:52 localhost podman[299492]: 2026-02-23 09:48:52.406630615 +0000 UTC m=+0.141042789 container init 193cd3abc468dac38a95534cadfd34c789da6de264e38f4907e4bd688b51426e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_jepsen, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., version=7, vcs-type=git, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, GIT_CLEAN=True, release=1770267347, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 23 04:48:52 localhost podman[299492]: 2026-02-23 
09:48:52.307863588 +0000 UTC m=+0.042275812 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:48:52 localhost systemd[1]: tmp-crun.PeWx72.mount: Deactivated successfully. Feb 23 04:48:52 localhost podman[299492]: 2026-02-23 09:48:52.421140929 +0000 UTC m=+0.155553103 container start 193cd3abc468dac38a95534cadfd34c789da6de264e38f4907e4bd688b51426e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_jepsen, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:48:52 localhost podman[299492]: 2026-02-23 09:48:52.421444028 +0000 UTC m=+0.155856202 container attach 193cd3abc468dac38a95534cadfd34c789da6de264e38f4907e4bd688b51426e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_jepsen, distribution-scope=public, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, version=7, 
build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=rhceph, release=1770267347, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 23 04:48:52 localhost goofy_jepsen[299507]: 167 167 Feb 23 04:48:52 localhost systemd[1]: libpod-193cd3abc468dac38a95534cadfd34c789da6de264e38f4907e4bd688b51426e.scope: Deactivated successfully. 
Feb 23 04:48:52 localhost podman[299492]: 2026-02-23 09:48:52.425050848 +0000 UTC m=+0.159463022 container died 193cd3abc468dac38a95534cadfd34c789da6de264e38f4907e4bd688b51426e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_jepsen, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, release=1770267347, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., vcs-type=git, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_BRANCH=main, architecture=x86_64, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 23 04:48:52 localhost podman[299512]: 2026-02-23 09:48:52.495077837 +0000 UTC m=+0.063182171 container remove 193cd3abc468dac38a95534cadfd34c789da6de264e38f4907e4bd688b51426e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_jepsen, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True) Feb 23 04:48:52 localhost systemd[1]: libpod-conmon-193cd3abc468dac38a95534cadfd34c789da6de264e38f4907e4bd688b51426e.scope: Deactivated successfully. Feb 23 04:48:52 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:52 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:52 localhost ceph-mon[296755]: Reconfiguring crash.np0005626465 (monmap changed)... 
Feb 23 04:48:52 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:48:52 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:48:52 localhost ceph-mon[296755]: Reconfiguring daemon crash.np0005626465 on np0005626465.localdomain Feb 23 04:48:52 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:52 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:52 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 23 04:48:53 localhost podman[299581]: Feb 23 04:48:53 localhost podman[299581]: 2026-02-23 09:48:53.105066421 +0000 UTC m=+0.061197560 container create fda202b54098b8dcdda75ce91add8ec7af3bcf32d367b7c4931def63e12df51d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_lehmann, version=7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph 
Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7) Feb 23 04:48:53 localhost systemd[1]: Started libpod-conmon-fda202b54098b8dcdda75ce91add8ec7af3bcf32d367b7c4931def63e12df51d.scope. Feb 23 04:48:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:48:53 localhost systemd[1]: Started libcrun container. Feb 23 04:48:53 localhost podman[299581]: 2026-02-23 09:48:53.17146653 +0000 UTC m=+0.127597669 container init fda202b54098b8dcdda75ce91add8ec7af3bcf32d367b7c4931def63e12df51d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_lehmann, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_CLEAN=True, io.buildah.version=1.42.2, RELEASE=main, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 23 04:48:53 localhost podman[299581]: 2026-02-23 09:48:53.075468737 +0000 UTC m=+0.031599976 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:48:53 localhost podman[299581]: 2026-02-23 09:48:53.179962169 +0000 UTC m=+0.136093308 container start fda202b54098b8dcdda75ce91add8ec7af3bcf32d367b7c4931def63e12df51d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_lehmann, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=1770267347, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, version=7, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=rhceph, GIT_CLEAN=True, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 23 04:48:53 localhost podman[299581]: 2026-02-23 09:48:53.180946719 +0000 UTC m=+0.137077908 container attach fda202b54098b8dcdda75ce91add8ec7af3bcf32d367b7c4931def63e12df51d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_lehmann, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, name=rhceph, io.openshift.expose-services=, vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, io.buildah.version=1.42.2, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, version=7) Feb 23 04:48:53 localhost nice_lehmann[299596]: 167 167 Feb 23 04:48:53 localhost systemd[1]: libpod-fda202b54098b8dcdda75ce91add8ec7af3bcf32d367b7c4931def63e12df51d.scope: Deactivated successfully. 
Feb 23 04:48:53 localhost podman[299581]: 2026-02-23 09:48:53.184287661 +0000 UTC m=+0.140418850 container died fda202b54098b8dcdda75ce91add8ec7af3bcf32d367b7c4931def63e12df51d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_lehmann, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, ceph=True, release=1770267347, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.42.2, GIT_BRANCH=main, version=7, description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2026-02-09T10:25:24Z) Feb 23 04:48:53 localhost podman[299598]: 2026-02-23 09:48:53.241654414 +0000 UTC m=+0.082316066 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:48:53 localhost podman[299612]: 2026-02-23 09:48:53.311627142 +0000 UTC m=+0.123114432 container remove fda202b54098b8dcdda75ce91add8ec7af3bcf32d367b7c4931def63e12df51d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_lehmann, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, distribution-scope=public, name=rhceph, ceph=True, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 23 04:48:53 localhost systemd[1]: libpod-conmon-fda202b54098b8dcdda75ce91add8ec7af3bcf32d367b7c4931def63e12df51d.scope: Deactivated successfully. Feb 23 04:48:53 localhost podman[299598]: 2026-02-23 09:48:53.33189241 +0000 UTC m=+0.172554072 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, 
org.label-schema.build-date=20260216, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:48:53 localhost systemd[1]: var-lib-containers-storage-overlay-b0b58220c46da83e289dc8f48309950d84d639fb2886eba69138d700fe047bb3-merged.mount: Deactivated successfully. Feb 23 04:48:53 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:48:53 localhost ceph-mon[296755]: Reconfiguring osd.0 (monmap changed)... Feb 23 04:48:53 localhost ceph-mon[296755]: Reconfiguring daemon osd.0 on np0005626465.localdomain Feb 23 04:48:53 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:53 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:53 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 23 04:48:54 localhost podman[299701]: Feb 23 04:48:54 localhost podman[299701]: 2026-02-23 09:48:54.128763753 +0000 UTC m=+0.074520747 container create 90eda539edc96e6a6af721e439eda0b300061c3ddd1d5b1d695f5bd4042ff74c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_carver, RELEASE=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, architecture=x86_64, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph 
Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, vcs-type=git, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, ceph=True) Feb 23 04:48:54 localhost systemd[1]: Started libpod-conmon-90eda539edc96e6a6af721e439eda0b300061c3ddd1d5b1d695f5bd4042ff74c.scope. Feb 23 04:48:54 localhost systemd[1]: Started libcrun container. Feb 23 04:48:54 localhost podman[299701]: 2026-02-23 09:48:54.196856293 +0000 UTC m=+0.142613297 container init 90eda539edc96e6a6af721e439eda0b300061c3ddd1d5b1d695f5bd4042ff74c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_carver, version=7, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, vcs-type=git, distribution-scope=public, description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
GIT_BRANCH=main) Feb 23 04:48:54 localhost podman[299701]: 2026-02-23 09:48:54.099361005 +0000 UTC m=+0.045118029 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:48:54 localhost podman[299701]: 2026-02-23 09:48:54.20887595 +0000 UTC m=+0.154632954 container start 90eda539edc96e6a6af721e439eda0b300061c3ddd1d5b1d695f5bd4042ff74c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_carver, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, maintainer=Guillaume Abrioux , version=7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, GIT_CLEAN=True, vcs-type=git) Feb 23 04:48:54 localhost podman[299701]: 2026-02-23 09:48:54.209321184 +0000 UTC m=+0.155078218 container attach 90eda539edc96e6a6af721e439eda0b300061c3ddd1d5b1d695f5bd4042ff74c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_carver, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 
in a fully featured and supported base image., vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, architecture=x86_64, ceph=True) Feb 23 04:48:54 localhost clever_carver[299716]: 167 167 Feb 23 04:48:54 localhost systemd[1]: libpod-90eda539edc96e6a6af721e439eda0b300061c3ddd1d5b1d695f5bd4042ff74c.scope: Deactivated successfully. 
Feb 23 04:48:54 localhost podman[299701]: 2026-02-23 09:48:54.21281965 +0000 UTC m=+0.158576734 container died 90eda539edc96e6a6af721e439eda0b300061c3ddd1d5b1d695f5bd4042ff74c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_carver, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, release=1770267347, RELEASE=main, name=rhceph, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, architecture=x86_64, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 23 04:48:54 localhost podman[299721]: 2026-02-23 09:48:54.301750327 +0000 UTC m=+0.080908052 container remove 90eda539edc96e6a6af721e439eda0b300061c3ddd1d5b1d695f5bd4042ff74c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_carver, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.openshift.tags=rhceph ceph, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_CLEAN=True, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z) Feb 23 04:48:54 localhost systemd[1]: libpod-conmon-90eda539edc96e6a6af721e439eda0b300061c3ddd1d5b1d695f5bd4042ff74c.scope: Deactivated successfully. Feb 23 04:48:54 localhost systemd[1]: var-lib-containers-storage-overlay-0a1f1fd6b4a3ded4d3acc6f3f18789922f5ed506f38469ddfd1a63f61e3988bf-merged.mount: Deactivated successfully. Feb 23 04:48:54 localhost ceph-mon[296755]: Reconfiguring osd.3 (monmap changed)... 
Feb 23 04:48:54 localhost ceph-mon[296755]: Reconfiguring daemon osd.3 on np0005626465.localdomain Feb 23 04:48:54 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:54 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:54 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:48:54 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:48:55 localhost podman[299797]: Feb 23 04:48:55 localhost podman[299797]: 2026-02-23 09:48:55.101199148 +0000 UTC m=+0.073289999 container create fea5a476dcee95a0d274e4c5668c15c85c5498a33f0375e13d00d9c6aebc2c58 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_banzai, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, build-date=2026-02-09T10:25:24Z, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, version=7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux 
, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph) Feb 23 04:48:55 localhost systemd[1]: Started libpod-conmon-fea5a476dcee95a0d274e4c5668c15c85c5498a33f0375e13d00d9c6aebc2c58.scope. Feb 23 04:48:55 localhost systemd[1]: Started libcrun container. Feb 23 04:48:55 localhost podman[299797]: 2026-02-23 09:48:55.070666866 +0000 UTC m=+0.042757747 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:48:55 localhost podman[299797]: 2026-02-23 09:48:55.180717558 +0000 UTC m=+0.152808419 container init fea5a476dcee95a0d274e4c5668c15c85c5498a33f0375e13d00d9c6aebc2c58 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_banzai, RELEASE=main, architecture=x86_64, version=7, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, 
io.openshift.tags=rhceph ceph) Feb 23 04:48:55 localhost podman[299797]: 2026-02-23 09:48:55.191519788 +0000 UTC m=+0.163610699 container start fea5a476dcee95a0d274e4c5668c15c85c5498a33f0375e13d00d9c6aebc2c58 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_banzai, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, maintainer=Guillaume Abrioux , name=rhceph, CEPH_POINT_RELEASE=, version=7, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., release=1770267347, architecture=x86_64, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 23 04:48:55 localhost podman[299797]: 2026-02-23 09:48:55.19223249 +0000 UTC m=+0.164323391 container attach fea5a476dcee95a0d274e4c5668c15c85c5498a33f0375e13d00d9c6aebc2c58 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_banzai, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, 
vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, vcs-type=git, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=1770267347, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 23 04:48:55 localhost quirky_banzai[299812]: 167 167 Feb 23 04:48:55 localhost systemd[1]: libpod-fea5a476dcee95a0d274e4c5668c15c85c5498a33f0375e13d00d9c6aebc2c58.scope: Deactivated successfully. 
Feb 23 04:48:55 localhost podman[299797]: 2026-02-23 09:48:55.194030544 +0000 UTC m=+0.166121415 container died fea5a476dcee95a0d274e4c5668c15c85c5498a33f0375e13d00d9c6aebc2c58 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_banzai, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, build-date=2026-02-09T10:25:24Z, architecture=x86_64) Feb 23 04:48:55 localhost podman[299817]: 2026-02-23 09:48:55.288557122 +0000 UTC m=+0.085297996 container remove fea5a476dcee95a0d274e4c5668c15c85c5498a33f0375e13d00d9c6aebc2c58 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_banzai, GIT_BRANCH=main, ceph=True, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, maintainer=Guillaume Abrioux , name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.42.2, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, vcs-type=git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:48:55 localhost systemd[1]: libpod-conmon-fea5a476dcee95a0d274e4c5668c15c85c5498a33f0375e13d00d9c6aebc2c58.scope: Deactivated successfully. Feb 23 04:48:55 localhost ceph-mon[296755]: mon.np0005626465@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054624 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:48:55 localhost systemd[1]: var-lib-containers-storage-overlay-a020d9a44c9ac6b0e55ce679b5429b605fd16a291128c9141b96d4b3e60ef60c-merged.mount: Deactivated successfully. Feb 23 04:48:55 localhost ceph-mon[296755]: Reconfiguring mds.mds.np0005626465.drvnoy (monmap changed)... 
Feb 23 04:48:55 localhost ceph-mon[296755]: Reconfiguring daemon mds.mds.np0005626465.drvnoy on np0005626465.localdomain Feb 23 04:48:55 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:55 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:55 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:48:55 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:48:56 localhost podman[299885]: Feb 23 04:48:56 localhost podman[299885]: 2026-02-23 09:48:56.02995522 +0000 UTC m=+0.073400703 container create 3a72434ea853998109dcd229faa4f7059586a1f74ca93ce5423553cb09754b54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_mclean, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_BRANCH=main, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, release=1770267347, architecture=x86_64, build-date=2026-02-09T10:25:24Z, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:48:56 localhost systemd[1]: Started libpod-conmon-3a72434ea853998109dcd229faa4f7059586a1f74ca93ce5423553cb09754b54.scope. Feb 23 04:48:56 localhost systemd[1]: Started libcrun container. Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:48:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:48:56 localhost podman[299885]: 2026-02-23 09:48:56.098450542 +0000 UTC m=+0.141896025 container init 3a72434ea853998109dcd229faa4f7059586a1f74ca93ce5423553cb09754b54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_mclean, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, RELEASE=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, release=1770267347, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, name=rhceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 23 04:48:56 localhost podman[299885]: 2026-02-23 09:48:55.999793499 +0000 UTC m=+0.043239012 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:48:56 localhost podman[299885]: 2026-02-23 09:48:56.109008015 +0000 UTC m=+0.152453488 container start 3a72434ea853998109dcd229faa4f7059586a1f74ca93ce5423553cb09754b54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_mclean, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, version=7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, ceph=True, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7) Feb 23 04:48:56 localhost podman[299885]: 2026-02-23 09:48:56.109484239 +0000 UTC m=+0.152929762 container attach 3a72434ea853998109dcd229faa4f7059586a1f74ca93ce5423553cb09754b54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_mclean, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, architecture=x86_64, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , name=rhceph, distribution-scope=public, release=1770267347, build-date=2026-02-09T10:25:24Z, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, ceph=True) Feb 23 04:48:56 localhost recursing_mclean[299901]: 167 167 Feb 23 04:48:56 localhost systemd[1]: 
libpod-3a72434ea853998109dcd229faa4f7059586a1f74ca93ce5423553cb09754b54.scope: Deactivated successfully. Feb 23 04:48:56 localhost podman[299885]: 2026-02-23 09:48:56.11147264 +0000 UTC m=+0.154918173 container died 3a72434ea853998109dcd229faa4f7059586a1f74ca93ce5423553cb09754b54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_mclean, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, version=7, RELEASE=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container) Feb 23 04:48:56 localhost podman[299906]: 2026-02-23 09:48:56.197792748 +0000 UTC m=+0.077691395 container remove 3a72434ea853998109dcd229faa4f7059586a1f74ca93ce5423553cb09754b54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_mclean, GIT_BRANCH=main, RELEASE=main, description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , version=7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True) Feb 23 04:48:56 localhost systemd[1]: libpod-conmon-3a72434ea853998109dcd229faa4f7059586a1f74ca93ce5423553cb09754b54.scope: Deactivated successfully. Feb 23 04:48:56 localhost systemd[1]: var-lib-containers-storage-overlay-67eee04271bc0d8fd198cff5e15bc9291c364d093ee12ef4c7133ccffb8a2ec5-merged.mount: Deactivated successfully. Feb 23 04:48:56 localhost ceph-mon[296755]: Reconfiguring mgr.np0005626465.hlpkwo (monmap changed)... 
Feb 23 04:48:56 localhost ceph-mon[296755]: Reconfiguring daemon mgr.np0005626465.hlpkwo on np0005626465.localdomain Feb 23 04:48:56 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:56 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:56 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:48:56 localhost podman[299975]: Feb 23 04:48:56 localhost podman[299975]: 2026-02-23 09:48:56.876992915 +0000 UTC m=+0.075167267 container create 1fce9d870a96b8584cc1995f25365dc8b83b9daafe8939477a40c5b01a0fe3ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hawking, io.buildah.version=1.42.2, io.openshift.expose-services=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 23 04:48:56 localhost systemd[1]: Started 
libpod-conmon-1fce9d870a96b8584cc1995f25365dc8b83b9daafe8939477a40c5b01a0fe3ca.scope. Feb 23 04:48:56 localhost systemd[1]: Started libcrun container. Feb 23 04:48:56 localhost podman[299975]: 2026-02-23 09:48:56.937972918 +0000 UTC m=+0.136147210 container init 1fce9d870a96b8584cc1995f25365dc8b83b9daafe8939477a40c5b01a0fe3ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hawking, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, name=rhceph, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, RELEASE=main) Feb 23 04:48:56 localhost podman[299975]: 2026-02-23 09:48:56.846506644 +0000 UTC m=+0.044680956 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:48:56 localhost goofy_hawking[299990]: 167 167 Feb 23 04:48:56 localhost systemd[1]: libpod-1fce9d870a96b8584cc1995f25365dc8b83b9daafe8939477a40c5b01a0fe3ca.scope: Deactivated successfully. 
Feb 23 04:48:56 localhost podman[299975]: 2026-02-23 09:48:56.954604826 +0000 UTC m=+0.152779118 container start 1fce9d870a96b8584cc1995f25365dc8b83b9daafe8939477a40c5b01a0fe3ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hawking, GIT_CLEAN=True, ceph=True, vcs-type=git, com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main) Feb 23 04:48:56 localhost podman[299975]: 2026-02-23 09:48:56.955012869 +0000 UTC m=+0.153187221 container attach 1fce9d870a96b8584cc1995f25365dc8b83b9daafe8939477a40c5b01a0fe3ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hawking, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, release=1770267347, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , 
io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, ceph=True, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph) Feb 23 04:48:56 localhost podman[299975]: 2026-02-23 09:48:56.957525236 +0000 UTC m=+0.155699588 container died 1fce9d870a96b8584cc1995f25365dc8b83b9daafe8939477a40c5b01a0fe3ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hawking, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , release=1770267347, architecture=x86_64, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public, name=rhceph, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, summary=Provides the latest 
Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 23 04:48:57 localhost podman[299995]: 2026-02-23 09:48:57.04637002 +0000 UTC m=+0.081633345 container remove 1fce9d870a96b8584cc1995f25365dc8b83b9daafe8939477a40c5b01a0fe3ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hawking, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=1770267347, GIT_CLEAN=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, distribution-scope=public, vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-type=git) Feb 23 04:48:57 localhost systemd[1]: libpod-conmon-1fce9d870a96b8584cc1995f25365dc8b83b9daafe8939477a40c5b01a0fe3ca.scope: Deactivated successfully. Feb 23 04:48:57 localhost systemd[1]: tmp-crun.dyTiO4.mount: Deactivated successfully. Feb 23 04:48:57 localhost systemd[1]: var-lib-containers-storage-overlay-19c8d36f5cd14efa9c5c8989e290ff6881878d8f8f229d380cb01054bf5442f0-merged.mount: Deactivated successfully. 
Feb 23 04:48:57 localhost ceph-mon[296755]: Reconfiguring mon.np0005626465 (monmap changed)... Feb 23 04:48:57 localhost ceph-mon[296755]: Reconfiguring daemon mon.np0005626465 on np0005626465.localdomain Feb 23 04:48:57 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:57 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:57 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:48:57 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:48:57 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:57 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:59 localhost ceph-mon[296755]: Reconfiguring crash.np0005626466 (monmap changed)... 
Feb 23 04:48:59 localhost ceph-mon[296755]: Reconfiguring daemon crash.np0005626466 on np0005626466.localdomain Feb 23 04:48:59 localhost ceph-mon[296755]: Added label _no_schedule to host np0005626461.localdomain Feb 23 04:48:59 localhost ceph-mon[296755]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005626461.localdomain Feb 23 04:48:59 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:59 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:48:59 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 23 04:49:00 localhost ceph-mon[296755]: mon.np0005626465@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054730 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:49:00 localhost ceph-mon[296755]: Reconfiguring osd.1 (monmap changed)... Feb 23 04:49:00 localhost ceph-mon[296755]: Reconfiguring daemon osd.1 on np0005626466.localdomain Feb 23 04:49:00 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:00 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:00 localhost ceph-mon[296755]: Reconfiguring osd.4 (monmap changed)... 
Feb 23 04:49:00 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 23 04:49:00 localhost ceph-mon[296755]: Reconfiguring daemon osd.4 on np0005626466.localdomain Feb 23 04:49:00 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:00 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626461.localdomain"} : dispatch Feb 23 04:49:00 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626461.localdomain"} : dispatch Feb 23 04:49:00 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005626461.localdomain"}]': finished Feb 23 04:49:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 04:49:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5064 writes, 22K keys, 5064 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5064 writes, 648 syncs, 7.81 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 28 writes, 104 keys, 28 commit groups, 1.0 writes per commit group, ingest: 0.17 MB, 0.00 MB/s#012Interval WAL: 28 writes, 14 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 04:49:01 localhost sshd[300012]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:49:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. 
Feb 23 04:49:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:49:01 localhost ceph-mon[296755]: Removed host np0005626461.localdomain Feb 23 04:49:01 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:01 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:01 localhost ceph-mon[296755]: Reconfiguring mds.mds.np0005626466.vaywlp (monmap changed)... Feb 23 04:49:01 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:49:01 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:49:01 localhost ceph-mon[296755]: Reconfiguring daemon mds.mds.np0005626466.vaywlp on np0005626466.localdomain Feb 23 04:49:01 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:01 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:01 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:49:01 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626466.nisqfq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:49:01 localhost systemd[1]: tmp-crun.DCtCTJ.mount: Deactivated successfully. 
Feb 23 04:49:01 localhost podman[300014]: 2026-02-23 09:49:01.663183425 +0000 UTC m=+0.089946188 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260216, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible) Feb 23 04:49:01 localhost 
podman[300014]: 2026-02-23 09:49:01.697846024 +0000 UTC m=+0.124608757 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 23 04:49:01 localhost systemd[1]: tmp-crun.c8r5cl.mount: Deactivated successfully. 
Feb 23 04:49:01 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:49:01 localhost podman[300015]: 2026-02-23 09:49:01.718688181 +0000 UTC m=+0.144938909 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute) Feb 23 04:49:01 localhost podman[300015]: 2026-02-23 09:49:01.732890435 +0000 UTC m=+0.159141143 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 23 04:49:01 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:49:01 localhost openstack_network_exporter[243519]: ERROR 09:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:49:01 localhost openstack_network_exporter[243519]: Feb 23 04:49:01 localhost openstack_network_exporter[243519]: ERROR 09:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:49:01 localhost openstack_network_exporter[243519]: Feb 23 04:49:02 localhost ceph-mon[296755]: Reconfiguring mgr.np0005626466.nisqfq (monmap changed)... Feb 23 04:49:02 localhost ceph-mon[296755]: Reconfiguring daemon mgr.np0005626466.nisqfq on np0005626466.localdomain Feb 23 04:49:02 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:02 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:02 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:49:03 localhost ceph-mon[296755]: Reconfiguring mon.np0005626466 (monmap changed)... 
Feb 23 04:49:03 localhost ceph-mon[296755]: Reconfiguring daemon mon.np0005626466 on np0005626466.localdomain Feb 23 04:49:03 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:03 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:03 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:49:04 localhost ceph-mon[296755]: mon.np0005626465@2(peon) e13 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:49:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3235845437' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:49:04 localhost ceph-mon[296755]: mon.np0005626465@2(peon) e13 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:49:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3235845437' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:49:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 04:49:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5926 writes, 25K keys, 5926 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5926 writes, 933 syncs, 6.35 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 276 writes, 752 keys, 276 commit groups, 1.0 writes per commit group, ingest: 0.90 MB, 0.00 MB/s#012Interval WAL: 276 writes, 122 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 04:49:04 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:49:04 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:49:04 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:49:05 localhost ceph-mon[296755]: mon.np0005626465@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:49:05 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:49:05 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:49:05 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:49:05 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:05 localhost ceph-mon[296755]: from='mgr.26614 ' 
entity='mgr.np0005626463.wtksup' Feb 23 04:49:05 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:05 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:05 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:05 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:05 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:49:07 localhost podman[300387]: 2026-02-23 09:49:07.003723677 +0000 UTC m=+0.078819769 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:49:07 localhost podman[300387]: 2026-02-23 09:49:07.0132942 +0000 UTC m=+0.088390282 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 04:49:07 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. 
Feb 23 04:49:07 localhost nova_compute[280321]: 2026-02-23 09:49:07.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:49:07 localhost nova_compute[280321]: 2026-02-23 09:49:07.893 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 23 04:49:07 localhost nova_compute[280321]: 2026-02-23 09:49:07.910 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 23 04:49:08 localhost nova_compute[280321]: 2026-02-23 09:49:08.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:49:08 localhost nova_compute[280321]: 2026-02-23 09:49:08.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 23 04:49:08 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:08 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:49:08 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:09 localhost ceph-mon[296755]: Saving service mon spec 
with placement label:mon Feb 23 04:49:09 localhost ceph-mon[296755]: from='mgr.26614 ' entity='mgr.np0005626463.wtksup' Feb 23 04:49:10 localhost ceph-mon[296755]: mon.np0005626465@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:49:11 localhost ceph-mgr[285904]: ms_deliver_dispatch: unhandled message 0x5583b1779080 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0 Feb 23 04:49:11 localhost ceph-mgr[285904]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Feb 23 04:49:11 localhost ceph-mgr[285904]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Feb 23 04:49:11 localhost ceph-mon[296755]: mon.np0005626465@2(peon) e14 my rank is now 1 (was 2) Feb 23 04:49:11 localhost ceph-mgr[285904]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0 Feb 23 04:49:11 localhost ceph-mgr[285904]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0 Feb 23 04:49:11 localhost ceph-mgr[285904]: ms_deliver_dispatch: unhandled message 0x5583b8026000 mon_map magic: 0 from mon.1 v2:172.18.0.104:3300/0 Feb 23 04:49:11 localhost ceph-mon[296755]: mon.np0005626465@1(probing) e14 handle_auth_request failed to assign global_id Feb 23 04:49:11 localhost ceph-mon[296755]: log_channel(cluster) log [INF] : mon.np0005626465 calling monitor election Feb 23 04:49:11 localhost ceph-mon[296755]: paxos.1).electionLogic(56) init, last seen epoch 56 Feb 23 04:49:11 localhost ceph-mon[296755]: mon.np0005626465@1(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:49:11 localhost ceph-mon[296755]: mon.np0005626465@1(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:49:11 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:49:11 localhost ceph-mon[296755]: Remove daemons mon.np0005626466 Feb 23 
04:49:11 localhost ceph-mon[296755]: Safe to remove mon.np0005626466: new quorum should be ['np0005626463', 'np0005626465'] (from ['np0005626463', 'np0005626465']) Feb 23 04:49:11 localhost ceph-mon[296755]: Removing monitor np0005626466 from monmap... Feb 23 04:49:11 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "mon rm", "name": "np0005626466"} : dispatch Feb 23 04:49:11 localhost ceph-mon[296755]: Removing daemon mon.np0005626466 from np0005626466.localdomain -- ports [] Feb 23 04:49:11 localhost ceph-mon[296755]: mon.np0005626465 calling monitor election Feb 23 04:49:11 localhost ceph-mon[296755]: mon.np0005626463 calling monitor election Feb 23 04:49:11 localhost ceph-mon[296755]: mon.np0005626463 is new leader, mons np0005626463,np0005626465 in quorum (ranks 0,1) Feb 23 04:49:11 localhost ceph-mon[296755]: overall HEALTH_OK Feb 23 04:49:12 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:49:12 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:49:12 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:49:12 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:49:12 localhost podman[241086]: time="2026-02-23T09:49:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:49:12 localhost podman[241086]: @ - - [23/Feb/2026:09:49:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 04:49:12 localhost podman[241086]: @ - - [23/Feb/2026:09:49:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17786 "" "Go-http-client/1.1" Feb 23 04:49:12 localhost 
nova_compute[280321]: 2026-02-23 09:49:12.901 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:49:12 localhost nova_compute[280321]: 2026-02-23 09:49:12.922 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:49:12 localhost nova_compute[280321]: 2026-02-23 09:49:12.945 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:49:12 localhost nova_compute[280321]: 2026-02-23 09:49:12.945 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:49:12 localhost nova_compute[280321]: 2026-02-23 09:49:12.945 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:49:12 localhost nova_compute[280321]: 2026-02-23 09:49:12.946 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for 
np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:49:12 localhost nova_compute[280321]: 2026-02-23 09:49:12.946 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:49:13 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:49:13 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:49:13 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:49:13 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:13 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:13 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e14 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:49:13 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1345472982' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:49:13 localhost nova_compute[280321]: 2026-02-23 09:49:13.454 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:49:13 localhost nova_compute[280321]: 2026-02-23 09:49:13.669 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:49:13 localhost nova_compute[280321]: 2026-02-23 09:49:13.670 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=12408MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": 
"1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:49:13 localhost nova_compute[280321]: 2026-02-23 09:49:13.670 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:49:13 localhost nova_compute[280321]: 2026-02-23 09:49:13.671 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:49:13 localhost nova_compute[280321]: 2026-02-23 09:49:13.752 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - 
-] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:49:13 localhost nova_compute[280321]: 2026-02-23 09:49:13.752 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:49:13 localhost nova_compute[280321]: 2026-02-23 09:49:13.803 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Refreshing inventories for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 23 04:49:13 localhost nova_compute[280321]: 2026-02-23 09:49:13.863 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Updating ProviderTree inventory for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 23 04:49:13 localhost nova_compute[280321]: 2026-02-23 09:49:13.863 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Updating inventory in ProviderTree for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 with inventory: {'VCPU': {'total': 8, 
'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 23 04:49:13 localhost nova_compute[280321]: 2026-02-23 09:49:13.880 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Refreshing aggregate associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 23 04:49:13 localhost nova_compute[280321]: 2026-02-23 09:49:13.904 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Refreshing trait associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, traits: 
HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SHA,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NODE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_ABM,HW_CPU_X86_AESNI,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 23 04:49:13 localhost nova_compute[280321]: 2026-02-23 09:49:13.922 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:49:14 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e14 handle_command mon_command({"prefix": "df", "format": 
"json"} v 0) Feb 23 04:49:14 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/100647062' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:49:14 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:14 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:14 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:14 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:14 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:14 localhost ceph-mon[296755]: Reconfiguring crash.np0005626463 (monmap changed)... Feb 23 04:49:14 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:49:14 localhost ceph-mon[296755]: Reconfiguring daemon crash.np0005626463 on np0005626463.localdomain Feb 23 04:49:14 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:14 localhost nova_compute[280321]: 2026-02-23 09:49:14.381 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:49:14 localhost nova_compute[280321]: 2026-02-23 09:49:14.387 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 
9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:49:14 localhost nova_compute[280321]: 2026-02-23 09:49:14.402 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:49:14 localhost nova_compute[280321]: 2026-02-23 09:49:14.405 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:49:14 localhost nova_compute[280321]: 2026-02-23 09:49:14.405 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:49:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:49:15 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:15 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' 
entity='mgr.np0005626463.wtksup' Feb 23 04:49:15 localhost ceph-mon[296755]: Reconfiguring osd.2 (monmap changed)... Feb 23 04:49:15 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 23 04:49:15 localhost ceph-mon[296755]: Reconfiguring daemon osd.2 on np0005626463.localdomain Feb 23 04:49:15 localhost nova_compute[280321]: 2026-02-23 09:49:15.375 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:49:15 localhost nova_compute[280321]: 2026-02-23 09:49:15.376 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:49:15 localhost nova_compute[280321]: 2026-02-23 09:49:15.376 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:49:15 localhost nova_compute[280321]: 2026-02-23 09:49:15.396 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:49:15 localhost nova_compute[280321]: 2026-02-23 09:49:15.396 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:49:15 localhost nova_compute[280321]: 2026-02-23 09:49:15.397 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:49:16 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:16 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:16 localhost ceph-mon[296755]: Reconfiguring osd.5 (monmap changed)... 
Feb 23 04:49:16 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 23 04:49:16 localhost ceph-mon[296755]: Reconfiguring daemon osd.5 on np0005626463.localdomain Feb 23 04:49:16 localhost nova_compute[280321]: 2026-02-23 09:49:16.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:49:16 localhost nova_compute[280321]: 2026-02-23 09:49:16.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:49:16 localhost nova_compute[280321]: 2026-02-23 09:49:16.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:49:16 localhost nova_compute[280321]: 2026-02-23 09:49:16.893 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:49:16 localhost nova_compute[280321]: 2026-02-23 09:49:16.893 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:49:16 localhost nova_compute[280321]: 2026-02-23 09:49:16.893 280325 
DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:49:17 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:17 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:17 localhost ceph-mon[296755]: Reconfiguring mds.mds.np0005626463.qcthuc (monmap changed)... Feb 23 04:49:17 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:49:17 localhost ceph-mon[296755]: Reconfiguring daemon mds.mds.np0005626463.qcthuc on np0005626463.localdomain Feb 23 04:49:17 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:17 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:17 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:49:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:49:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:49:18 localhost podman[300828]: 2026-02-23 09:49:18.695333121 +0000 UTC m=+0.084963717 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:49:18 localhost podman[300828]: 2026-02-23 09:49:18.703583823 +0000 UTC m=+0.093214399 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, 
config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:49:18 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:49:18 localhost systemd[1]: tmp-crun.NLnz1s.mount: Deactivated successfully. Feb 23 04:49:18 localhost podman[300829]: 2026-02-23 09:49:18.768248878 +0000 UTC m=+0.155673737 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., release=1770267347, managed_by=edpm_ansible, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, config_id=openstack_network_exporter, distribution-scope=public, version=9.7, build-date=2026-02-05T04:57:10Z, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': 
['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z) Feb 23 04:49:18 localhost podman[300829]: 2026-02-23 09:49:18.779979226 +0000 UTC m=+0.167404045 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.7, managed_by=edpm_ansible, vcs-type=git, config_id=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, release=1770267347, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64) Feb 23 04:49:18 localhost 
systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully.
Feb 23 04:49:18 localhost ceph-mon[296755]: Reconfiguring mgr.np0005626463.wtksup (monmap changed)...
Feb 23 04:49:18 localhost ceph-mon[296755]: Reconfiguring daemon mgr.np0005626463.wtksup on np0005626463.localdomain
Feb 23 04:49:18 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup'
Feb 23 04:49:18 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup'
Feb 23 04:49:18 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 23 04:49:19 localhost podman[300906]:
Feb 23 04:49:19 localhost podman[300906]: 2026-02-23 09:49:19.127601295 +0000 UTC m=+0.073078833 container create 86fb3a983a69d9836f0d6a500c8ebdc4fd1e53f8c4bde475c8afd3ace6d8f19b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_carver, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , GIT_BRANCH=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, distribution-scope=public, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 23 04:49:19 localhost systemd[1]: Started libpod-conmon-86fb3a983a69d9836f0d6a500c8ebdc4fd1e53f8c4bde475c8afd3ace6d8f19b.scope.
Feb 23 04:49:19 localhost systemd[1]: Started libcrun container.
Feb 23 04:49:19 localhost podman[300906]: 2026-02-23 09:49:19.186890567 +0000 UTC m=+0.132368105 container init 86fb3a983a69d9836f0d6a500c8ebdc4fd1e53f8c4bde475c8afd3ace6d8f19b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_carver, version=7, RELEASE=main, io.buildah.version=1.42.2, architecture=x86_64, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, ceph=True, release=1770267347, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=)
Feb 23 04:49:19 localhost podman[300906]: 2026-02-23 09:49:19.195818859 +0000 UTC m=+0.141296407 container start 86fb3a983a69d9836f0d6a500c8ebdc4fd1e53f8c4bde475c8afd3ace6d8f19b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_carver, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, ceph=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, vcs-type=git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_CLEAN=True)
Feb 23 04:49:19 localhost podman[300906]: 2026-02-23 09:49:19.196069457 +0000 UTC m=+0.141546955 container attach 86fb3a983a69d9836f0d6a500c8ebdc4fd1e53f8c4bde475c8afd3ace6d8f19b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_carver, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.buildah.version=1.42.2, ceph=True, maintainer=Guillaume Abrioux , GIT_BRANCH=main, RELEASE=main, vendor=Red Hat, Inc., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, CEPH_POINT_RELEASE=)
Feb 23 04:49:19 localhost podman[300906]: 2026-02-23 09:49:19.098352491 +0000 UTC m=+0.043830079 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:49:19 localhost recursing_carver[300921]: 167 167
Feb 23 04:49:19 localhost systemd[1]: libpod-86fb3a983a69d9836f0d6a500c8ebdc4fd1e53f8c4bde475c8afd3ace6d8f19b.scope: Deactivated successfully.
Feb 23 04:49:19 localhost podman[300906]: 2026-02-23 09:49:19.200113981 +0000 UTC m=+0.145591539 container died 86fb3a983a69d9836f0d6a500c8ebdc4fd1e53f8c4bde475c8afd3ace6d8f19b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_carver, GIT_BRANCH=main, distribution-scope=public, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, CEPH_POINT_RELEASE=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 23 04:49:19 localhost podman[300926]: 2026-02-23 09:49:19.29730531 +0000 UTC m=+0.084394860 container remove 86fb3a983a69d9836f0d6a500c8ebdc4fd1e53f8c4bde475c8afd3ace6d8f19b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_carver, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.openshift.expose-services=, version=7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, maintainer=Guillaume Abrioux )
Feb 23 04:49:19 localhost systemd[1]: libpod-conmon-86fb3a983a69d9836f0d6a500c8ebdc4fd1e53f8c4bde475c8afd3ace6d8f19b.scope: Deactivated successfully.
Feb 23 04:49:19 localhost systemd[1]: var-lib-containers-storage-overlay-3aa8466305a29dcf50543787c3a490982ba765cdf2b09d1f4749c8220aa25c6a-merged.mount: Deactivated successfully.
Feb 23 04:49:19 localhost ceph-mon[296755]: Reconfiguring crash.np0005626465 (monmap changed)...
Feb 23 04:49:19 localhost ceph-mon[296755]: Reconfiguring daemon crash.np0005626465 on np0005626465.localdomain
Feb 23 04:49:19 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup'
Feb 23 04:49:19 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup'
Feb 23 04:49:19 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 23 04:49:20 localhost podman[300994]:
Feb 23 04:49:20 localhost podman[300994]: 2026-02-23 09:49:20.041980097 +0000 UTC m=+0.074459265 container create 9b82af41ba2bb6231eae3dbd57ec945b023efc21ed5c71ba0a7ba39fb1738518 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goldstine, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, vcs-type=git, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, ceph=True, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 23 04:49:20 localhost systemd[1]: Started libpod-conmon-9b82af41ba2bb6231eae3dbd57ec945b023efc21ed5c71ba0a7ba39fb1738518.scope.
Feb 23 04:49:20 localhost systemd[1]: Started libcrun container.
Feb 23 04:49:20 localhost podman[300994]: 2026-02-23 09:49:20.012457566 +0000 UTC m=+0.044936754 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:49:20 localhost podman[300994]: 2026-02-23 09:49:20.115777762 +0000 UTC m=+0.148256930 container init 9b82af41ba2bb6231eae3dbd57ec945b023efc21ed5c71ba0a7ba39fb1738518 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goldstine, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, vcs-type=git, io.openshift.expose-services=, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, version=7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 23 04:49:20 localhost podman[300994]: 2026-02-23 09:49:20.12452829 +0000 UTC m=+0.157007458 container start 9b82af41ba2bb6231eae3dbd57ec945b023efc21ed5c71ba0a7ba39fb1738518 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goldstine, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, distribution-scope=public, io.buildah.version=1.42.2, ceph=True, version=7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, architecture=x86_64)
Feb 23 04:49:20 localhost podman[300994]: 2026-02-23 09:49:20.125484028 +0000 UTC m=+0.157963246 container attach 9b82af41ba2bb6231eae3dbd57ec945b023efc21ed5c71ba0a7ba39fb1738518 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goldstine, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, distribution-scope=public, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, RELEASE=main, maintainer=Guillaume Abrioux , version=7)
Feb 23 04:49:20 localhost thirsty_goldstine[301009]: 167 167
Feb 23 04:49:20 localhost systemd[1]: libpod-9b82af41ba2bb6231eae3dbd57ec945b023efc21ed5c71ba0a7ba39fb1738518.scope: Deactivated successfully.
Feb 23 04:49:20 localhost podman[300994]: 2026-02-23 09:49:20.12747509 +0000 UTC m=+0.159954258 container died 9b82af41ba2bb6231eae3dbd57ec945b023efc21ed5c71ba0a7ba39fb1738518 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goldstine, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, GIT_CLEAN=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, release=1770267347, RELEASE=main, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.expose-services=)
Feb 23 04:49:20 localhost podman[301014]: 2026-02-23 09:49:20.215649013 +0000 UTC m=+0.077889670 container remove 9b82af41ba2bb6231eae3dbd57ec945b023efc21ed5c71ba0a7ba39fb1738518 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goldstine, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, RELEASE=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, release=1770267347, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7)
Feb 23 04:49:20 localhost systemd[1]: libpod-conmon-9b82af41ba2bb6231eae3dbd57ec945b023efc21ed5c71ba0a7ba39fb1738518.scope: Deactivated successfully.
Feb 23 04:49:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 23 04:49:20 localhost systemd[1]: var-lib-containers-storage-overlay-a42ab8b56e18ce2e17cee6358ca9ffb0c6145c4f135aa9b0c220d3fed1e7a166-merged.mount: Deactivated successfully.
Feb 23 04:49:20 localhost nova_compute[280321]: 2026-02-23 09:49:20.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 23 04:49:20 localhost ceph-mon[296755]: Reconfiguring osd.0 (monmap changed)...
Feb 23 04:49:20 localhost ceph-mon[296755]: Reconfiguring daemon osd.0 on np0005626465.localdomain
Feb 23 04:49:20 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup'
Feb 23 04:49:20 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup'
Feb 23 04:49:20 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 23 04:49:21 localhost podman[301092]:
Feb 23 04:49:21 localhost podman[301092]: 2026-02-23 09:49:21.112570731 +0000 UTC m=+0.080917451 container create db3ff7ec34e7c09927c26f159bffdb0a814547b40afe664e449f80731fa7d8f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_volhard, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, version=7, distribution-scope=public, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, release=1770267347, ceph=True, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, architecture=x86_64, build-date=2026-02-09T10:25:24Z)
Feb 23 04:49:21 localhost systemd[1]: Started libpod-conmon-db3ff7ec34e7c09927c26f159bffdb0a814547b40afe664e449f80731fa7d8f5.scope.
Feb 23 04:49:21 localhost systemd[1]: Started libcrun container.
Feb 23 04:49:21 localhost podman[301092]: 2026-02-23 09:49:21.078945084 +0000 UTC m=+0.047291794 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:49:21 localhost podman[301092]: 2026-02-23 09:49:21.188507462 +0000 UTC m=+0.156854192 container init db3ff7ec34e7c09927c26f159bffdb0a814547b40afe664e449f80731fa7d8f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_volhard, GIT_BRANCH=main, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, vendor=Red Hat, Inc., distribution-scope=public)
Feb 23 04:49:21 localhost friendly_volhard[301107]: 167 167
Feb 23 04:49:21 localhost podman[301092]: 2026-02-23 09:49:21.200938311 +0000 UTC m=+0.169284981 container start db3ff7ec34e7c09927c26f159bffdb0a814547b40afe664e449f80731fa7d8f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_volhard, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, name=rhceph, vcs-type=git, RELEASE=main, ceph=True, distribution-scope=public, architecture=x86_64, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, version=7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7)
Feb 23 04:49:21 localhost podman[301092]: 2026-02-23 09:49:21.201286772 +0000 UTC m=+0.169633422 container attach db3ff7ec34e7c09927c26f159bffdb0a814547b40afe664e449f80731fa7d8f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_volhard, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, GIT_BRANCH=main, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 23 04:49:21 localhost systemd[1]: libpod-db3ff7ec34e7c09927c26f159bffdb0a814547b40afe664e449f80731fa7d8f5.scope: Deactivated successfully.
Feb 23 04:49:21 localhost podman[301092]: 2026-02-23 09:49:21.203635684 +0000 UTC m=+0.171982394 container died db3ff7ec34e7c09927c26f159bffdb0a814547b40afe664e449f80731fa7d8f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_volhard, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , name=rhceph, distribution-scope=public, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 23 04:49:21 localhost podman[301112]: 2026-02-23 09:49:21.30533312 +0000 UTC m=+0.088456732 container remove db3ff7ec34e7c09927c26f159bffdb0a814547b40afe664e449f80731fa7d8f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_volhard, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, version=7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, vcs-type=git, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, distribution-scope=public, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc.)
Feb 23 04:49:21 localhost systemd[1]: libpod-conmon-db3ff7ec34e7c09927c26f159bffdb0a814547b40afe664e449f80731fa7d8f5.scope: Deactivated successfully.
Feb 23 04:49:21 localhost systemd[1]: var-lib-containers-storage-overlay-d75001ae63b86e3e37b6889c22521f9b67cb0b946888398ff111e5a9b38ee976-merged.mount: Deactivated successfully.
Feb 23 04:49:21 localhost ceph-mon[296755]: Reconfiguring osd.3 (monmap changed)...
Feb 23 04:49:21 localhost ceph-mon[296755]: Reconfiguring daemon osd.3 on np0005626465.localdomain
Feb 23 04:49:21 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup'
Feb 23 04:49:21 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup'
Feb 23 04:49:21 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 23 04:49:22 localhost podman[301188]:
Feb 23 04:49:22 localhost podman[301188]: 2026-02-23 09:49:22.133248822 +0000 UTC m=+0.059534050 container create e386f8ca53d22443417d071dbbed69bed8567a5895d537dea9be34a37c85369a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_allen, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, release=1770267347, ceph=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, name=rhceph, version=7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 23 04:49:22 localhost systemd[1]: Started libpod-conmon-e386f8ca53d22443417d071dbbed69bed8567a5895d537dea9be34a37c85369a.scope.
Feb 23 04:49:22 localhost systemd[1]: Started libcrun container.
Feb 23 04:49:22 localhost podman[301188]: 2026-02-23 09:49:22.107197296 +0000 UTC m=+0.033482534 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 23 04:49:22 localhost podman[301188]: 2026-02-23 09:49:22.213485842 +0000 UTC m=+0.139771070 container init e386f8ca53d22443417d071dbbed69bed8567a5895d537dea9be34a37c85369a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_allen, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, name=rhceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, release=1770267347, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 23 04:49:22 localhost podman[301188]: 2026-02-23 09:49:22.225551031 +0000 UTC m=+0.151836229 container start e386f8ca53d22443417d071dbbed69bed8567a5895d537dea9be34a37c85369a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_allen, io.openshift.tags=rhceph ceph, name=rhceph, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 23 04:49:22 localhost podman[301188]: 2026-02-23 09:49:22.22583299 +0000 UTC m=+0.152118238 container attach e386f8ca53d22443417d071dbbed69bed8567a5895d537dea9be34a37c85369a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_allen, vcs-type=git, distribution-scope=public, name=rhceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, version=7)
Feb 23 04:49:22 localhost competent_allen[301203]: 167 167
Feb 23 04:49:22 localhost systemd[1]: libpod-e386f8ca53d22443417d071dbbed69bed8567a5895d537dea9be34a37c85369a.scope: Deactivated successfully.
Feb 23 04:49:22 localhost podman[301188]: 2026-02-23 09:49:22.228935655 +0000 UTC m=+0.155220903 container died e386f8ca53d22443417d071dbbed69bed8567a5895d537dea9be34a37c85369a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_allen, build-date=2026-02-09T10:25:24Z, RELEASE=main, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, architecture=x86_64, vendor=Red Hat, Inc., GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-type=git, maintainer=Guillaume Abrioux , name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, CEPH_POINT_RELEASE=)
Feb 23 04:49:22 localhost podman[301208]: 2026-02-23 09:49:22.325471713 +0000 UTC m=+0.082197121 container remove e386f8ca53d22443417d071dbbed69bed8567a5895d537dea9be34a37c85369a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_allen, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_BRANCH=main, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z)
Feb 23 04:49:22 localhost systemd[1]: libpod-conmon-e386f8ca53d22443417d071dbbed69bed8567a5895d537dea9be34a37c85369a.scope: Deactivated successfully.
Feb 23 04:49:22 localhost systemd[1]: var-lib-containers-storage-overlay-537200bc037eac14ef1bf5b780aa64df32b4ee660eb20da509cc622c63232bad-merged.mount: Deactivated successfully.
Feb 23 04:49:22 localhost ceph-mon[296755]: Reconfiguring mds.mds.np0005626465.drvnoy (monmap changed)... Feb 23 04:49:22 localhost ceph-mon[296755]: Reconfiguring daemon mds.mds.np0005626465.drvnoy on np0005626465.localdomain Feb 23 04:49:22 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:22 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:22 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:49:22 localhost podman[301276]: Feb 23 04:49:23 localhost podman[301276]: 2026-02-23 09:49:23.004923579 +0000 UTC m=+0.053925248 container create 842a16afeb6cc608d1afcec20ae5f674b5cac9b83453cce8a839d79b595d1074 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_lamport, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, vcs-type=git, version=7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, GIT_CLEAN=True, distribution-scope=public, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides 
the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z) Feb 23 04:49:23 localhost systemd[1]: Started libpod-conmon-842a16afeb6cc608d1afcec20ae5f674b5cac9b83453cce8a839d79b595d1074.scope. Feb 23 04:49:23 localhost systemd[1]: Started libcrun container. Feb 23 04:49:23 localhost podman[301276]: 2026-02-23 09:49:23.061022633 +0000 UTC m=+0.110024272 container init 842a16afeb6cc608d1afcec20ae5f674b5cac9b83453cce8a839d79b595d1074 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_lamport, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-type=git, distribution-scope=public, ceph=True, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, architecture=x86_64, GIT_CLEAN=True) Feb 23 04:49:23 localhost gallant_lamport[301291]: 167 167 Feb 23 04:49:23 localhost systemd[1]: libpod-842a16afeb6cc608d1afcec20ae5f674b5cac9b83453cce8a839d79b595d1074.scope: Deactivated successfully. 
Feb 23 04:49:23 localhost podman[301276]: 2026-02-23 09:49:23.074763273 +0000 UTC m=+0.123764912 container start 842a16afeb6cc608d1afcec20ae5f674b5cac9b83453cce8a839d79b595d1074 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_lamport, release=1770267347, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., version=7, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_BRANCH=main, GIT_CLEAN=True) Feb 23 04:49:23 localhost podman[301276]: 2026-02-23 09:49:23.075016591 +0000 UTC m=+0.124018260 container attach 842a16afeb6cc608d1afcec20ae5f674b5cac9b83453cce8a839d79b595d1074 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_lamport, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, architecture=x86_64, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on 
RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7) Feb 23 04:49:23 localhost podman[301276]: 2026-02-23 09:49:23.077413374 +0000 UTC m=+0.126415083 container died 842a16afeb6cc608d1afcec20ae5f674b5cac9b83453cce8a839d79b595d1074 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_lamport, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., release=1770267347, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_BRANCH=main, GIT_CLEAN=True, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, vcs-type=git, architecture=x86_64, 
ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 23 04:49:23 localhost podman[301276]: 2026-02-23 09:49:22.982567927 +0000 UTC m=+0.031569566 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:49:23 localhost podman[301296]: 2026-02-23 09:49:23.167623749 +0000 UTC m=+0.081435058 container remove 842a16afeb6cc608d1afcec20ae5f674b5cac9b83453cce8a839d79b595d1074 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_lamport, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, release=1770267347, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, architecture=x86_64, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 23 04:49:23 localhost systemd[1]: libpod-conmon-842a16afeb6cc608d1afcec20ae5f674b5cac9b83453cce8a839d79b595d1074.scope: Deactivated successfully. 
Feb 23 04:49:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:49:23 localhost systemd[1]: var-lib-containers-storage-overlay-b66b5b8e67abace79b396b83c3ed6397b75de0a4d708a1b78e6a2df6b36b7291-merged.mount: Deactivated successfully. Feb 23 04:49:23 localhost podman[301312]: 2026-02-23 09:49:23.778518791 +0000 UTC m=+0.098054566 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller) Feb 23 04:49:23 localhost podman[301312]: 2026-02-23 09:49:23.823307339 +0000 UTC m=+0.142843194 container 
exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:49:23 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:49:23 localhost ceph-mon[296755]: Reconfiguring mgr.np0005626465.hlpkwo (monmap changed)... 
Feb 23 04:49:23 localhost ceph-mon[296755]: Reconfiguring daemon mgr.np0005626465.hlpkwo on np0005626465.localdomain Feb 23 04:49:23 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:23 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:23 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:49:23 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:23 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:49:24 localhost ceph-mon[296755]: Reconfiguring crash.np0005626466 (monmap changed)... Feb 23 04:49:24 localhost ceph-mon[296755]: Reconfiguring daemon crash.np0005626466 on np0005626466.localdomain Feb 23 04:49:24 localhost ceph-mon[296755]: Deploying daemon mon.np0005626466 on np0005626466.localdomain Feb 23 04:49:24 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:24 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:24 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 23 04:49:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:49:25 localhost ceph-mon[296755]: Reconfiguring osd.1 (monmap changed)... 
Feb 23 04:49:25 localhost ceph-mon[296755]: Reconfiguring daemon osd.1 on np0005626466.localdomain Feb 23 04:49:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e14 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Feb 23 04:49:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e14 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Feb 23 04:49:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e14 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Feb 23 04:49:26 localhost ceph-mgr[285904]: ms_deliver_dispatch: unhandled message 0x5583b8026580 mon_map magic: 0 from mon.1 v2:172.18.0.104:3300/0 Feb 23 04:49:26 localhost ceph-mon[296755]: log_channel(cluster) log [INF] : mon.np0005626465 calling monitor election Feb 23 04:49:26 localhost ceph-mon[296755]: paxos.1).electionLogic(58) init, last seen epoch 58 Feb 23 04:49:26 localhost ceph-mon[296755]: mon.np0005626465@1(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:49:26 localhost ceph-mon[296755]: mon.np0005626465@1(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:49:31 localhost ceph-mds[284726]: mds.beacon.mds.np0005626465.drvnoy missed beacon ack from the monitors Feb 23 04:49:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:49:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 04:49:31 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:49:31 localhost openstack_network_exporter[243519]: ERROR 09:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:49:31 localhost openstack_network_exporter[243519]: Feb 23 04:49:31 localhost openstack_network_exporter[243519]: ERROR 09:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:49:31 localhost openstack_network_exporter[243519]: Feb 23 04:49:32 localhost podman[301337]: 2026-02-23 09:49:32.016278438 +0000 UTC m=+0.094220100 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.43.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:49:32 localhost ceph-mon[296755]: mon.np0005626463 calling monitor election Feb 23 04:49:32 localhost ceph-mon[296755]: mon.np0005626465 calling monitor election Feb 23 04:49:32 localhost ceph-mon[296755]: mon.np0005626463 is new leader, mons np0005626463,np0005626465 in quorum (ranks 0,1) Feb 23 04:49:32 localhost ceph-mon[296755]: Health check failed: 1/3 mons down, quorum np0005626463,np0005626465 (MON_DOWN) Feb 23 04:49:32 localhost ceph-mon[296755]: Health detail: HEALTH_WARN 1/3 mons down, quorum np0005626463,np0005626465 Feb 23 04:49:32 localhost ceph-mon[296755]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005626463,np0005626465 Feb 23 04:49:32 localhost ceph-mon[296755]: mon.np0005626466 (rank 2) addr [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] is down (out of quorum) Feb 23 04:49:32 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:32 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:32 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 23 04:49:32 localhost podman[301337]: 2026-02-23 09:49:32.047286095 +0000 UTC m=+0.125227737 container exec_died 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible) Feb 23 04:49:32 localhost systemd[1]: tmp-crun.OUmvyJ.mount: Deactivated successfully. 
Feb 23 04:49:32 localhost podman[301338]: 2026-02-23 09:49:32.067192483 +0000 UTC m=+0.140273546 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, 
org.label-schema.license=GPLv2) Feb 23 04:49:32 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:49:32 localhost podman[301338]: 2026-02-23 09:49:32.082812799 +0000 UTC m=+0.155893862 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute) Feb 23 04:49:32 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:49:33 localhost ceph-mon[296755]: Reconfiguring osd.4 (monmap changed)... Feb 23 04:49:33 localhost ceph-mon[296755]: Reconfiguring daemon osd.4 on np0005626466.localdomain Feb 23 04:49:33 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:33 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:33 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626466.vaywlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:49:33 localhost ceph-mon[296755]: log_channel(cluster) log [INF] : mon.np0005626465 calling monitor election Feb 23 04:49:33 localhost ceph-mon[296755]: paxos.1).electionLogic(60) init, last seen epoch 60 Feb 23 04:49:33 localhost ceph-mon[296755]: mon.np0005626465@1(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:49:33 localhost ceph-mon[296755]: mon.np0005626465@1(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:49:33 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 23 04:49:34 localhost ceph-mon[296755]: mon.np0005626466 calling monitor election Feb 23 04:49:34 localhost ceph-mon[296755]: Reconfiguring mgr.np0005626466.nisqfq (monmap changed)... 
Feb 23 04:49:34 localhost ceph-mon[296755]: Reconfiguring daemon mgr.np0005626466.nisqfq on np0005626466.localdomain Feb 23 04:49:34 localhost ceph-mon[296755]: mon.np0005626466 calling monitor election Feb 23 04:49:34 localhost ceph-mon[296755]: mon.np0005626463 calling monitor election Feb 23 04:49:34 localhost ceph-mon[296755]: mon.np0005626465 calling monitor election Feb 23 04:49:34 localhost ceph-mon[296755]: mon.np0005626463 is new leader, mons np0005626463,np0005626465,np0005626466 in quorum (ranks 0,1,2) Feb 23 04:49:34 localhost ceph-mon[296755]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005626463,np0005626465) Feb 23 04:49:34 localhost ceph-mon[296755]: Cluster is now healthy Feb 23 04:49:34 localhost ceph-mon[296755]: overall HEALTH_OK Feb 23 04:49:34 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:34 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:49:36 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:36 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:36 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:49:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 04:49:37 localhost podman[301599]: 2026-02-23 09:49:37.214141061 +0000 UTC m=+0.089056411 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:49:37 localhost podman[301599]: 2026-02-23 09:49:37.245947103 +0000 UTC m=+0.120862463 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 04:49:37 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:49:37 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:49:37 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:49:37 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:49:37 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:49:37 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:49:37 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:37 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:37 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:37 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:37 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:37 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:37 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:37 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "status", "format": "json"} v 0) Feb 23 04:49:37 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.200:0/2088209685' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch Feb 23 04:49:38 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:49:38 localhost ceph-mon[296755]: Reconfiguring crash.np0005626463 (monmap changed)... Feb 23 04:49:38 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626463", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:49:38 localhost ceph-mon[296755]: Reconfiguring daemon crash.np0005626463 on np0005626463.localdomain Feb 23 04:49:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:39 localhost ceph-mon[296755]: Reconfiguring osd.2 (monmap changed)... 
Feb 23 04:49:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 23 04:49:39 localhost ceph-mon[296755]: Reconfiguring daemon osd.2 on np0005626463.localdomain Feb 23 04:49:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:39 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0. 
Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:49:40.038239) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19 Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840180038284, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2278, "num_deletes": 253, "total_data_size": 4315544, "memory_usage": 4358768, "flush_reason": "Manual Compaction"} Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840180051376, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 2398624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12648, "largest_seqno": 14921, "table_properties": {"data_size": 2389213, "index_size": 5596, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 24674, "raw_average_key_size": 22, "raw_value_size": 2368421, "raw_average_value_size": 2157, "num_data_blocks": 248, "num_entries": 1098, "num_filter_entries": 1098, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840126, "oldest_key_time": 1771840126, "file_creation_time": 1771840180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}} Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 13253 microseconds, and 7248 cpu microseconds. Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:49:40.051489) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 2398624 bytes OK Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:49:40.051521) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:49:40.053315) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:49:40.053339) EVENT_LOG_v1 {"time_micros": 1771840180053332, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:49:40.053362) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 4304349, prev total WAL file size 
4320828, number of live WAL files 2. Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:49:40.054760) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131303434' seq:72057594037927935, type:22 .. '7061786F73003131323936' seq:0, type:0; will stop at (end) Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(2342KB)], [18(17MB)] Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840180054816, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 20317118, "oldest_snapshot_seqno": -1} Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 11639 keys, 16330787 bytes, temperature: kUnknown Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840180140083, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 16330787, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16262559, "index_size": 38047, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29125, "raw_key_size": 312649, "raw_average_key_size": 26, "raw_value_size": 16062055, 
"raw_average_value_size": 1380, "num_data_blocks": 1452, "num_entries": 11639, "num_filter_entries": 11639, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771840180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}} Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:49:40.140497) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 16330787 bytes Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:49:40.142073) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 238.0 rd, 191.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 17.1 +0.0 blob) out(15.6 +0.0 blob), read-write-amplify(15.3) write-amplify(6.8) OK, records in: 12178, records dropped: 539 output_compression: NoCompression Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:49:40.142107) EVENT_LOG_v1 {"time_micros": 1771840180142090, "job": 8, "event": "compaction_finished", "compaction_time_micros": 85377, "compaction_time_cpu_micros": 50711, "output_level": 6, "num_output_files": 1, "total_output_size": 16330787, "num_input_records": 12178, "num_output_records": 11639, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840180142677, "job": 8, "event": "table_file_deletion", "file_number": 20} Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840180145251, "job": 
8, "event": "table_file_deletion", "file_number": 18} Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:49:40.054622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:49:40.145348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:49:40.145355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:49:40.145359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:49:40.145362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:49:40 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:49:40.145366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:49:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:49:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 e85: 6 total, 6 up, 6 in Feb 23 04:49:41 localhost systemd[1]: session-68.scope: Deactivated successfully. Feb 23 04:49:41 localhost systemd[1]: session-68.scope: Consumed 20.624s CPU time. Feb 23 04:49:41 localhost systemd-logind[759]: Session 68 logged out. Waiting for processes to exit. Feb 23 04:49:41 localhost systemd-logind[759]: Removed session 68. 
Feb 23 04:49:41 localhost ceph-mon[296755]: Reconfig service osd.default_drive_group Feb 23 04:49:41 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:41 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:41 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:41 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' Feb 23 04:49:41 localhost ceph-mon[296755]: from='mgr.26614 172.18.0.106:0/483999791' entity='mgr.np0005626463.wtksup' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 23 04:49:41 localhost ceph-mon[296755]: from='client.? 172.18.0.200:0/1175127914' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 23 04:49:41 localhost ceph-mon[296755]: Activating manager daemon np0005626466.nisqfq Feb 23 04:49:41 localhost ceph-mon[296755]: from='client.? 172.18.0.200:0/1175127914' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 23 04:49:41 localhost ceph-mon[296755]: Manager daemon np0005626466.nisqfq is now available Feb 23 04:49:41 localhost sshd[301797]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:49:41 localhost systemd-logind[759]: New session 69 of user ceph-admin. Feb 23 04:49:41 localhost systemd[1]: Started Session 69 of User ceph-admin. 
Feb 23 04:49:42 localhost ceph-mon[296755]: removing stray HostCache host record np0005626461.localdomain.devices.0 Feb 23 04:49:42 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626461.localdomain.devices.0"} : dispatch Feb 23 04:49:42 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626461.localdomain.devices.0"} : dispatch Feb 23 04:49:42 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005626461.localdomain.devices.0"}]': finished Feb 23 04:49:42 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626461.localdomain.devices.0"} : dispatch Feb 23 04:49:42 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005626461.localdomain.devices.0"} : dispatch Feb 23 04:49:42 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005626461.localdomain.devices.0"}]': finished Feb 23 04:49:42 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626466.nisqfq/mirror_snapshot_schedule"} : dispatch Feb 23 04:49:42 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626466.nisqfq/mirror_snapshot_schedule"} : dispatch Feb 23 04:49:42 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/np0005626466.nisqfq/trash_purge_schedule"} : dispatch Feb 23 04:49:42 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626466.nisqfq/trash_purge_schedule"} : dispatch Feb 23 04:49:42 localhost podman[301908]: 2026-02-23 09:49:42.504946236 +0000 UTC m=+0.095014253 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, name=rhceph, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, version=7, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, ceph=True, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 23 04:49:42 localhost podman[301908]: 2026-02-23 09:49:42.643649611 +0000 UTC m=+0.233717608 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, name=rhceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 04:49:42 localhost podman[241086]: time="2026-02-23T09:49:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:49:42 localhost podman[241086]: @ - - [23/Feb/2026:09:49:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 04:49:42 localhost podman[241086]: @ - - [23/Feb/2026:09:49:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17783 "" "Go-http-client/1.1" Feb 23 04:49:43 localhost nova_compute[280321]: 2026-02-23 09:49:43.679 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task 
ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:49:44 localhost ceph-mon[296755]: [23/Feb/2026:09:49:43] ENGINE Bus STARTING Feb 23 04:49:44 localhost ceph-mon[296755]: [23/Feb/2026:09:49:43] ENGINE Serving on http://172.18.0.108:8765 Feb 23 04:49:44 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:44 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:44 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:44 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:44 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:44 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:44 localhost ceph-mon[296755]: [23/Feb/2026:09:49:43] ENGINE Serving on https://172.18.0.108:7150 Feb 23 04:49:44 localhost ceph-mon[296755]: [23/Feb/2026:09:49:43] ENGINE Bus STARTED Feb 23 04:49:44 localhost ceph-mon[296755]: [23/Feb/2026:09:49:43] ENGINE Client ('172.18.0.108', 60904) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 23 04:49:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' 
cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 23 04:49:45 localhost ceph-mon[296755]: Adjusting osd_memory_target on np0005626466.localdomain to 836.6M Feb 23 04:49:45 localhost ceph-mon[296755]: Unable to set osd_memory_target on np0005626466.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 23 04:49:45 localhost ceph-mon[296755]: Adjusting osd_memory_target on np0005626463.localdomain to 836.6M Feb 23 04:49:45 localhost ceph-mon[296755]: Unable to set osd_memory_target on np0005626463.localdomain to 877246668: error parsing value: Value '877246668' is below 
minimum 939524096 Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 23 04:49:45 localhost ceph-mon[296755]: Adjusting osd_memory_target on np0005626465.localdomain to 836.6M Feb 23 04:49:45 localhost ceph-mon[296755]: Unable to set osd_memory_target on np0005626465.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 04:49:45 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:49:45 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/etc/ceph/ceph.conf Feb 23 04:49:45 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/etc/ceph/ceph.conf Feb 23 04:49:45 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/etc/ceph/ceph.conf Feb 23 04:49:45 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:49:46 localhost ceph-mon[296755]: Updating 
np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:49:46 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf Feb 23 04:49:46 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:49:46 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:49:46 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 23 04:49:47 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:49:47 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:49:47 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring Feb 23 04:49:47 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:47 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:47 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:47 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:47 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:47 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:47 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:49:48.308 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:49:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:49:48.311 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:49:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:49:48.311 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:49:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:49:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:49:48 localhost ceph-mon[296755]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Feb 23 04:49:48 localhost ceph-mon[296755]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Feb 23 04:49:48 localhost ceph-mon[296755]: Reconfiguring osd.5 (monmap changed)... 
Feb 23 04:49:48 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 23 04:49:48 localhost ceph-mon[296755]: Reconfiguring daemon osd.5 on np0005626463.localdomain Feb 23 04:49:49 localhost podman[302806]: 2026-02-23 09:49:49.015817986 +0000 UTC m=+0.088557787 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:49:49 localhost podman[302806]: 2026-02-23 09:49:49.026909763 +0000 UTC m=+0.099649584 container exec_died 
4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:49:49 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:49:49 localhost podman[302807]: 2026-02-23 09:49:49.079388754 +0000 UTC m=+0.149290919 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, release=1770267347, managed_by=edpm_ansible, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter) Feb 23 04:49:49 localhost podman[302807]: 2026-02-23 09:49:49.092203632 +0000 UTC m=+0.162105787 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, architecture=x86_64, 
io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 23 04:49:49 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:49:50 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:50 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:50 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:50 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:50 localhost ceph-mon[296755]: Reconfiguring mds.mds.np0005626463.qcthuc (monmap changed)... 
Feb 23 04:49:50 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:49:50 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626463.qcthuc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:49:50 localhost ceph-mon[296755]: Reconfiguring daemon mds.mds.np0005626463.qcthuc on np0005626463.localdomain Feb 23 04:49:50 localhost sshd[302850]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:49:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:49:51 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:51 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:51 localhost ceph-mon[296755]: Reconfiguring mgr.np0005626463.wtksup (monmap changed)... 
Feb 23 04:49:51 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:49:51 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626463.wtksup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:49:51 localhost ceph-mon[296755]: Reconfiguring daemon mgr.np0005626463.wtksup on np0005626463.localdomain Feb 23 04:49:51 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:51 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:51 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:51 localhost podman[302904]: Feb 23 04:49:51 localhost podman[302904]: 2026-02-23 09:49:51.873464304 +0000 UTC m=+0.079068768 container create 879fad1f1da450071abc96d49e5ec5829c82cfbfbc3795e0f81c34a12c918c81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_kirch, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, RELEASE=main, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_BRANCH=main, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
distribution-scope=public, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=) Feb 23 04:49:51 localhost systemd[1]: Started libpod-conmon-879fad1f1da450071abc96d49e5ec5829c82cfbfbc3795e0f81c34a12c918c81.scope. Feb 23 04:49:51 localhost systemd[1]: Started libcrun container. Feb 23 04:49:51 localhost podman[302904]: 2026-02-23 09:49:51.840887666 +0000 UTC m=+0.046492130 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:49:51 localhost podman[302904]: 2026-02-23 09:49:51.951499421 +0000 UTC m=+0.157103865 container init 879fad1f1da450071abc96d49e5ec5829c82cfbfbc3795e0f81c34a12c918c81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_kirch, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, RELEASE=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, CEPH_POINT_RELEASE=) Feb 23 04:49:51 localhost podman[302904]: 2026-02-23 09:49:51.962665729 +0000 UTC m=+0.168270143 container start 879fad1f1da450071abc96d49e5ec5829c82cfbfbc3795e0f81c34a12c918c81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_kirch, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, RELEASE=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=) Feb 23 04:49:51 localhost podman[302904]: 2026-02-23 09:49:51.962970169 +0000 UTC m=+0.168574583 container attach 879fad1f1da450071abc96d49e5ec5829c82cfbfbc3795e0f81c34a12c918c81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_kirch, GIT_BRANCH=main, ceph=True, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , 
build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, name=rhceph) Feb 23 04:49:51 localhost intelligent_kirch[302919]: 167 167 Feb 23 04:49:51 localhost systemd[1]: libpod-879fad1f1da450071abc96d49e5ec5829c82cfbfbc3795e0f81c34a12c918c81.scope: Deactivated successfully. 
Feb 23 04:49:51 localhost podman[302904]: 2026-02-23 09:49:51.96930342 +0000 UTC m=+0.174907894 container died 879fad1f1da450071abc96d49e5ec5829c82cfbfbc3795e0f81c34a12c918c81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_kirch, distribution-scope=public, name=rhceph, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 23 04:49:52 localhost podman[302924]: 2026-02-23 09:49:52.070669604 +0000 UTC m=+0.088596437 container remove 879fad1f1da450071abc96d49e5ec5829c82cfbfbc3795e0f81c34a12c918c81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_kirch, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, release=1770267347, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, vcs-type=git, RELEASE=main, ceph=True, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , io.openshift.expose-services=, architecture=x86_64, version=7, name=rhceph, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main) Feb 23 04:49:52 localhost systemd[1]: libpod-conmon-879fad1f1da450071abc96d49e5ec5829c82cfbfbc3795e0f81c34a12c918c81.scope: Deactivated successfully. Feb 23 04:49:52 localhost ceph-mon[296755]: Reconfiguring crash.np0005626465 (monmap changed)... 
Feb 23 04:49:52 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:49:52 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626465", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:49:52 localhost ceph-mon[296755]: Reconfiguring daemon crash.np0005626465 on np0005626465.localdomain Feb 23 04:49:52 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:52 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:52 localhost systemd[1]: var-lib-containers-storage-overlay-268490df2d4e30cc02b64816d83c677a386fd15ef0c454eb9589a8a6fcd36a88-merged.mount: Deactivated successfully. Feb 23 04:49:53 localhost ceph-mon[296755]: Reconfiguring osd.0 (monmap changed)... 
Feb 23 04:49:53 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 23 04:49:53 localhost ceph-mon[296755]: Reconfiguring daemon osd.0 on np0005626465.localdomain Feb 23 04:49:53 localhost podman[302994]: Feb 23 04:49:53 localhost podman[302994]: 2026-02-23 09:49:53.686497614 +0000 UTC m=+0.112645477 container create 4201a430189c489241f5172789c9b171cbcfc1966c9a3350e899342fedd60315 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_dubinsky, release=1770267347, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-type=git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, ceph=True, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , name=rhceph, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, distribution-scope=public) Feb 23 04:49:53 localhost systemd[1]: Started libpod-conmon-4201a430189c489241f5172789c9b171cbcfc1966c9a3350e899342fedd60315.scope. 
Feb 23 04:49:53 localhost podman[302994]: 2026-02-23 09:49:53.650717449 +0000 UTC m=+0.076865322 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:49:53 localhost systemd[1]: Started libcrun container. Feb 23 04:49:53 localhost podman[302994]: 2026-02-23 09:49:53.765750078 +0000 UTC m=+0.191897941 container init 4201a430189c489241f5172789c9b171cbcfc1966c9a3350e899342fedd60315 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_dubinsky, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, ceph=True, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, io.openshift.expose-services=, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 23 04:49:53 localhost podman[302994]: 2026-02-23 09:49:53.777321229 +0000 UTC m=+0.203469112 container start 4201a430189c489241f5172789c9b171cbcfc1966c9a3350e899342fedd60315 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_dubinsky, release=1770267347, GIT_CLEAN=True, RELEASE=main, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, 
CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, ceph=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, name=rhceph, io.openshift.expose-services=, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container) Feb 23 04:49:53 localhost podman[302994]: 2026-02-23 09:49:53.777734721 +0000 UTC m=+0.203882624 container attach 4201a430189c489241f5172789c9b171cbcfc1966c9a3350e899342fedd60315 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_dubinsky, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, release=1770267347, distribution-scope=public, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, vcs-type=git, maintainer=Guillaume Abrioux ) Feb 23 04:49:53 localhost vigilant_dubinsky[303009]: 167 167 Feb 23 04:49:53 localhost systemd[1]: libpod-4201a430189c489241f5172789c9b171cbcfc1966c9a3350e899342fedd60315.scope: Deactivated successfully. Feb 23 04:49:53 localhost podman[302994]: 2026-02-23 09:49:53.781598818 +0000 UTC m=+0.207746731 container died 4201a430189c489241f5172789c9b171cbcfc1966c9a3350e899342fedd60315 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_dubinsky, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhceph ceph, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, io.buildah.version=1.42.2, 
com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main) Feb 23 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:49:53 localhost systemd[1]: var-lib-containers-storage-overlay-a0aee6f5d81c5dde4112d13cab2bbe3c7d3ee3f8df69021059f307c23d41d679-merged.mount: Deactivated successfully. Feb 23 04:49:53 localhost podman[303014]: 2026-02-23 09:49:53.893843342 +0000 UTC m=+0.098033804 container remove 4201a430189c489241f5172789c9b171cbcfc1966c9a3350e899342fedd60315 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_dubinsky, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, io.openshift.tags=rhceph ceph, version=7, name=rhceph) Feb 23 04:49:53 localhost systemd[1]: libpod-conmon-4201a430189c489241f5172789c9b171cbcfc1966c9a3350e899342fedd60315.scope: Deactivated successfully. 
Feb 23 04:49:53 localhost podman[303026]: 2026-02-23 09:49:53.98281968 +0000 UTC m=+0.099809118 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:49:54 localhost podman[303026]: 2026-02-23 09:49:54.052024149 +0000 UTC m=+0.169013567 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Feb 23 04:49:54 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:49:54 localhost podman[303114]: Feb 23 04:49:54 localhost podman[303114]: 2026-02-23 09:49:54.794838725 +0000 UTC m=+0.082258285 container create af3a546afdb734962dbe5f046dcb702de6dc295f0a04929985a60f53e1a218e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goldberg, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, GIT_BRANCH=main, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, release=1770267347) Feb 23 04:49:54 localhost systemd[1]: Started libpod-conmon-af3a546afdb734962dbe5f046dcb702de6dc295f0a04929985a60f53e1a218e2.scope. Feb 23 04:49:54 localhost systemd[1]: Started libcrun container. 
Feb 23 04:49:54 localhost podman[303114]: 2026-02-23 09:49:54.763149114 +0000 UTC m=+0.050568764 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:49:54 localhost podman[303114]: 2026-02-23 09:49:54.865090865 +0000 UTC m=+0.152510435 container init af3a546afdb734962dbe5f046dcb702de6dc295f0a04929985a60f53e1a218e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goldberg, GIT_BRANCH=main, GIT_CLEAN=True, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, vcs-type=git, maintainer=Guillaume Abrioux , distribution-scope=public, RELEASE=main, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:49:54 localhost podman[303114]: 2026-02-23 09:49:54.877994517 +0000 UTC m=+0.165414087 container start af3a546afdb734962dbe5f046dcb702de6dc295f0a04929985a60f53e1a218e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goldberg, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, architecture=x86_64, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, GIT_BRANCH=main, name=rhceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, RELEASE=main, io.openshift.tags=rhceph ceph, release=1770267347) Feb 23 04:49:54 localhost podman[303114]: 2026-02-23 09:49:54.878348337 +0000 UTC m=+0.165767907 container attach af3a546afdb734962dbe5f046dcb702de6dc295f0a04929985a60f53e1a218e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goldberg, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, ceph=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , name=rhceph, io.openshift.tags=rhceph ceph, distribution-scope=public, RELEASE=main, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, description=Red Hat Ceph 
Storage 7, build-date=2026-02-09T10:25:24Z, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 23 04:49:54 localhost thirsty_goldberg[303129]: 167 167 Feb 23 04:49:54 localhost systemd[1]: libpod-af3a546afdb734962dbe5f046dcb702de6dc295f0a04929985a60f53e1a218e2.scope: Deactivated successfully. Feb 23 04:49:54 localhost podman[303114]: 2026-02-23 09:49:54.882777911 +0000 UTC m=+0.170197511 container died af3a546afdb734962dbe5f046dcb702de6dc295f0a04929985a60f53e1a218e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goldberg, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.component=rhceph-container, ceph=True, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=rhceph ceph) Feb 23 
04:49:54 localhost systemd[1]: var-lib-containers-storage-overlay-9819392b2d8a7294bde13117bfd6594226e5b587cf92d37c8170c6fc0ac5e6f0-merged.mount: Deactivated successfully. Feb 23 04:49:54 localhost podman[303134]: 2026-02-23 09:49:54.980809044 +0000 UTC m=+0.085713470 container remove af3a546afdb734962dbe5f046dcb702de6dc295f0a04929985a60f53e1a218e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goldberg, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., release=1770267347, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, distribution-scope=public, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, version=7, name=rhceph, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 23 04:49:54 localhost systemd[1]: libpod-conmon-af3a546afdb734962dbe5f046dcb702de6dc295f0a04929985a60f53e1a218e2.scope: Deactivated successfully. 
Feb 23 04:49:55 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:55 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:55 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:55 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:55 localhost ceph-mon[296755]: Reconfiguring osd.3 (monmap changed)... Feb 23 04:49:55 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 23 04:49:55 localhost ceph-mon[296755]: Reconfiguring daemon osd.3 on np0005626465.localdomain Feb 23 04:49:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:49:55 localhost podman[303212]: Feb 23 04:49:55 localhost podman[303212]: 2026-02-23 09:49:55.858097767 +0000 UTC m=+0.079919534 container create 2985ab7917b2e21bcc1fad1209b11cf4a74e06c5590519c419c7e13b0e8ca09e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_haslett, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, architecture=x86_64, GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, release=1770267347, version=7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True) Feb 23 04:49:55 localhost systemd[1]: Started libpod-conmon-2985ab7917b2e21bcc1fad1209b11cf4a74e06c5590519c419c7e13b0e8ca09e.scope. Feb 23 04:49:55 localhost systemd[1]: Started libcrun container. Feb 23 04:49:55 localhost podman[303212]: 2026-02-23 09:49:55.925640136 +0000 UTC m=+0.147461903 container init 2985ab7917b2e21bcc1fad1209b11cf4a74e06c5590519c419c7e13b0e8ca09e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_haslett, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, RELEASE=main, vcs-type=git, io.buildah.version=1.42.2, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.expose-services=) Feb 23 04:49:55 localhost podman[303212]: 
2026-02-23 09:49:55.826982134 +0000 UTC m=+0.048803921 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:49:55 localhost podman[303212]: 2026-02-23 09:49:55.935203126 +0000 UTC m=+0.157024893 container start 2985ab7917b2e21bcc1fad1209b11cf4a74e06c5590519c419c7e13b0e8ca09e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_haslett, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_CLEAN=True, release=1770267347, distribution-scope=public) Feb 23 04:49:55 localhost podman[303212]: 2026-02-23 09:49:55.935468784 +0000 UTC m=+0.157290521 container attach 2985ab7917b2e21bcc1fad1209b11cf4a74e06c5590519c419c7e13b0e8ca09e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_haslett, GIT_BRANCH=main, com.redhat.component=rhceph-container, release=1770267347, version=7, name=rhceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, 
architecture=x86_64, vcs-type=git, ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, RELEASE=main, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7) Feb 23 04:49:55 localhost peaceful_haslett[303227]: 167 167 Feb 23 04:49:55 localhost systemd[1]: libpod-2985ab7917b2e21bcc1fad1209b11cf4a74e06c5590519c419c7e13b0e8ca09e.scope: Deactivated successfully. 
Feb 23 04:49:55 localhost podman[303212]: 2026-02-23 09:49:55.937883557 +0000 UTC m=+0.159705334 container died 2985ab7917b2e21bcc1fad1209b11cf4a74e06c5590519c419c7e13b0e8ca09e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_haslett, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, ceph=True, name=rhceph, version=7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat Ceph Storage 7, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 23 04:49:56 localhost podman[303232]: 2026-02-23 09:49:56.026491024 +0000 UTC m=+0.082701959 container remove 2985ab7917b2e21bcc1fad1209b11cf4a74e06c5590519c419c7e13b0e8ca09e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_haslett, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, com.redhat.component=rhceph-container, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:49:56 localhost systemd[1]: libpod-conmon-2985ab7917b2e21bcc1fad1209b11cf4a74e06c5590519c419c7e13b0e8ca09e.scope: Deactivated successfully. 
Feb 23 04:49:56 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:56 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:56 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:56 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:56 localhost ceph-mon[296755]: Reconfiguring mds.mds.np0005626465.drvnoy (monmap changed)... Feb 23 04:49:56 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:49:56 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005626465.drvnoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 23 04:49:56 localhost ceph-mon[296755]: Reconfiguring daemon mds.mds.np0005626465.drvnoy on np0005626465.localdomain Feb 23 04:49:56 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:56 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:56 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:49:56 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005626465.hlpkwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 23 04:49:56 localhost podman[303300]: Feb 23 04:49:56 localhost podman[303300]: 2026-02-23 09:49:56.758737229 +0000 UTC 
m=+0.060194557 container create e4f615a167b23f0568fa4e8d77fe4dee5a1156d92a99a622ee5a9ad9f5e74c53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_hodgkin, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, release=1770267347, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True) Feb 23 04:49:56 localhost systemd[1]: Started libpod-conmon-e4f615a167b23f0568fa4e8d77fe4dee5a1156d92a99a622ee5a9ad9f5e74c53.scope. Feb 23 04:49:56 localhost systemd[1]: Started libcrun container. 
Feb 23 04:49:56 localhost podman[303300]: 2026-02-23 09:49:56.819251155 +0000 UTC m=+0.120708513 container init e4f615a167b23f0568fa4e8d77fe4dee5a1156d92a99a622ee5a9ad9f5e74c53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_hodgkin, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, release=1770267347, name=rhceph, build-date=2026-02-09T10:25:24Z, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, ceph=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, version=7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 23 04:49:56 localhost podman[303300]: 2026-02-23 09:49:56.730267086 +0000 UTC m=+0.031724474 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:49:56 localhost podman[303300]: 2026-02-23 09:49:56.827963039 +0000 UTC m=+0.129420397 container start e4f615a167b23f0568fa4e8d77fe4dee5a1156d92a99a622ee5a9ad9f5e74c53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_hodgkin, io.openshift.expose-services=, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, version=7, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, vendor=Red Hat, Inc., io.buildah.version=1.42.2, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux ) Feb 23 04:49:56 localhost podman[303300]: 2026-02-23 09:49:56.828224676 +0000 UTC m=+0.129682064 container attach e4f615a167b23f0568fa4e8d77fe4dee5a1156d92a99a622ee5a9ad9f5e74c53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_hodgkin, name=rhceph, com.redhat.component=rhceph-container, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, 
url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, vendor=Red Hat, Inc., architecture=x86_64, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, release=1770267347, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 23 04:49:56 localhost elated_hodgkin[303315]: 167 167 Feb 23 04:49:56 localhost systemd[1]: libpod-e4f615a167b23f0568fa4e8d77fe4dee5a1156d92a99a622ee5a9ad9f5e74c53.scope: Deactivated successfully. Feb 23 04:49:56 localhost podman[303300]: 2026-02-23 09:49:56.831516926 +0000 UTC m=+0.132974304 container died e4f615a167b23f0568fa4e8d77fe4dee5a1156d92a99a622ee5a9ad9f5e74c53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_hodgkin, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, RELEASE=main, maintainer=Guillaume Abrioux , ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True) Feb 23 04:49:56 localhost systemd[1]: 
var-lib-containers-storage-overlay-c5938442e6ae9710cdf5a4eaf4adb13c0473d82062f5ce4c1f71e34ac7e3c651-merged.mount: Deactivated successfully. Feb 23 04:49:56 localhost systemd[1]: var-lib-containers-storage-overlay-1b71d74130b0a387f892957e7b76b87152b999ac413389dc5fe9757fdf3fd0cd-merged.mount: Deactivated successfully. Feb 23 04:49:56 localhost podman[303320]: 2026-02-23 09:49:56.942624916 +0000 UTC m=+0.101438958 container remove e4f615a167b23f0568fa4e8d77fe4dee5a1156d92a99a622ee5a9ad9f5e74c53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_hodgkin, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, version=7, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, name=rhceph, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 23 04:49:56 localhost systemd[1]: libpod-conmon-e4f615a167b23f0568fa4e8d77fe4dee5a1156d92a99a622ee5a9ad9f5e74c53.scope: Deactivated successfully. Feb 23 04:49:57 localhost ceph-mon[296755]: Reconfiguring mgr.np0005626465.hlpkwo (monmap changed)... 
Feb 23 04:49:57 localhost ceph-mon[296755]: Reconfiguring daemon mgr.np0005626465.hlpkwo on np0005626465.localdomain Feb 23 04:49:57 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:57 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:57 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:49:57 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005626466", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 23 04:49:58 localhost ceph-mon[296755]: Reconfiguring crash.np0005626466 (monmap changed)... Feb 23 04:49:58 localhost ceph-mon[296755]: Reconfiguring daemon crash.np0005626466 on np0005626466.localdomain Feb 23 04:49:58 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:58 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:58 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:58 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 23 04:49:59 localhost ceph-mon[296755]: Saving service mon spec with placement label:mon Feb 23 04:49:59 localhost ceph-mon[296755]: Reconfiguring osd.1 (monmap changed)... 
Feb 23 04:49:59 localhost ceph-mon[296755]: Reconfiguring daemon osd.1 on np0005626466.localdomain Feb 23 04:49:59 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:59 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:59 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:49:59 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:50:00 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 23 04:50:00 localhost ceph-mon[296755]: Reconfiguring daemon osd.4 on np0005626466.localdomain Feb 23 04:50:00 localhost ceph-mon[296755]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Feb 23 04:50:00 localhost ceph-mon[296755]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Feb 23 04:50:00 localhost ceph-mon[296755]: stray daemon mgr.np0005626461.lrfquh on host np0005626461.localdomain not managed by cephadm Feb 23 04:50:00 localhost ceph-mon[296755]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Feb 23 04:50:00 localhost ceph-mon[296755]: stray host np0005626461.localdomain has 1 stray daemons: ['mgr.np0005626461.lrfquh'] Feb 23 04:50:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:50:01 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:50:01 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:50:01 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:50:01 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 
04:50:01 localhost ceph-mon[296755]: Reconfiguring mon.np0005626466 (monmap changed)... Feb 23 04:50:01 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:50:01 localhost ceph-mon[296755]: Reconfiguring daemon mon.np0005626466 on np0005626466.localdomain Feb 23 04:50:01 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:50:01 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:50:01 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:50:01 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:50:01 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:50:01 localhost openstack_network_exporter[243519]: ERROR 09:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:50:01 localhost openstack_network_exporter[243519]: Feb 23 04:50:01 localhost openstack_network_exporter[243519]: ERROR 09:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:50:01 localhost openstack_network_exporter[243519]: Feb 23 04:50:02 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:50:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:50:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 04:50:02 localhost podman[303373]: 2026-02-23 09:50:02.9823593 +0000 UTC m=+0.086271477 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:50:02 localhost 
podman[303373]: 2026-02-23 09:50:02.987784444 +0000 UTC m=+0.091696631 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 04:50:03 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:50:03 localhost podman[303374]: 2026-02-23 09:50:03.032972885 +0000 UTC m=+0.133822810 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Feb 23 04:50:03 localhost podman[303374]: 2026-02-23 09:50:03.071896685 +0000 UTC m=+0.172746560 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, 
org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:50:03 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:50:03 localhost podman[303441]: Feb 23 04:50:03 localhost podman[303441]: 2026-02-23 09:50:03.364643492 +0000 UTC m=+0.073108668 container create 3a17c470722daf8b3c8301916f6341a8ce8ca7d3336710b5f10218d359058321 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_newton, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, release=1770267347, ceph=True, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, RELEASE=main, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, version=7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 23 04:50:03 localhost systemd[1]: Started libpod-conmon-3a17c470722daf8b3c8301916f6341a8ce8ca7d3336710b5f10218d359058321.scope. Feb 23 04:50:03 localhost systemd[1]: Started libcrun container. 
Feb 23 04:50:03 localhost podman[303441]: 2026-02-23 09:50:03.424483386 +0000 UTC m=+0.132948552 container init 3a17c470722daf8b3c8301916f6341a8ce8ca7d3336710b5f10218d359058321 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_newton, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, release=1770267347, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, ceph=True, vcs-type=git, build-date=2026-02-09T10:25:24Z, distribution-scope=public, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, name=rhceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, vendor=Red Hat, Inc.) 
Feb 23 04:50:03 localhost podman[303441]: 2026-02-23 09:50:03.334663003 +0000 UTC m=+0.043128189 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 23 04:50:03 localhost podman[303441]: 2026-02-23 09:50:03.43582236 +0000 UTC m=+0.144287536 container start 3a17c470722daf8b3c8301916f6341a8ce8ca7d3336710b5f10218d359058321 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_newton, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, distribution-scope=public, architecture=x86_64, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.buildah.version=1.42.2, GIT_CLEAN=True, CEPH_POINT_RELEASE=) Feb 23 04:50:03 localhost jolly_newton[303457]: 167 167 Feb 23 04:50:03 localhost systemd[1]: libpod-3a17c470722daf8b3c8301916f6341a8ce8ca7d3336710b5f10218d359058321.scope: Deactivated successfully. 
Feb 23 04:50:03 localhost podman[303441]: 2026-02-23 09:50:03.436269125 +0000 UTC m=+0.144734301 container attach 3a17c470722daf8b3c8301916f6341a8ce8ca7d3336710b5f10218d359058321 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_newton, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, RELEASE=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, ceph=True) Feb 23 04:50:03 localhost podman[303441]: 2026-02-23 09:50:03.440116011 +0000 UTC m=+0.148581237 container died 3a17c470722daf8b3c8301916f6341a8ce8ca7d3336710b5f10218d359058321 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_newton, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
vendor=Red Hat, Inc., version=7, distribution-scope=public, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=) Feb 23 04:50:03 localhost podman[303462]: 2026-02-23 09:50:03.527213012 +0000 UTC m=+0.081414590 container remove 3a17c470722daf8b3c8301916f6341a8ce8ca7d3336710b5f10218d359058321 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_newton, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, vendor=Red Hat, Inc., ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, RELEASE=main, GIT_BRANCH=main, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, 
vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2) Feb 23 04:50:03 localhost systemd[1]: libpod-conmon-3a17c470722daf8b3c8301916f6341a8ce8ca7d3336710b5f10218d359058321.scope: Deactivated successfully. Feb 23 04:50:03 localhost ceph-mon[296755]: Reconfiguring mon.np0005626463 (monmap changed)... Feb 23 04:50:03 localhost ceph-mon[296755]: Reconfiguring daemon mon.np0005626463 on np0005626463.localdomain Feb 23 04:50:03 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:50:03 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:50:03 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 23 04:50:03 localhost systemd[1]: var-lib-containers-storage-overlay-97a31c8464ce57fc38b7249a5d5b681cd1e055083a1b14496d19a7dab0afcb32-merged.mount: Deactivated successfully. Feb 23 04:50:04 localhost ceph-mon[296755]: Reconfiguring mon.np0005626465 (monmap changed)... Feb 23 04:50:04 localhost ceph-mon[296755]: Reconfiguring daemon mon.np0005626465 on np0005626465.localdomain Feb 23 04:50:04 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:50:04 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:50:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:50:07 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:50:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 04:50:07 localhost podman[303479]: 2026-02-23 09:50:07.763694332 +0000 UTC m=+0.080044028 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 04:50:07 localhost podman[303479]: 2026-02-23 09:50:07.778824601 +0000 UTC m=+0.095174337 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:50:07 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:50:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:50:12 localhost podman[241086]: time="2026-02-23T09:50:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:50:12 localhost podman[241086]: @ - - [23/Feb/2026:09:50:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 04:50:12 localhost podman[241086]: @ - - [23/Feb/2026:09:50:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17783 "" "Go-http-client/1.1" Feb 23 04:50:12 localhost nova_compute[280321]: 2026-02-23 09:50:12.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:50:12 localhost nova_compute[280321]: 2026-02-23 09:50:12.913 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:50:12 localhost nova_compute[280321]: 2026-02-23 09:50:12.914 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:50:12 localhost nova_compute[280321]: 2026-02-23 09:50:12.914 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:50:12 localhost nova_compute[280321]: 2026-02-23 09:50:12.914 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:50:12 localhost nova_compute[280321]: 2026-02-23 09:50:12.915 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:50:13 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:50:13 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1942676297' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:50:13 localhost nova_compute[280321]: 2026-02-23 09:50:13.328 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:50:13 localhost nova_compute[280321]: 2026-02-23 09:50:13.527 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:50:13 localhost nova_compute[280321]: 2026-02-23 09:50:13.528 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=12415MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": 
"1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:50:13 localhost nova_compute[280321]: 2026-02-23 09:50:13.528 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:50:13 localhost nova_compute[280321]: 2026-02-23 09:50:13.528 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:50:13 localhost nova_compute[280321]: 2026-02-23 09:50:13.673 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - 
-] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:50:13 localhost nova_compute[280321]: 2026-02-23 09:50:13.673 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:50:13 localhost nova_compute[280321]: 2026-02-23 09:50:13.692 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:50:14 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:50:14 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1873174007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:50:14 localhost nova_compute[280321]: 2026-02-23 09:50:14.102 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:50:14 localhost nova_compute[280321]: 2026-02-23 09:50:14.108 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:50:14 localhost nova_compute[280321]: 2026-02-23 09:50:14.140 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:50:14 localhost nova_compute[280321]: 2026-02-23 09:50:14.142 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:50:14 localhost nova_compute[280321]: 2026-02-23 09:50:14.143 280325 DEBUG 
oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:50:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:50:16 localhost nova_compute[280321]: 2026-02-23 09:50:16.145 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:50:16 localhost nova_compute[280321]: 2026-02-23 09:50:16.146 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:50:16 localhost nova_compute[280321]: 2026-02-23 09:50:16.146 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:50:16 localhost nova_compute[280321]: 2026-02-23 09:50:16.166 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:50:16 localhost nova_compute[280321]: 2026-02-23 09:50:16.167 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:50:16 localhost nova_compute[280321]: 2026-02-23 09:50:16.167 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:50:16 localhost nova_compute[280321]: 2026-02-23 09:50:16.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:50:16 localhost nova_compute[280321]: 2026-02-23 09:50:16.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:50:16 localhost nova_compute[280321]: 2026-02-23 09:50:16.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:50:17 localhost nova_compute[280321]: 2026-02-23 09:50:17.889 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:50:17 localhost nova_compute[280321]: 2026-02-23 09:50:17.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:50:18 localhost nova_compute[280321]: 2026-02-23 09:50:18.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:50:19 localhost sshd[303546]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:50:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:50:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:50:20 localhost podman[303547]: 2026-02-23 09:50:20.015219784 +0000 UTC m=+0.088038381 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:50:20 localhost podman[303547]: 2026-02-23 09:50:20.029499157 +0000 UTC m=+0.102317744 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', 
'--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:50:20 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:50:20 localhost podman[303548]: 2026-02-23 09:50:20.116001321 +0000 UTC m=+0.186260300 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1770267347, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9/ubi-minimal, version=9.7, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base 
Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z) Feb 23 04:50:20 localhost podman[303548]: 2026-02-23 09:50:20.153677933 +0000 UTC m=+0.223936902 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, version=9.7, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, vcs-type=git, config_id=openstack_network_exporter, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible) Feb 23 04:50:20 
localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:50:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:50:24 localhost systemd[1]: tmp-crun.mfMvqy.mount: Deactivated successfully. Feb 23 04:50:24 localhost podman[303590]: 2026-02-23 09:50:24.973162064 +0000 UTC m=+0.053679200 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:50:25 localhost podman[303590]: 2026-02-23 09:50:25.040526636 +0000 UTC m=+0.121043762 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:50:25 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:50:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:50:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:50:31 localhost openstack_network_exporter[243519]: ERROR 09:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:50:31 localhost openstack_network_exporter[243519]: Feb 23 04:50:31 localhost openstack_network_exporter[243519]: ERROR 09:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:50:31 localhost openstack_network_exporter[243519]: Feb 23 04:50:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:50:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 04:50:34 localhost podman[303615]: 2026-02-23 09:50:34.007809708 +0000 UTC m=+0.080237035 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:50:34 localhost 
podman[303616]: 2026-02-23 09:50:34.064725403 +0000 UTC m=+0.133626993 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216) 
Feb 23 04:50:34 localhost podman[303616]: 2026-02-23 09:50:34.074275222 +0000 UTC m=+0.143176752 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute) Feb 
23 04:50:34 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:50:34 localhost podman[303615]: 2026-02-23 09:50:34.088968369 +0000 UTC m=+0.161395716 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 23 04:50:34 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:50:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:50:36 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "status", "format": "json"} v 0) Feb 23 04:50:36 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/2126719983' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch Feb 23 04:50:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:50:37 localhost systemd[1]: tmp-crun.ARbqM2.mount: Deactivated successfully. Feb 23 04:50:37 localhost podman[303652]: 2026-02-23 09:50:37.998937047 +0000 UTC m=+0.075909353 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 04:50:38 localhost podman[303652]: 2026-02-23 09:50:38.009871639 +0000 UTC m=+0.086843895 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 04:50:38 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. 
Feb 23 04:50:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:50:41 localhost sshd[303676]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:50:42 localhost podman[241086]: time="2026-02-23T09:50:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:50:42 localhost podman[241086]: @ - - [23/Feb/2026:09:50:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 04:50:42 localhost podman[241086]: @ - - [23/Feb/2026:09:50:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17784 "" "Go-http-client/1.1" Feb 23 04:50:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:50:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:50:48.309 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:50:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:50:48.309 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:50:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:50:48.310 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:50:49 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) Feb 23 04:50:49 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/4257872446' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch Feb 23 04:50:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:50:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:50:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:50:51 localhost podman[303679]: 2026-02-23 09:50:51.007689824 +0000 UTC m=+0.079624307 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, version=9.7, vendor=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, architecture=x86_64, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 23 04:50:51 localhost podman[303679]: 2026-02-23 09:50:51.023848394 +0000 UTC m=+0.095782927 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1770267347, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, architecture=x86_64, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z) Feb 23 04:50:51 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:50:51 localhost systemd[1]: tmp-crun.mifbt6.mount: Deactivated successfully. 
Feb 23 04:50:51 localhost podman[303678]: 2026-02-23 09:50:51.123013631 +0000 UTC m=+0.197949565 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:50:51 localhost podman[303678]: 2026-02-23 09:50:51.133801258 +0000 UTC m=+0.208737152 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , 
managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:50:51 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:50:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:50:56 localhost podman[303722]: 2026-02-23 09:50:56.001887472 +0000 UTC m=+0.081548654 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Feb 23 04:50:56 localhost podman[303722]: 2026-02-23 09:50:56.070998467 +0000 UTC m=+0.150659639 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Feb 23 04:50:56 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.089 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.090 12 
DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no 
resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:50:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:50:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:51:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:51:01 localhost openstack_network_exporter[243519]: ERROR 09:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:51:01 localhost openstack_network_exporter[243519]: Feb 23 04:51:01 localhost openstack_network_exporter[243519]: ERROR 09:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:51:01 localhost openstack_network_exporter[243519]: Feb 23 04:51:02 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 23 04:51:02 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.200:0/3625719757' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:51:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4258538050' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:51:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4258538050' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:51:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:51:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 04:51:04 localhost podman[303833]: 2026-02-23 09:51:04.868145799 +0000 UTC m=+0.077869632 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.43.0) Feb 23 04:51:04 localhost 
podman[303833]: 2026-02-23 09:51:04.878861543 +0000 UTC m=+0.088585376 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 23 04:51:04 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:51:04 localhost systemd[1]: tmp-crun.VwLdDh.mount: Deactivated successfully. Feb 23 04:51:04 localhost podman[303834]: 2026-02-23 09:51:04.922450406 +0000 UTC m=+0.129772016 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, 
org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 04:51:04 localhost podman[303834]: 2026-02-23 09:51:04.93778705 +0000 UTC m=+0.145108700 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute) Feb 23 04:51:04 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 e86: 6 total, 6 up, 6 in Feb 23 04:51:04 localhost ceph-mgr[285904]: mgr handle_mgr_map Activating! Feb 23 04:51:04 localhost ceph-mgr[285904]: mgr handle_mgr_map I am now activating Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626463"} v 0) Feb 23 04:51:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon metadata", "id": "np0005626463"} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626465"} v 0) Feb 23 04:51:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon metadata", "id": "np0005626465"} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005626466"} v 0) Feb 23 04:51:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon metadata", "id": "np0005626466"} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005626465.drvnoy"} v 0) Feb 23 04:51:04 
localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mds metadata", "who": "mds.np0005626465.drvnoy"} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon).mds e17 all = 0 Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005626466.vaywlp"} v 0) Feb 23 04:51:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mds metadata", "who": "mds.np0005626466.vaywlp"} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon).mds e17 all = 0 Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005626463.qcthuc"} v 0) Feb 23 04:51:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mds metadata", "who": "mds.np0005626463.qcthuc"} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon).mds e17 all = 0 Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005626465.hlpkwo", "id": "np0005626465.hlpkwo"} v 0) Feb 23 04:51:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mgr metadata", "who": "np0005626465.hlpkwo", "id": "np0005626465.hlpkwo"} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005626463.wtksup", "id": "np0005626463.wtksup"} v 0) Feb 23 04:51:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 
172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mgr metadata", "who": "np0005626463.wtksup", "id": "np0005626463.wtksup"} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Feb 23 04:51:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd metadata", "id": 0} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Feb 23 04:51:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd metadata", "id": 1} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Feb 23 04:51:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd metadata", "id": 2} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Feb 23 04:51:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd metadata", "id": 3} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Feb 23 04:51:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd metadata", "id": 4} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd 
metadata", "id": 5} v 0) Feb 23 04:51:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd metadata", "id": 5} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mds metadata"} v 0) Feb 23 04:51:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mds metadata"} : dispatch Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon).mds e17 all = 1 Feb 23 04:51:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd metadata"} v 0) Feb 23 04:51:05 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd metadata"} : dispatch Feb 23 04:51:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mon metadata"} v 0) Feb 23 04:51:05 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mon metadata"} : dispatch Feb 23 04:51:05 localhost ceph-mgr[285904]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:51:05 localhost ceph-mgr[285904]: mgr load Constructed class from module: balancer Feb 23 04:51:05 localhost ceph-mgr[285904]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:51:05 localhost ceph-mgr[285904]: [balancer INFO root] Starting Feb 23 04:51:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_09:51:05 Feb 23 04:51:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 04:51:05 localhost ceph-mgr[285904]: [balancer INFO root] Some PGs (1.000000) are unknown; try again 
later Feb 23 04:51:05 localhost systemd-logind[759]: Session 69 logged out. Waiting for processes to exit. Feb 23 04:51:05 localhost systemd[1]: session-69.scope: Deactivated successfully. Feb 23 04:51:05 localhost systemd[1]: session-69.scope: Consumed 12.872s CPU time. Feb 23 04:51:05 localhost systemd-logind[759]: Removed session 69. Feb 23 04:51:05 localhost ceph-mgr[285904]: mgr load Constructed class from module: cephadm Feb 23 04:51:05 localhost ceph-mgr[285904]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:51:05 localhost ceph-mgr[285904]: mgr load Constructed class from module: crash Feb 23 04:51:05 localhost ceph-mgr[285904]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:51:05 localhost ceph-mgr[285904]: mgr load Constructed class from module: devicehealth Feb 23 04:51:05 localhost ceph-mgr[285904]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:51:05 localhost ceph-mgr[285904]: mgr load Constructed class from module: iostat Feb 23 04:51:05 localhost ceph-mgr[285904]: [devicehealth INFO root] Starting Feb 23 04:51:05 localhost ceph-mgr[285904]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:51:05 localhost ceph-mgr[285904]: mgr load Constructed class from module: nfs Feb 23 04:51:05 localhost ceph-mgr[285904]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:51:05 localhost ceph-mgr[285904]: mgr load Constructed class from module: orchestrator Feb 23 04:51:05 localhost ceph-mgr[285904]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:51:05 localhost ceph-mgr[285904]: mgr load Constructed class from module: pg_autoscaler Feb 23 04:51:05 localhost ceph-mgr[285904]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:51:05 localhost ceph-mgr[285904]: mgr load Constructed class from module: progress Feb 23 04:51:05 localhost 
ceph-mgr[285904]: [progress INFO root] Loading... Feb 23 04:51:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 04:51:05 localhost ceph-mgr[285904]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:51:05 localhost ceph-mgr[285904]: [progress INFO root] Loaded OSDMap, ready. Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] recovery thread starting Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] starting setup Feb 23 04:51:05 localhost ceph-mgr[285904]: mgr load Constructed class from module: rbd_support Feb 23 04:51:05 localhost ceph-mgr[285904]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:51:05 localhost ceph-mgr[285904]: mgr load Constructed class from module: restful Feb 23 04:51:05 localhost ceph-mgr[285904]: [restful INFO root] server_addr: :: server_port: 8003 Feb 23 04:51:05 localhost ceph-mgr[285904]: [restful WARNING root] server not running: no certificate configured Feb 23 04:51:05 localhost ceph-mgr[285904]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:51:05 localhost ceph-mgr[285904]: mgr load Constructed class from module: status Feb 23 04:51:05 localhost ceph-mgr[285904]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:51:05 localhost ceph-mgr[285904]: mgr load Constructed class from module: telemetry Feb 23 04:51:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626465.hlpkwo/mirror_snapshot_schedule"} v 0) Feb 23 04:51:05 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' 
cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626465.hlpkwo/mirror_snapshot_schedule"} : dispatch Feb 23 04:51:05 localhost ceph-mgr[285904]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:51:05 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 04:51:05 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 04:51:05 localhost ceph-mgr[285904]: mgr load Constructed class from module: volumes Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] PerfHandler: starting Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_task_task: vms, start_after= Feb 23 04:51:05 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost 
ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:51:05.121+0000 7fc3bbcb0640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:51:05.121+0000 7fc3bbcb0640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:51:05.121+0000 7fc3bbcb0640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:51:05.121+0000 7fc3bbcb0640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:51:05.121+0000 7fc3bbcb0640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:51:05.123+0000 7fc3bf4b7640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:51:05.123+0000 7fc3bf4b7640 -1 client.0 error registering admin socket command: (17) 
File exists Feb 23 04:51:05 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:51:05.123+0000 7fc3bf4b7640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:51:05.123+0000 7fc3bf4b7640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T09:51:05.123+0000 7fc3bf4b7640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_task_task: volumes, start_after= Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_task_task: images, start_after= Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_task_task: backups, start_after= Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TaskHandler: starting Feb 23 04:51:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626465.hlpkwo/trash_purge_schedule"} v 0) Feb 23 04:51:05 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626465.hlpkwo/trash_purge_schedule"} : dispatch Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:51:05 localhost ceph-mgr[285904]: 
[rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting Feb 23 04:51:05 localhost ceph-mgr[285904]: [rbd_support INFO root] setup complete Feb 23 04:51:05 localhost sshd[304009]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:51:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:51:05 localhost systemd-logind[759]: New session 70 of user ceph-admin. Feb 23 04:51:05 localhost systemd[1]: Started Session 70 of User ceph-admin. Feb 23 04:51:05 localhost ceph-mon[296755]: from='mgr.26638 172.18.0.108:0/2769928129' entity='mgr.np0005626466.nisqfq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:51:05 localhost ceph-mon[296755]: from='mgr.26638 ' entity='mgr.np0005626466.nisqfq' Feb 23 04:51:05 localhost ceph-mon[296755]: from='client.? 172.18.0.200:0/2649255566' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 23 04:51:05 localhost ceph-mon[296755]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 23 04:51:05 localhost ceph-mon[296755]: Activating manager daemon np0005626465.hlpkwo Feb 23 04:51:05 localhost ceph-mon[296755]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 23 04:51:05 localhost ceph-mon[296755]: Manager daemon np0005626465.hlpkwo is now available Feb 23 04:51:05 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626465.hlpkwo/mirror_snapshot_schedule"} : dispatch Feb 23 04:51:05 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626465.hlpkwo/mirror_snapshot_schedule"} : dispatch Feb 23 04:51:05 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626465.hlpkwo/trash_purge_schedule"} : dispatch Feb 23 04:51:05 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005626465.hlpkwo/trash_purge_schedule"} : dispatch Feb 23 04:51:06 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:06 localhost podman[304123]: 2026-02-23 09:51:06.388998159 +0000 UTC m=+0.096906210 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , distribution-scope=public, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=rhceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=) Feb 23 04:51:06 localhost podman[304123]: 2026-02-23 09:51:06.492793516 +0000 UTC m=+0.200701567 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, ceph=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, name=rhceph, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.tags=rhceph 
ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 04:51:06 localhost ceph-mgr[285904]: [cephadm INFO cherrypy.error] [23/Feb/2026:09:51:06] ENGINE Bus STARTING Feb 23 04:51:06 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : [23/Feb/2026:09:51:06] ENGINE Bus STARTING Feb 23 04:51:06 localhost ceph-mgr[285904]: [cephadm INFO cherrypy.error] [23/Feb/2026:09:51:06] ENGINE Serving on http://172.18.0.107:8765 Feb 23 04:51:06 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : [23/Feb/2026:09:51:06] ENGINE Serving on http://172.18.0.107:8765 Feb 23 04:51:06 localhost ceph-mgr[285904]: [cephadm INFO cherrypy.error] [23/Feb/2026:09:51:06] ENGINE Serving on https://172.18.0.107:7150 Feb 23 04:51:06 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : [23/Feb/2026:09:51:06] ENGINE Serving on https://172.18.0.107:7150 Feb 23 04:51:06 localhost ceph-mgr[285904]: [cephadm INFO cherrypy.error] [23/Feb/2026:09:51:06] ENGINE Bus STARTED Feb 23 04:51:06 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : [23/Feb/2026:09:51:06] ENGINE Bus STARTED Feb 23 04:51:06 localhost ceph-mgr[285904]: [cephadm INFO cherrypy.error] [23/Feb/2026:09:51:06] ENGINE Client ('172.18.0.107', 34908) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 23 04:51:06 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : [23/Feb/2026:09:51:06] ENGINE Client ('172.18.0.107', 34908) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 23 04:51:06 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:07 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0)
Feb 23 04:51:07 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0)
Feb 23 04:51:07 localhost ceph-mgr[285904]: [devicehealth INFO root] Check health
Feb 23 04:51:07 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0)
Feb 23 04:51:07 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0)
Feb 23 04:51:07 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0)
Feb 23 04:51:07 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0)
Feb 23 04:51:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.
Feb 23 04:51:08 localhost podman[304376]: 2026-02-23 09:51:08.176203524 +0000 UTC m=+0.080112549 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 23 04:51:08 localhost podman[304376]: 2026-02-23 09:51:08.186952171 +0000 UTC m=+0.090861216 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 23 04:51:08 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully.
Feb 23 04:51:08 localhost ceph-mon[296755]: [23/Feb/2026:09:51:06] ENGINE Bus STARTING
Feb 23 04:51:08 localhost ceph-mon[296755]: [23/Feb/2026:09:51:06] ENGINE Serving on http://172.18.0.107:8765
Feb 23 04:51:08 localhost ceph-mon[296755]: [23/Feb/2026:09:51:06] ENGINE Serving on https://172.18.0.107:7150
Feb 23 04:51:08 localhost ceph-mon[296755]: [23/Feb/2026:09:51:06] ENGINE Bus STARTED
Feb 23 04:51:08 localhost ceph-mon[296755]: [23/Feb/2026:09:51:06] ENGINE Client ('172.18.0.107', 34908) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 23 04:51:08 localhost ceph-mon[296755]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm)
Feb 23 04:51:08 localhost ceph-mon[296755]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm)
Feb 23 04:51:08 localhost ceph-mon[296755]: Cluster is now healthy
Feb 23 04:51:08 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:08 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:08 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:08 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:08 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:08 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0)
Feb 23 04:51:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0)
Feb 23 04:51:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Feb 23 04:51:08 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Feb 23 04:51:08 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:08 localhost ceph-mgr[285904]: [cephadm INFO root] Adjusting osd_memory_target on np0005626465.localdomain to 836.6M
Feb 23 04:51:08 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005626465.localdomain to 836.6M
Feb 23 04:51:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb 23 04:51:08 localhost ceph-mgr[285904]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005626465.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 23 04:51:08 localhost ceph-mgr[285904]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005626465.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 23 04:51:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0)
Feb 23 04:51:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0)
Feb 23 04:51:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Feb 23 04:51:08 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Feb 23 04:51:08 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:08 localhost ceph-mgr[285904]: [cephadm INFO root] Adjusting osd_memory_target on np0005626463.localdomain to 836.6M
Feb 23 04:51:08 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005626463.localdomain to 836.6M
Feb 23 04:51:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb 23 04:51:08 localhost ceph-mgr[285904]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005626463.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 23 04:51:08 localhost ceph-mgr[285904]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005626463.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 23 04:51:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0)
Feb 23 04:51:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0)
Feb 23 04:51:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:51:09 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Feb 23 04:51:09 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:09 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Feb 23 04:51:09 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:09 localhost ceph-mgr[285904]: [cephadm INFO root] Adjusting osd_memory_target on np0005626466.localdomain to 836.6M
Feb 23 04:51:09 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005626466.localdomain to 836.6M
Feb 23 04:51:09 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb 23 04:51:09 localhost ceph-mgr[285904]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005626466.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Feb 23 04:51:09 localhost ceph-mgr[285904]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005626466.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Feb 23 04:51:09 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:51:09 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:51:09 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 23 04:51:09 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:51:09 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626463.localdomain:/etc/ceph/ceph.conf
Feb 23 04:51:09 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626463.localdomain:/etc/ceph/ceph.conf
Feb 23 04:51:09 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626465.localdomain:/etc/ceph/ceph.conf
Feb 23 04:51:09 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626466.localdomain:/etc/ceph/ceph.conf
Feb 23 04:51:09 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626465.localdomain:/etc/ceph/ceph.conf
Feb 23 04:51:09 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626466.localdomain:/etc/ceph/ceph.conf
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:09 localhost ceph-mon[296755]: Adjusting osd_memory_target on np0005626465.localdomain to 836.6M
Feb 23 04:51:09 localhost ceph-mon[296755]: Unable to set osd_memory_target on np0005626465.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:09 localhost ceph-mon[296755]: Adjusting osd_memory_target on np0005626463.localdomain to 836.6M
Feb 23 04:51:09 localhost ceph-mon[296755]: Unable to set osd_memory_target on np0005626463.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 23 04:51:09 localhost ceph-mon[296755]: Adjusting osd_memory_target on np0005626466.localdomain to 836.6M
Feb 23 04:51:09 localhost ceph-mon[296755]: Unable to set osd_memory_target on np0005626466.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Feb 23 04:51:09 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:51:09 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/etc/ceph/ceph.conf
Feb 23 04:51:09 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/etc/ceph/ceph.conf
Feb 23 04:51:09 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/etc/ceph/ceph.conf
Feb 23 04:51:09 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:51:09 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:51:09 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:51:09 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:51:09 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:51:09 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:51:09 localhost ceph-mgr[285904]: mgr.server handle_open ignoring open from mgr.np0005626466.nisqfq 172.18.0.108:0/2913197188; not ready for session (expect reconnect)
Feb 23 04:51:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 23 04:51:10 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626466.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 23 04:51:10 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626466.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 23 04:51:10 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626463.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 23 04:51:10 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626463.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 23 04:51:10 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626465.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 23 04:51:10 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626465.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 23 04:51:10 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:51:10 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:51:10 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.conf
Feb 23 04:51:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005626466.nisqfq", "id": "np0005626466.nisqfq"} v 0)
Feb 23 04:51:10 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "mgr metadata", "who": "np0005626466.nisqfq", "id": "np0005626466.nisqfq"} : dispatch
Feb 23 04:51:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:51:11 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring
Feb 23 04:51:11 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring
Feb 23 04:51:11 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring
Feb 23 04:51:11 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring
Feb 23 04:51:11 localhost ceph-mgr[285904]: [cephadm INFO cephadm.serve] Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring
Feb 23 04:51:11 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring
Feb 23 04:51:11 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 23 04:51:11 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 23 04:51:11 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 23 04:51:11 localhost ceph-mon[296755]: Updating np0005626465.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring
Feb 23 04:51:11 localhost ceph-mon[296755]: Updating np0005626463.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring
Feb 23 04:51:11 localhost ceph-mon[296755]: Updating np0005626466.localdomain:/var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/config/ceph.client.admin.keyring
Feb 23 04:51:11 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0)
Feb 23 04:51:11 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0)
Feb 23 04:51:11 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0)
Feb 23 04:51:11 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0)
Feb 23 04:51:11 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0)
Feb 23 04:51:11 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0)
Feb 23 04:51:11 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 23 04:51:11 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev 3116dd56-6a52-4032-8ffc-6ffaa9a13f5b (Updating node-proxy deployment (+3 -> 3))
Feb 23 04:51:11 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev 3116dd56-6a52-4032-8ffc-6ffaa9a13f5b (Updating node-proxy deployment (+3 -> 3))
Feb 23 04:51:11 localhost ceph-mgr[285904]: [progress INFO root] Completed event 3116dd56-6a52-4032-8ffc-6ffaa9a13f5b (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Feb 23 04:51:11 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 23 04:51:11 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 23 04:51:12 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:51:12 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:51:12 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 23 04:51:12 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:51:12 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 23 04:51:12 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev 1e570cd6-34da-4d29-b706-56aa712c9c35 (Updating node-proxy deployment (+3 -> 3))
Feb 23 04:51:12 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev 1e570cd6-34da-4d29-b706-56aa712c9c35 (Updating node-proxy deployment (+3 -> 3))
Feb 23 04:51:12 localhost ceph-mgr[285904]: [progress INFO root] Completed event 1e570cd6-34da-4d29-b706-56aa712c9c35 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Feb 23 04:51:12 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 23 04:51:12 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 23 04:51:12 localhost podman[241086]: time="2026-02-23T09:51:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 23 04:51:12 localhost podman[241086]: @ - - [23/Feb/2026:09:51:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1"
Feb 23 04:51:12 localhost podman[241086]: @ - - [23/Feb/2026:09:51:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17791 "" "Go-http-client/1.1"
Feb 23 04:51:12 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:12 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:12 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:12 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:12 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:12 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:12 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:12 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:51:12 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:12 localhost nova_compute[280321]: 2026-02-23 09:51:12.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 04:51:12 localhost nova_compute[280321]: 2026-02-23 09:51:12.915 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 23 04:51:12 localhost nova_compute[280321]: 2026-02-23 09:51:12.915 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 23 04:51:12 localhost nova_compute[280321]: 2026-02-23 09:51:12.916 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 23 04:51:12 localhost nova_compute[280321]: 2026-02-23 09:51:12.916 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 23 04:51:12 localhost nova_compute[280321]: 2026-02-23 09:51:12.916 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 23 04:51:12 localhost sshd[305095]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:51:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 0 B/s wr, 16 op/s
Feb 23 04:51:13 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 23 04:51:13 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/476270285' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 23 04:51:13 localhost nova_compute[280321]: 2026-02-23 09:51:13.328 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 23 04:51:13 localhost nova_compute[280321]: 2026-02-23 09:51:13.472 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 23 04:51:13 localhost nova_compute[280321]: 2026-02-23 09:51:13.473 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=12370MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 23 04:51:13 localhost nova_compute[280321]: 2026-02-23 09:51:13.473 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 23 04:51:13 localhost nova_compute[280321]: 2026-02-23 09:51:13.474 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 23 04:51:13 localhost nova_compute[280321]: 2026-02-23 09:51:13.532 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 23 04:51:13 localhost nova_compute[280321]: 2026-02-23 09:51:13.532 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 23 04:51:13 localhost nova_compute[280321]: 2026-02-23 09:51:13.547 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 23 04:51:13 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 23 04:51:13 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3609414126' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 23 04:51:13 localhost nova_compute[280321]: 2026-02-23 09:51:13.982 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 23 04:51:13 localhost nova_compute[280321]: 2026-02-23 09:51:13.988 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 23 04:51:14 localhost nova_compute[280321]: 2026-02-23 09:51:14.016 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 23 04:51:14 localhost nova_compute[280321]: 2026-02-23 09:51:14.018 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 23 04:51:14 localhost nova_compute[280321]: 2026-02-23 09:51:14.018 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 23 04:51:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Feb 23 04:51:15 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events
Feb 23 04:51:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 23 04:51:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 23 04:51:16 localhost nova_compute[280321]: 2026-02-23 09:51:16.020 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 04:51:16 localhost nova_compute[280321]: 2026-02-23 09:51:16.044 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 04:51:16 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:51:16 localhost nova_compute[280321]: 2026-02-23 09:51:16.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:51:16 localhost nova_compute[280321]: 2026-02-23 09:51:16.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:51:16 localhost nova_compute[280321]: 2026-02-23 09:51:16.893 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:51:16 localhost nova_compute[280321]: 2026-02-23 09:51:16.909 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:51:16 localhost nova_compute[280321]: 2026-02-23 09:51:16.909 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:51:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Feb 23 04:51:17 localhost nova_compute[280321]: 2026-02-23 09:51:17.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:51:17 localhost nova_compute[280321]: 2026-02-23 09:51:17.893 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 
- - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:51:18 localhost nova_compute[280321]: 2026-02-23 09:51:18.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:51:18 localhost nova_compute[280321]: 2026-02-23 09:51:18.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:51:18 localhost nova_compute[280321]: 2026-02-23 09:51:18.893 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:51:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Feb 23 04:51:19 localhost nova_compute[280321]: 2026-02-23 09:51:19.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:51:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:51:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Feb 23 04:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:51:21 localhost systemd[1]: tmp-crun.SVoQf0.mount: Deactivated successfully. 
Feb 23 04:51:21 localhost podman[305149]: 2026-02-23 09:51:21.27423837 +0000 UTC m=+0.096385873 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:51:21 localhost podman[305149]: 2026-02-23 09:51:21.283742458 +0000 UTC m=+0.105889991 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': 
['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 04:51:21 localhost podman[305140]: 2026-02-23 09:51:21.243456497 +0000 UTC m=+0.100531910 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 
'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.7) Feb 23 04:51:21 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:51:21 localhost podman[305140]: 2026-02-23 09:51:21.32699732 +0000 UTC m=+0.184072723 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, release=1770267347, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, version=9.7, vcs-type=git, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal) Feb 23 04:51:21 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:51:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Feb 23 04:51:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:51:25 localhost sshd[305185]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:51:26 localhost podman[305187]: 2026-02-23 09:51:26.415683074 +0000 UTC m=+0.088968048 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true) Feb 23 04:51:26 localhost podman[305187]: 2026-02-23 09:51:26.454229893 +0000 UTC m=+0.127514867 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, 
org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 23 04:51:26 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:51:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0. Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:27.282255) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22 Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840287282301, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2700, "num_deletes": 256, "total_data_size": 9936816, "memory_usage": 10419904, "flush_reason": "Manual Compaction"} Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840287304595, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 6126102, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14926, "largest_seqno": 17621, "table_properties": {"data_size": 6115243, "index_size": 6788, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 26255, 
"raw_average_key_size": 22, "raw_value_size": 6092245, "raw_average_value_size": 5145, "num_data_blocks": 292, "num_entries": 1184, "num_filter_entries": 1184, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840180, "oldest_key_time": 1771840180, "file_creation_time": 1771840287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}} Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 22411 microseconds, and 10751 cpu microseconds. Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:27.304660) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 6126102 bytes OK Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:27.304692) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:27.306682) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:27.306704) EVENT_LOG_v1 {"time_micros": 1771840287306698, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:27.306729) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 9923984, prev total WAL file size 9923984, number of live WAL files 2. Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:27.308357) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131323935' seq:72057594037927935, type:22 .. 
'7061786F73003131353437' seq:0, type:0; will stop at (end) Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(5982KB)], [21(15MB)] Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840287308454, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 22456889, "oldest_snapshot_seqno": -1} Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 12283 keys, 19493146 bytes, temperature: kUnknown Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840287398630, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 19493146, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19420106, "index_size": 41259, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30725, "raw_key_size": 327917, "raw_average_key_size": 26, "raw_value_size": 19207758, "raw_average_value_size": 1563, "num_data_blocks": 1588, "num_entries": 12283, "num_filter_entries": 12283, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; 
strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771840287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:27.398979) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 19493146 bytes Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:27.400532) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 248.7 rd, 215.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.8, 15.6 +0.0 blob) out(18.6 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 12823, records dropped: 540 output_compression: NoCompression Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:27.400554) EVENT_LOG_v1 {"time_micros": 1771840287400545, "job": 10, "event": "compaction_finished", "compaction_time_micros": 90285, "compaction_time_cpu_micros": 51836, "output_level": 6, "num_output_files": 1, "total_output_size": 19493146, "num_input_records": 12823, "num_output_records": 12283, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005626465/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840287401162, "job": 10, "event": "table_file_deletion", "file_number": 23} Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840287402705, "job": 10, "event": "table_file_deletion", "file_number": 21} Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:27.308238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:27.402830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:27.402838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:27.402841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:27.402844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:51:27 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:27.402847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:51:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:29 localhost sshd[305213]: 
main: sshd: ssh-rsa algorithm is disabled Feb 23 04:51:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:51:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:31 localhost openstack_network_exporter[243519]: ERROR 09:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:51:31 localhost openstack_network_exporter[243519]: Feb 23 04:51:31 localhost openstack_network_exporter[243519]: ERROR 09:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:51:31 localhost openstack_network_exporter[243519]: Feb 23 04:51:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:33 localhost sshd[305215]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:51:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:51:35 localhost systemd[1]: tmp-crun.F1pDR7.mount: Deactivated successfully. 
Feb 23 04:51:35 localhost podman[305217]: 2026-02-23 09:51:35.009544342 +0000 UTC m=+0.084529494 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:51:35 localhost 
ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:51:35 localhost podman[305217]: 2026-02-23 09:51:35.046931825 +0000 UTC m=+0.121916977 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, 
managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent) Feb 23 04:51:35 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:51:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:51:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:51:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:51:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:51:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:51:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:51:35 localhost podman[305234]: 2026-02-23 09:51:35.113127803 +0000 UTC m=+0.084432171 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 23 04:51:35 localhost podman[305234]: 2026-02-23 09:51:35.127938592 +0000 UTC m=+0.099242970 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': 
'/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 04:51:35 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:51:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. 
Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:36.248486) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840296248528, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 333, "num_deletes": 251, "total_data_size": 133633, "memory_usage": 140208, "flush_reason": "Manual Compaction"} Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840296251819, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 86555, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17627, "largest_seqno": 17954, "table_properties": {"data_size": 84483, "index_size": 247, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5806, "raw_average_key_size": 20, "raw_value_size": 80359, "raw_average_value_size": 280, "num_data_blocks": 11, "num_entries": 286, "num_filter_entries": 286, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840288, "oldest_key_time": 1771840288, "file_creation_time": 1771840296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 3380 microseconds, and 1225 cpu microseconds. Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:36.251866) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 86555 bytes OK Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:36.251891) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:36.254093) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:36.254117) EVENT_LOG_v1 {"time_micros": 1771840296254110, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:36.254137) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 131315, prev total WAL file size 131639, number of 
live WAL files 2. Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:36.255926) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373532' seq:72057594037927935, type:22 .. '6D6772737461740034303034' seq:0, type:0; will stop at (end) Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(84KB)], [24(18MB)] Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840296255969, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 19579701, "oldest_snapshot_seqno": -1} Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 12054 keys, 17350629 bytes, temperature: kUnknown Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840296339425, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 17350629, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17283944, "index_size": 35480, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30149, "raw_key_size": 323351, "raw_average_key_size": 26, "raw_value_size": 17080379, 
"raw_average_value_size": 1416, "num_data_blocks": 1346, "num_entries": 12054, "num_filter_entries": 12054, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771840296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:36.339951) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 17350629 bytes Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:36.342161) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 233.8 rd, 207.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 18.6 +0.0 blob) out(16.5 +0.0 blob), read-write-amplify(426.7) write-amplify(200.5) OK, records in: 12569, records dropped: 515 output_compression: NoCompression Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:36.342194) EVENT_LOG_v1 {"time_micros": 1771840296342180, "job": 12, "event": "compaction_finished", "compaction_time_micros": 83757, "compaction_time_cpu_micros": 47556, "output_level": 6, "num_output_files": 1, "total_output_size": 17350629, "num_input_records": 12569, "num_output_records": 12054, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840296342347, "job": 12, "event": "table_file_deletion", "file_number": 26} Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840296344827, 
"job": 12, "event": "table_file_deletion", "file_number": 24} Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:36.255854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:36.344932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:36.344939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:36.344942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:36.344945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:51:36 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:51:36.344948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:51:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 04:51:39 localhost podman[305251]: 2026-02-23 09:51:38.999759564 +0000 UTC m=+0.074021605 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 04:51:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:39 localhost podman[305251]: 2026-02-23 09:51:39.0365376 +0000 UTC m=+0.110799631 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': 
['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:51:39 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:51:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:51:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:42 localhost podman[241086]: time="2026-02-23T09:51:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:51:42 localhost podman[241086]: @ - - [23/Feb/2026:09:51:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 04:51:42 localhost podman[241086]: @ - - [23/Feb/2026:09:51:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17792 "" "Go-http-client/1.1" Feb 23 04:51:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:51:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 
105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:51:48.310 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:51:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:51:48.310 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:51:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:51:48.310 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:51:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:51:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:51:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:51:52 localhost podman[305274]: 2026-02-23 09:51:52.004603165 +0000 UTC m=+0.078996028 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:51:52 localhost podman[305274]: 2026-02-23 09:51:52.016745763 +0000 UTC m=+0.091138586 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': 
['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 04:51:52 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:51:52 localhost podman[305275]: 2026-02-23 09:51:52.14331159 +0000 UTC m=+0.215122164 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, release=1770267347) Feb 23 04:51:52 localhost podman[305275]: 2026-02-23 09:51:52.15978588 +0000 UTC m=+0.231596444 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-type=git, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, release=1770267347, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container) Feb 23 04:51:52 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:51:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:51:55 localhost sshd[305318]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:51:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:51:57 localhost podman[305320]: 2026-02-23 09:51:56.999982149 +0000 UTC m=+0.078681627 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:51:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:51:57 localhost podman[305320]: 2026-02-23 09:51:57.060820904 +0000 UTC m=+0.139520332 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, 
name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260216) Feb 23 04:51:57 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:51:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:52:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:01 localhost openstack_network_exporter[243519]: ERROR 09:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:52:01 localhost openstack_network_exporter[243519]: Feb 23 04:52:01 localhost openstack_network_exporter[243519]: ERROR 09:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:52:01 localhost openstack_network_exporter[243519]: Feb 23 04:52:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:52:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1652933032' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:52:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:52:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1652933032' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:52:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_09:52:05 Feb 23 04:52:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 04:52:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap Feb 23 04:52:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['backups', 'volumes', 'manila_data', 'manila_metadata', '.mgr', 'vms', 'images'] Feb 23 04:52:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes Feb 23 04:52:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 04:52:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:52:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 23 04:52:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:52:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32) Feb 23 04:52:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:52:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:52:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:52:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] 
Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32) Feb 23 04:52:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:52:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:52:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:52:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:52:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:52:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019596681323283084 quantized to 16 (current 16) Feb 23 04:52:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:52:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:52:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:52:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:52:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 04:52:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:52:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 04:52:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:52:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:52:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:52:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:52:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 04:52:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:52:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:52:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:52:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:52:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:52:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:52:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 04:52:06 localhost podman[305345]: 2026-02-23 09:52:06.005481169 +0000 UTC m=+0.079271146 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:52:06 localhost 
podman[305345]: 2026-02-23 09:52:06.015855593 +0000 UTC m=+0.089645580 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0) Feb 23 04:52:06 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:52:06 localhost podman[305346]: 2026-02-23 09:52:06.066655093 +0000 UTC m=+0.135702726 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:52:06 localhost podman[305346]: 2026-02-23 09:52:06.078980388 +0000 UTC m=+0.148028061 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS) Feb 23 04:52:06 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:52:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:52:10 localhost podman[305382]: 2026-02-23 09:52:10.009675235 +0000 UTC m=+0.084018099 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:52:10 localhost podman[305382]: 2026-02-23 09:52:10.024170795 +0000 UTC m=+0.098513649 container exec_died 
7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:52:10 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. 
Feb 23 04:52:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:52:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:12 localhost podman[241086]: time="2026-02-23T09:52:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:52:12 localhost podman[241086]: @ - - [23/Feb/2026:09:52:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 04:52:12 localhost podman[241086]: @ - - [23/Feb/2026:09:52:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17792 "" "Go-http-client/1.1" Feb 23 04:52:12 localhost nova_compute[280321]: 2026-02-23 09:52:12.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:52:12 localhost nova_compute[280321]: 2026-02-23 09:52:12.908 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:52:12 localhost nova_compute[280321]: 2026-02-23 09:52:12.908 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 
04:52:12 localhost nova_compute[280321]: 2026-02-23 09:52:12.908 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:52:12 localhost nova_compute[280321]: 2026-02-23 09:52:12.908 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:52:12 localhost nova_compute[280321]: 2026-02-23 09:52:12.909 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:52:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:13 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:52:13 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/4174244319' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:52:13 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:52:13 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:52:13 localhost nova_compute[280321]: 2026-02-23 09:52:13.361 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:52:13 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 04:52:13 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:52:13 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 04:52:13 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev ea3ead62-9dbe-4f99-8964-91be8ccf1fbf (Updating node-proxy deployment (+3 -> 3)) Feb 23 04:52:13 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev ea3ead62-9dbe-4f99-8964-91be8ccf1fbf (Updating node-proxy deployment (+3 -> 3)) Feb 23 04:52:13 localhost ceph-mgr[285904]: [progress INFO root] Completed event ea3ead62-9dbe-4f99-8964-91be8ccf1fbf (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 23 04:52:13 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": 
["destroyed"], "format": "json"} v 0) Feb 23 04:52:13 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 04:52:13 localhost nova_compute[280321]: 2026-02-23 09:52:13.567 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:52:13 localhost nova_compute[280321]: 2026-02-23 09:52:13.569 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=12356MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", 
"vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:52:13 localhost nova_compute[280321]: 2026-02-23 09:52:13.570 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:52:13 localhost nova_compute[280321]: 2026-02-23 09:52:13.571 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:52:13 localhost nova_compute[280321]: 2026-02-23 09:52:13.632 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:52:13 
localhost nova_compute[280321]: 2026-02-23 09:52:13.632 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:52:13 localhost nova_compute[280321]: 2026-02-23 09:52:13.651 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:52:14 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:52:14 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/807576527' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:52:14 localhost nova_compute[280321]: 2026-02-23 09:52:14.082 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:52:14 localhost nova_compute[280321]: 2026-02-23 09:52:14.089 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:52:14 localhost nova_compute[280321]: 2026-02-23 09:52:14.115 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has 
not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:52:14 localhost nova_compute[280321]: 2026-02-23 09:52:14.117 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:52:14 localhost nova_compute[280321]: 2026-02-23 09:52:14.118 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:52:14 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:52:14 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:52:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:15 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 04:52:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 
04:52:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:52:16 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:52:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:17 localhost nova_compute[280321]: 2026-02-23 09:52:17.119 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:52:17 localhost nova_compute[280321]: 2026-02-23 09:52:17.120 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:52:17 localhost nova_compute[280321]: 2026-02-23 09:52:17.120 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:52:17 localhost nova_compute[280321]: 2026-02-23 09:52:17.147 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:52:17 localhost nova_compute[280321]: 2026-02-23 09:52:17.147 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:52:17 localhost nova_compute[280321]: 2026-02-23 09:52:17.147 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:52:18 localhost nova_compute[280321]: 2026-02-23 09:52:18.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:52:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:19 localhost nova_compute[280321]: 2026-02-23 09:52:19.888 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:52:19 localhost nova_compute[280321]: 2026-02-23 09:52:19.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:52:19 localhost nova_compute[280321]: 2026-02-23 09:52:19.892 280325 DEBUG 
oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:52:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:52:20 localhost nova_compute[280321]: 2026-02-23 09:52:20.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:52:20 localhost nova_compute[280321]: 2026-02-23 09:52:20.893 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:52:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:52:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:52:23 localhost podman[305535]: 2026-02-23 09:52:23.02095953 +0000 UTC m=+0.091352041 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, version=9.7, config_id=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, name=ubi9/ubi-minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter) Feb 23 04:52:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:23 localhost podman[305534]: 2026-02-23 09:52:23.070954696 +0000 UTC m=+0.140676227 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 04:52:23 localhost podman[305535]: 2026-02-23 09:52:23.09021696 +0000 UTC m=+0.160609491 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.7, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 
'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1770267347, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64) Feb 23 04:52:23 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:52:23 localhost podman[305534]: 2026-02-23 09:52:23.109927838 +0000 UTC m=+0.179649389 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:52:23 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:52:23 localhost sshd[305578]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:52:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:52:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:52:27 localhost systemd[1]: tmp-crun.GVAXH2.mount: Deactivated successfully. Feb 23 04:52:28 localhost podman[305580]: 2026-02-23 09:52:27.999733861 +0000 UTC m=+0.078728589 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:52:28 localhost podman[305580]: 2026-02-23 09:52:28.064957348 +0000 UTC m=+0.143952046 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:52:28 
localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:52:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:30 localhost ovn_metadata_agent[161837]: 2026-02-23 09:52:30.300 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:52:30 localhost ovn_metadata_agent[161837]: 2026-02-23 09:52:30.302 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 04:52:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:52:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:31 localhost openstack_network_exporter[243519]: ERROR 09:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:52:31 localhost openstack_network_exporter[243519]: Feb 23 04:52:31 localhost openstack_network_exporter[243519]: ERROR 09:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:52:31 
localhost openstack_network_exporter[243519]: Feb 23 04:52:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:52:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:52:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:52:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Feb 23 04:52:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Feb 23 04:52:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:52:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Feb 23 04:52:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Feb 23 04:52:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e87 e87: 6 total, 6 up, 6 in Feb 23 04:52:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:52:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:52:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 04:52:37 localhost podman[305605]: 2026-02-23 09:52:37.013463291 +0000 UTC m=+0.083329077 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS) Feb 23 04:52:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v50: 177 pgs: 177 active+clean; 133 MiB data, 653 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 2.8 MiB/s wr, 29 op/s Feb 23 04:52:37 localhost systemd[1]: tmp-crun.PeaG1x.mount: Deactivated successfully. Feb 23 04:52:37 localhost podman[305604]: 2026-02-23 09:52:37.061045504 +0000 UTC m=+0.134627884 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 23 04:52:37 localhost podman[305605]: 2026-02-23 09:52:37.077542584 +0000 UTC m=+0.147408340 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:52:37 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:52:37 localhost podman[305604]: 2026-02-23 09:52:37.097831079 +0000 UTC m=+0.171413449 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:52:37 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:52:37 localhost sshd[305641]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:52:37 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e88 e88: 6 total, 6 up, 6 in Feb 23 04:52:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v52: 177 pgs: 177 active+clean; 133 MiB data, 653 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 3.6 MiB/s wr, 36 op/s Feb 23 04:52:40 localhost ovn_metadata_agent[161837]: 2026-02-23 09:52:40.305 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:52:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:52:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 04:52:41 localhost podman[305644]: 2026-02-23 09:52:41.002660253 +0000 UTC m=+0.080957056 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:52:41 localhost podman[305644]: 2026-02-23 09:52:41.014479102 +0000 UTC m=+0.092775905 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:52:41 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:52:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v53: 177 pgs: 177 active+clean; 145 MiB data, 694 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 48 op/s Feb 23 04:52:42 localhost podman[241086]: time="2026-02-23T09:52:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:52:42 localhost podman[241086]: @ - - [23/Feb/2026:09:52:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 04:52:42 localhost podman[241086]: @ - - [23/Feb/2026:09:52:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17795 "" "Go-http-client/1.1" Feb 23 04:52:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v54: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 48 op/s Feb 23 04:52:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v55: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 4.1 MiB/s wr, 38 op/s Feb 23 04:52:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:52:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v56: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail; 6.2 KiB/s rd, 1.2 MiB/s wr, 8 op/s Feb 23 04:52:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:52:48.311 161842 DEBUG oslo_concurrency.lockutils [-] 
Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:52:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:52:48.312 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:52:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:52:48.312 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:52:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v57: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail; 5.3 KiB/s rd, 1.1 MiB/s wr, 7 op/s Feb 23 04:52:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:52:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v58: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail; 5.2 KiB/s rd, 1.0 MiB/s wr, 7 op/s Feb 23 04:52:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v59: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:52:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:52:53 localhost podman[305668]: 2026-02-23 09:52:53.988677153 +0000 UTC m=+0.063140356 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:52:53 localhost podman[305668]: 2026-02-23 09:52:53.996504631 +0000 UTC m=+0.070967824 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, 
config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:52:54 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:52:54 localhost podman[305669]: 2026-02-23 09:52:54.05618974 +0000 UTC m=+0.125200488 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, vendor=Red Hat, Inc., version=9.7, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1770267347, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, vcs-type=git) Feb 23 04:52:54 localhost podman[305669]: 2026-02-23 09:52:54.092835942 +0000 UTC m=+0.161846660 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., config_id=openstack_network_exporter) Feb 23 04:52:54 localhost systemd[1]: 
d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:52:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v60: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no 
resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:52:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:52:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v61: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:52:59 localhost podman[305710]: 2026-02-23 09:52:59.014677496 +0000 UTC m=+0.080603996 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 04:52:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v62: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 23 04:52:59 localhost podman[305710]: 2026-02-23 09:52:59.074911642 +0000 UTC m=+0.140838142 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, 
name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260216, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 23 04:52:59 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully.
Feb 23 04:53:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 23 04:53:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v63: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:53:01 localhost openstack_network_exporter[243519]: ERROR 09:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 23 04:53:01 localhost openstack_network_exporter[243519]:
Feb 23 04:53:01 localhost openstack_network_exporter[243519]: ERROR 09:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 23 04:53:01 localhost openstack_network_exporter[243519]:
Feb 23 04:53:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v64: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:53:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 23 04:53:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2214862826' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 23 04:53:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 23 04:53:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2214862826' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 23 04:53:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_09:53:05
Feb 23 04:53:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 23 04:53:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap
Feb 23 04:53:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['backups', 'images', '.mgr', 'manila_metadata', 'manila_data', 'vms', 'volumes']
Feb 23 04:53:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes
Feb 23 04:53:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v65: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:53:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections..
Feb 23 04:53:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: []
Feb 23 04:53:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust
Feb 23 04:53:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 04:53:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Feb 23 04:53:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 04:53:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32)
Feb 23 04:53:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 04:53:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 23 04:53:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 04:53:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Feb 23 04:53:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 04:53:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 23 04:53:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 04:53:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 23 04:53:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 04:53:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16)
Feb 23 04:53:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections..
Feb 23 04:53:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: []
Feb 23 04:53:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections..
Feb 23 04:53:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: []
Feb 23 04:53:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 23 04:53:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 23 04:53:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 23 04:53:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 23 04:53:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 23 04:53:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 23 04:53:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 23 04:53:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 23 04:53:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 23 04:53:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 23 04:53:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 23 04:53:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v66: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:53:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.
Feb 23 04:53:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.
Feb 23 04:53:07 localhost podman[305736]: 2026-02-23 09:53:07.774280489 +0000 UTC m=+0.091762624 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 23 04:53:07 localhost podman[305737]: 2026-02-23 09:53:07.831141963 +0000 UTC m=+0.144935107 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 04:53:07 localhost podman[305737]: 2026-02-23 09:53:07.845147027 +0000 UTC m=+0.158940231 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 23 04:53:07 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully.
Feb 23 04:53:07 localhost podman[305736]: 2026-02-23 09:53:07.85875763 +0000 UTC m=+0.176239815 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 23 04:53:07 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully.
Feb 23 04:53:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:08.536 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 23 04:53:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:08.539 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 23 04:53:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v67: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:53:09 localhost nova_compute[280321]: 2026-02-23 09:53:09.229 280325 DEBUG oslo_concurrency.processutils [None req-ae330fa7-4597-4831-9f8b-a2161aaf9d38 cd2e50eb3ab545e7a4a297466702b532 852a560db9b64ac19f6eceea5b2255b8 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 23 04:53:09 localhost nova_compute[280321]: 2026-02-23 09:53:09.255 280325 DEBUG oslo_concurrency.processutils [None req-ae330fa7-4597-4831-9f8b-a2161aaf9d38 cd2e50eb3ab545e7a4a297466702b532 852a560db9b64ac19f6eceea5b2255b8 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 23 04:53:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 23 04:53:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v68: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:53:11.321955) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840391322003, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1309, "num_deletes": 256, "total_data_size": 1933755, "memory_usage": 2012544, "flush_reason": "Manual Compaction"}
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840391329719, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1267672, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17959, "largest_seqno": 19263, "table_properties": {"data_size": 1262533, "index_size": 2610, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11142, "raw_average_key_size": 19, "raw_value_size": 1251999, "raw_average_value_size": 2188, "num_data_blocks": 116, "num_entries": 572, "num_filter_entries": 572, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840296, "oldest_key_time": 1771840296, "file_creation_time": 1771840391, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 7817 microseconds, and 4128 cpu microseconds.
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:53:11.329770) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1267672 bytes OK
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:53:11.329798) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:53:11.331419) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:53:11.331473) EVENT_LOG_v1 {"time_micros": 1771840391331467, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:53:11.331494) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1927492, prev total WAL file size 1927816, number of live WAL files 2.
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:53:11.332178) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373638' seq:72057594037927935, type:22 .. '6C6F676D0034303230' seq:0, type:0; will stop at (end)
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1237KB)], [27(16MB)]
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840391332217, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 18618301, "oldest_snapshot_seqno": -1}
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 12090 keys, 18506992 bytes, temperature: kUnknown
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840391394032, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 18506992, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18437983, "index_size": 37676, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30277, "raw_key_size": 325082, "raw_average_key_size": 26, "raw_value_size": 18231743, "raw_average_value_size": 1508, "num_data_blocks": 1437, "num_entries": 12090, "num_filter_entries": 12090, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771840391, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:53:11.394498) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 18506992 bytes
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:53:11.396224) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 300.5 rd, 298.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 16.5 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(29.3) write-amplify(14.6) OK, records in: 12626, records dropped: 536 output_compression: NoCompression
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:53:11.396290) EVENT_LOG_v1 {"time_micros": 1771840391396276, "job": 14, "event": "compaction_finished", "compaction_time_micros": 61957, "compaction_time_cpu_micros": 33203, "output_level": 6, "num_output_files": 1, "total_output_size": 18506992, "num_input_records": 12626, "num_output_records": 12090, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840391396905, "job": 14, "event": "table_file_deletion", "file_number": 29}
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840391400479, "job": 14, "event": "table_file_deletion", "file_number": 27}
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:53:11.332087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:53:11.400624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:53:11.400631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:53:11.400633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:53:11.400635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 04:53:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:53:11.400637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 04:53:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.
Feb 23 04:53:12 localhost systemd[1]: tmp-crun.QEpym1.mount: Deactivated successfully.
Feb 23 04:53:12 localhost podman[305773]: 2026-02-23 09:53:12.004778597 +0000 UTC m=+0.077010557 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 23 04:53:12 localhost podman[305773]: 2026-02-23 09:53:12.014844303 +0000 UTC m=+0.087076223 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Feb 23 04:53:12 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully.
Feb 23 04:53:12 localhost podman[241086]: time="2026-02-23T09:53:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 23 04:53:12 localhost sshd[305797]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:53:12 localhost podman[241086]: @ - - [23/Feb/2026:09:53:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1"
Feb 23 04:53:12 localhost podman[241086]: @ - - [23/Feb/2026:09:53:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17787 "" "Go-http-client/1.1"
Feb 23 04:53:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v69: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:53:14 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0)
Feb 23 04:53:14 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0)
Feb 23 04:53:14 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0)
Feb 23 04:53:14 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0)
Feb 23 04:53:14 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0)
Feb 23 04:53:14 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0)
Feb 23 04:53:14 localhost nova_compute[280321]: 2026-02-23 09:53:14.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 04:53:14 localhost nova_compute[280321]: 2026-02-23 09:53:14.916 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 23 04:53:14 localhost nova_compute[280321]: 2026-02-23 09:53:14.917 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 23 04:53:14 localhost nova_compute[280321]: 2026-02-23 09:53:14.917 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 23 04:53:14 localhost nova_compute[280321]: 2026-02-23 09:53:14.918 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 23 04:53:14 localhost nova_compute[280321]: 2026-02-23 09:53:14.919 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 23 04:53:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v70: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail
Feb 23 04:53:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 04:53:15 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 04:53:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 23 04:53:15 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:53:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 23 04:53:15 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev b7e88c22-b4ad-4012-a6c2-c3c9ae76aa8c (Updating node-proxy deployment (+3 -> 3))
Feb 23 04:53:15 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev b7e88c22-b4ad-4012-a6c2-c3c9ae76aa8c (Updating node-proxy deployment (+3 -> 3))
Feb 23 04:53:15 localhost ceph-mgr[285904]: [progress INFO root] Completed event b7e88c22-b4ad-4012-a6c2-c3c9ae76aa8c (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Feb 23 04:53:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 23 04:53:15 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 23 04:53:15 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:53:15 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:53:15 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:53:15 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:53:15 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:53:15 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:53:15 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 04:53:15 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 04:53:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 23 04:53:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 23 04:53:15 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.?
172.18.0.107:0/4261639450' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:53:15 localhost nova_compute[280321]: 2026-02-23 09:53:15.412 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:53:15 localhost nova_compute[280321]: 2026-02-23 09:53:15.621 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:53:15 localhost nova_compute[280321]: 2026-02-23 09:53:15.623 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=12346MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": 
"1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:53:15 localhost nova_compute[280321]: 2026-02-23 09:53:15.624 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:53:15 localhost nova_compute[280321]: 2026-02-23 09:53:15.625 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:53:15 localhost nova_compute[280321]: 2026-02-23 09:53:15.700 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - 
-] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:53:15 localhost nova_compute[280321]: 2026-02-23 09:53:15.701 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:53:15 localhost nova_compute[280321]: 2026-02-23 09:53:15.723 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:53:16 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:53:16 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2906954021' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:53:16 localhost nova_compute[280321]: 2026-02-23 09:53:16.191 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:53:16 localhost nova_compute[280321]: 2026-02-23 09:53:16.197 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:53:16 localhost nova_compute[280321]: 2026-02-23 09:53:16.233 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:53:16 localhost nova_compute[280321]: 2026-02-23 09:53:16.236 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:53:16 localhost nova_compute[280321]: 2026-02-23 09:53:16.236 280325 DEBUG 
oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:53:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:16.541 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:53:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v71: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 23 04:53:18 localhost sshd[305985]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:53:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v72: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 23 04:53:19 localhost nova_compute[280321]: 2026-02-23 09:53:19.238 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:53:19 localhost nova_compute[280321]: 2026-02-23 09:53:19.255 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:53:19 localhost nova_compute[280321]: 2026-02-23 09:53:19.255 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance 
info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:53:19 localhost nova_compute[280321]: 2026-02-23 09:53:19.255 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:53:19 localhost nova_compute[280321]: 2026-02-23 09:53:19.268 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:53:19 localhost nova_compute[280321]: 2026-02-23 09:53:19.268 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:53:19 localhost nova_compute[280321]: 2026-02-23 09:53:19.269 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:53:20 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 04:53:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 04:53:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:53:20 localhost nova_compute[280321]: 2026-02-23 09:53:20.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running 
periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:53:20 localhost nova_compute[280321]: 2026-02-23 09:53:20.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:53:20 localhost nova_compute[280321]: 2026-02-23 09:53:20.893 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:53:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v73: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 23 04:53:21 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:53:21 localhost nova_compute[280321]: 2026-02-23 09:53:21.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:53:21 localhost nova_compute[280321]: 2026-02-23 09:53:21.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:53:21 localhost nova_compute[280321]: 2026-02-23 09:53:21.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:53:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v74: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 23 04:53:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:53:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:53:25 localhost podman[305987]: 2026-02-23 09:53:25.013187837 +0000 UTC m=+0.084598257 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 04:53:25 localhost podman[305987]: 2026-02-23 09:53:25.028738658 +0000 UTC m=+0.100149088 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:53:25 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:53:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v75: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 23 04:53:25 localhost podman[305988]: 2026-02-23 09:53:25.11714754 +0000 UTC m=+0.188001222 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 
'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal) Feb 23 04:53:25 localhost podman[305988]: 2026-02-23 09:53:25.130751611 +0000 UTC m=+0.201605293 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, io.openshift.expose-services=, name=ubi9/ubi-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 04:53:25 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:53:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:53:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v76: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 23 04:53:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v77: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 23 04:53:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:53:30 localhost systemd[1]: tmp-crun.DXkmms.mount: Deactivated successfully. 
Feb 23 04:53:30 localhost podman[306028]: 2026-02-23 09:53:30.006336673 +0000 UTC m=+0.083374809 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.43.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 23 04:53:30 localhost podman[306028]: 2026-02-23 09:53:30.078675737 +0000 UTC m=+0.155713863 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260216, 
org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, container_name=ovn_controller) Feb 23 04:53:30 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:53:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:53:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v78: 177 pgs: 177 active+clean; 148 MiB data, 728 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 176 KiB/s wr, 11 op/s Feb 23 04:53:31 localhost openstack_network_exporter[243519]: ERROR 09:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:53:31 localhost openstack_network_exporter[243519]: Feb 23 04:53:31 localhost openstack_network_exporter[243519]: ERROR 09:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:53:31 localhost openstack_network_exporter[243519]: Feb 23 04:53:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v79: 177 pgs: 177 active+clean; 192 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s Feb 23 04:53:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v80: 177 pgs: 177 active+clean; 192 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s Feb 23 04:53:35 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:35.084 263679 INFO oslo.privsep.daemon [None req-3008ab2e-e5ca-4bef-8fba-bf493cdb76df - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmphlvbgnuq/privsep.sock']#033[00m Feb 23 04:53:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:53:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:53:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 04:53:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:53:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:53:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:53:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:53:35 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:35.747 263679 INFO oslo.privsep.daemon [None req-3008ab2e-e5ca-4bef-8fba-bf493cdb76df - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Feb 23 04:53:35 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:35.628 306057 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 23 04:53:35 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:35.633 306057 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 23 04:53:35 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:35.636 306057 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Feb 23 04:53:35 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:35.637 306057 INFO oslo.privsep.daemon [-] privsep daemon running as pid 306057#033[00m Feb 23 04:53:36 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:36.271 263679 INFO oslo.privsep.daemon [None req-3008ab2e-e5ca-4bef-8fba-bf493cdb76df - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpbns5cpdh/privsep.sock']#033[00m Feb 23 04:53:36 
localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:36.867 263679 INFO oslo.privsep.daemon [None req-3008ab2e-e5ca-4bef-8fba-bf493cdb76df - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Feb 23 04:53:36 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:36.767 306066 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 23 04:53:36 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:36.773 306066 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 23 04:53:36 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:36.776 306066 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Feb 23 04:53:36 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:36.776 306066 INFO oslo.privsep.daemon [-] privsep daemon running as pid 306066#033[00m Feb 23 04:53:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v81: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s Feb 23 04:53:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:37.744 263679 INFO oslo.privsep.daemon [None req-3008ab2e-e5ca-4bef-8fba-bf493cdb76df - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmptl2r1_jq/privsep.sock']#033[00m Feb 23 04:53:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:53:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 04:53:38 localhost podman[306078]: 2026-02-23 09:53:38.022683237 +0000 UTC m=+0.092944659 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:53:38 localhost podman[306078]: 2026-02-23 09:53:38.061891126 +0000 UTC m=+0.132152558 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260216, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Feb 23 04:53:38 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:53:38 localhost podman[306077]: 2026-02-23 09:53:38.075633313 +0000 UTC m=+0.148927557 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 23 04:53:38 localhost podman[306077]: 2026-02-23 09:53:38.107953234 +0000 UTC m=+0.181247498 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, 
managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216) Feb 23 04:53:38 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:53:38 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:38.343 263679 INFO oslo.privsep.daemon [None req-3008ab2e-e5ca-4bef-8fba-bf493cdb76df - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Feb 23 04:53:38 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:38.239 306114 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 23 04:53:38 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:38.244 306114 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 23 04:53:38 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:38.247 306114 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Feb 23 04:53:38 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:38.247 306114 INFO oslo.privsep.daemon [-] privsep daemon running as pid 306114#033[00m Feb 23 04:53:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v82: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s Feb 23 04:53:39 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:39.704 263679 INFO neutron.agent.linux.ip_lib [None req-3008ab2e-e5ca-4bef-8fba-bf493cdb76df - - - - - -] Device tapd092b295-de cannot be used as it has no MAC address#033[00m Feb 23 04:53:39 localhost kernel: device tapd092b295-de entered promiscuous mode Feb 23 04:53:39 localhost NetworkManager[5987]: [1771840419.8128] manager: (tapd092b295-de): new Generic device (/org/freedesktop/NetworkManager/Devices/13) Feb 23 04:53:39 localhost ovn_controller[155966]: 2026-02-23T09:53:39Z|00025|binding|INFO|Claiming lport 
d092b295-de90-4b00-8eb2-21e2ea4d9d0b for this chassis. Feb 23 04:53:39 localhost ovn_controller[155966]: 2026-02-23T09:53:39Z|00026|binding|INFO|d092b295-de90-4b00-8eb2-21e2ea4d9d0b: Claiming unknown Feb 23 04:53:39 localhost systemd-udevd[306129]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:53:39 localhost journal[229268]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, ) Feb 23 04:53:39 localhost journal[229268]: hostname: np0005626465.localdomain Feb 23 04:53:39 localhost journal[229268]: ethtool ioctl error on tapd092b295-de: No such device Feb 23 04:53:39 localhost journal[229268]: ethtool ioctl error on tapd092b295-de: No such device Feb 23 04:53:39 localhost journal[229268]: ethtool ioctl error on tapd092b295-de: No such device Feb 23 04:53:39 localhost journal[229268]: ethtool ioctl error on tapd092b295-de: No such device Feb 23 04:53:39 localhost journal[229268]: ethtool ioctl error on tapd092b295-de: No such device Feb 23 04:53:39 localhost journal[229268]: ethtool ioctl error on tapd092b295-de: No such device Feb 23 04:53:39 localhost journal[229268]: ethtool ioctl error on tapd092b295-de: No such device Feb 23 04:53:39 localhost journal[229268]: ethtool ioctl error on tapd092b295-de: No such device Feb 23 04:53:39 localhost ovn_controller[155966]: 2026-02-23T09:53:39Z|00027|binding|INFO|Setting lport d092b295-de90-4b00-8eb2-21e2ea4d9d0b ovn-installed in OVS Feb 23 04:53:39 localhost ovn_controller[155966]: 2026-02-23T09:53:39Z|00028|binding|INFO|Setting lport d092b295-de90-4b00-8eb2-21e2ea4d9d0b up in Southbound Feb 23 04:53:39 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:39.938 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], 
options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-a5e383fe-b918-4723-9dbc-32201feec87d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5e383fe-b918-4723-9dbc-32201feec87d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5e1135ba2724a69b072bbda0ea8476c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a4ae72f-8c09-4559-aec5-36314af9e25d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d092b295-de90-4b00-8eb2-21e2ea4d9d0b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:53:39 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:39.939 161842 INFO neutron.agent.ovn.metadata.agent [-] Port d092b295-de90-4b00-8eb2-21e2ea4d9d0b in datapath a5e383fe-b918-4723-9dbc-32201feec87d bound to our chassis#033[00m Feb 23 04:53:39 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:39.941 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Port a01f5328-bafd-4c8a-823b-e9e79e3b2905 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 04:53:39 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:39.941 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a5e383fe-b918-4723-9dbc-32201feec87d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m 
Feb 23 04:53:39 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:39.942 161842 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpkvi5x8ah/privsep.sock']#033[00m Feb 23 04:53:40 localhost ovn_controller[155966]: 2026-02-23T09:53:40Z|00029|ovn_bfd|INFO|Enabled BFD on interface ovn-5b0126-0 Feb 23 04:53:40 localhost ovn_controller[155966]: 2026-02-23T09:53:40Z|00030|ovn_bfd|INFO|Enabled BFD on interface ovn-585d62-0 Feb 23 04:53:40 localhost ovn_controller[155966]: 2026-02-23T09:53:40Z|00031|ovn_bfd|INFO|Enabled BFD on interface ovn-b9c72d-0 Feb 23 04:53:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:53:40 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:40.545 161842 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Feb 23 04:53:40 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:40.546 161842 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpkvi5x8ah/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Feb 23 04:53:40 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:40.443 306186 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 23 04:53:40 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:40.449 306186 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 23 04:53:40 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:40.454 306186 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): 
CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Feb 23 04:53:40 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:40.454 306186 INFO oslo.privsep.daemon [-] privsep daemon running as pid 306186#033[00m Feb 23 04:53:40 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:40.550 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[3d60465b-d394-4bfb-ac2e-dbb22e486612]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:53:40 localhost podman[306212]: Feb 23 04:53:40 localhost podman[306212]: 2026-02-23 09:53:40.681869637 +0000 UTC m=+0.072542211 container create 3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5e383fe-b918-4723-9dbc-32201feec87d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0) Feb 23 04:53:40 localhost systemd[1]: Started libpod-conmon-3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a.scope. Feb 23 04:53:40 localhost podman[306212]: 2026-02-23 09:53:40.637073239 +0000 UTC m=+0.027745843 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:53:40 localhost systemd[1]: tmp-crun.jQoOEN.mount: Deactivated successfully. Feb 23 04:53:40 localhost systemd[1]: Started libcrun container. 
Feb 23 04:53:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535624e1c3195fb2fd0ed1f6149adbaacb68a65ad18280cc256e2fa9acdfb11f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:53:40 localhost podman[306212]: 2026-02-23 09:53:40.780100766 +0000 UTC m=+0.170773330 container init 3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5e383fe-b918-4723-9dbc-32201feec87d, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0) Feb 23 04:53:40 localhost podman[306212]: 2026-02-23 09:53:40.788587303 +0000 UTC m=+0.179259867 container start 3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5e383fe-b918-4723-9dbc-32201feec87d, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:53:40 localhost dnsmasq[306231]: started, version 2.85 cachesize 150 Feb 23 04:53:40 localhost dnsmasq[306231]: DNS service limited to local subnets Feb 23 04:53:40 localhost dnsmasq[306231]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:53:40 localhost dnsmasq[306231]: warning: no upstream servers 
configured Feb 23 04:53:40 localhost dnsmasq-dhcp[306231]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:53:40 localhost dnsmasq[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/addn_hosts - 0 addresses Feb 23 04:53:40 localhost dnsmasq-dhcp[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/host Feb 23 04:53:40 localhost dnsmasq-dhcp[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/opts Feb 23 04:53:40 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:40.849 263679 INFO neutron.agent.dhcp.agent [None req-920009f9-06c9-4049-bf78-a9b339d2a35b - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:53:35Z, description=, device_id=ff562894-8f3c-4c90-b19e-2812793ff3eb, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f9eaa207-e660-431f-a972-afc9e785a719, ip_allocation=immediate, mac_address=fa:16:3e:71:18:24, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:53:32Z, description=, dns_domain=, id=a5e383fe-b918-4723-9dbc-32201feec87d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-719193790-network, port_security_enabled=True, project_id=b5e1135ba2724a69b072bbda0ea8476c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=65236, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=308, status=ACTIVE, subnets=['d11d185f-563e-46d7-a7c0-0289cc20047f'], tags=[], tenant_id=b5e1135ba2724a69b072bbda0ea8476c, updated_at=2026-02-23T09:53:33Z, vlan_transparent=None, network_id=a5e383fe-b918-4723-9dbc-32201feec87d, 
port_security_enabled=False, project_id=b5e1135ba2724a69b072bbda0ea8476c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=316, status=DOWN, tags=[], tenant_id=b5e1135ba2724a69b072bbda0ea8476c, updated_at=2026-02-23T09:53:36Z on network a5e383fe-b918-4723-9dbc-32201feec87d#033[00m Feb 23 04:53:40 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:40.928 263679 INFO neutron.agent.dhcp.agent [None req-0dd59550-6835-4a57-9a09-ffa36e7626f9 - - - - - -] DHCP configuration for ports {'fbd3fa71-c070-4434-b248-fbe0a6b27a91'} is completed#033[00m Feb 23 04:53:41 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:41.026 306186 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:53:41 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:41.027 306186 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:53:41 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:41.027 306186 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:53:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v83: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s Feb 23 04:53:41 localhost dnsmasq[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/addn_hosts - 1 addresses Feb 23 04:53:41 localhost podman[306249]: 2026-02-23 09:53:41.095683765 +0000 UTC m=+0.065541868 container kill 
3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5e383fe-b918-4723-9dbc-32201feec87d, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:53:41 localhost dnsmasq-dhcp[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/host Feb 23 04:53:41 localhost dnsmasq-dhcp[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/opts Feb 23 04:53:41 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:41.163 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[aabebb99-d2b6-4ec1-a684-81d7a9894747]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:53:41 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:41.214 263679 INFO neutron.agent.dhcp.agent [None req-74a1d8a1-9c43-4b50-938a-90daef0cc7c9 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:53:35Z, description=, device_id=ff562894-8f3c-4c90-b19e-2812793ff3eb, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f9eaa207-e660-431f-a972-afc9e785a719, ip_allocation=immediate, mac_address=fa:16:3e:71:18:24, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:53:32Z, description=, dns_domain=, id=a5e383fe-b918-4723-9dbc-32201feec87d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, 
name=tempest-LiveAutoBlockMigrationV225Test-719193790-network, port_security_enabled=True, project_id=b5e1135ba2724a69b072bbda0ea8476c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=65236, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=308, status=ACTIVE, subnets=['d11d185f-563e-46d7-a7c0-0289cc20047f'], tags=[], tenant_id=b5e1135ba2724a69b072bbda0ea8476c, updated_at=2026-02-23T09:53:33Z, vlan_transparent=None, network_id=a5e383fe-b918-4723-9dbc-32201feec87d, port_security_enabled=False, project_id=b5e1135ba2724a69b072bbda0ea8476c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=316, status=DOWN, tags=[], tenant_id=b5e1135ba2724a69b072bbda0ea8476c, updated_at=2026-02-23T09:53:36Z on network a5e383fe-b918-4723-9dbc-32201feec87d#033[00m Feb 23 04:53:41 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:41.400 263679 INFO neutron.agent.dhcp.agent [None req-a2cddba6-8d65-4815-9347-06292fea8d9b - - - - - -] DHCP configuration for ports {'f9eaa207-e660-431f-a972-afc9e785a719'} is completed#033[00m Feb 23 04:53:41 localhost dnsmasq[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/addn_hosts - 1 addresses Feb 23 04:53:41 localhost dnsmasq-dhcp[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/host Feb 23 04:53:41 localhost dnsmasq-dhcp[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/opts Feb 23 04:53:41 localhost podman[306284]: 2026-02-23 09:53:41.438595265 +0000 UTC m=+0.057800604 container kill 3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5e383fe-b918-4723-9dbc-32201feec87d, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack 
Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:53:41 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:41.648 263679 INFO neutron.agent.dhcp.agent [None req-16a218f9-2347-46de-b317-d78c34843897 - - - - - -] DHCP configuration for ports {'f9eaa207-e660-431f-a972-afc9e785a719'} is completed#033[00m Feb 23 04:53:42 localhost podman[241086]: time="2026-02-23T09:53:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:53:42 localhost podman[241086]: @ - - [23/Feb/2026:09:53:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1" Feb 23 04:53:42 localhost podman[241086]: @ - - [23/Feb/2026:09:53:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18270 "" "Go-http-client/1.1" Feb 23 04:53:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:53:43 localhost systemd[1]: tmp-crun.0awd50.mount: Deactivated successfully. 
Feb 23 04:53:43 localhost podman[306305]: 2026-02-23 09:53:43.012264436 +0000 UTC m=+0.090232558 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:53:43 localhost podman[306305]: 2026-02-23 09:53:43.020365501 +0000 UTC m=+0.098333643 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:53:43 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:53:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v84: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 95 op/s Feb 23 04:53:44 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e89 e89: 6 total, 6 up, 6 in Feb 23 04:53:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v86: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s Feb 23 04:53:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:53:46 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e90 e90: 6 total, 6 up, 6 in Feb 23 04:53:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v88: 177 pgs: 177 active+clean; 225 MiB data, 849 MiB used, 41 GiB / 42 GiB avail; 480 KiB/s rd, 3.2 MiB/s wr, 136 op/s Feb 23 04:53:48 localhost neutron_sriov_agent[256355]: 2026-02-23 09:53:48.150 2 INFO neutron.agent.securitygroups_rpc [None req-a2d5984e-7e32-490c-a625-105e3b4d8b68 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Security group member updated ['5e2da0ff-f592-42de-9188-06e3b0bca61b']#033[00m Feb 23 04:53:48 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:48.198 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:53:47Z, 
description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=68f77b5f-9ee1-445a-9bc9-8dae82293c2b, ip_allocation=immediate, mac_address=fa:16:3e:eb:c0:be, name=tempest-parent-1619502580, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:53:32Z, description=, dns_domain=, id=a5e383fe-b918-4723-9dbc-32201feec87d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-719193790-network, port_security_enabled=True, project_id=b5e1135ba2724a69b072bbda0ea8476c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=65236, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=308, status=ACTIVE, subnets=['d11d185f-563e-46d7-a7c0-0289cc20047f'], tags=[], tenant_id=b5e1135ba2724a69b072bbda0ea8476c, updated_at=2026-02-23T09:53:33Z, vlan_transparent=None, network_id=a5e383fe-b918-4723-9dbc-32201feec87d, port_security_enabled=True, project_id=b5e1135ba2724a69b072bbda0ea8476c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['5e2da0ff-f592-42de-9188-06e3b0bca61b'], standard_attr_id=410, status=DOWN, tags=[], tenant_id=b5e1135ba2724a69b072bbda0ea8476c, updated_at=2026-02-23T09:53:48Z on network a5e383fe-b918-4723-9dbc-32201feec87d#033[00m Feb 23 04:53:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:48.312 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:53:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:48.313 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:53:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:48.314 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:53:48 localhost dnsmasq[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/addn_hosts - 2 addresses Feb 23 04:53:48 localhost dnsmasq-dhcp[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/host Feb 23 04:53:48 localhost podman[306349]: 2026-02-23 09:53:48.407538046 +0000 UTC m=+0.057256837 container kill 3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5e383fe-b918-4723-9dbc-32201feec87d, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 23 04:53:48 localhost dnsmasq-dhcp[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/opts Feb 23 04:53:48 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:48.639 263679 INFO neutron.agent.dhcp.agent [None req-7680f726-5398-4ffb-9fed-a4de4a1bbab0 - - - - - -] DHCP configuration for ports {'68f77b5f-9ee1-445a-9bc9-8dae82293c2b'} is completed#033[00m Feb 23 04:53:48 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e91 e91: 6 total, 6 up, 6 in Feb 23 04:53:48 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:48.822 263679 INFO 
neutron.agent.linux.ip_lib [None req-3838e987-b29c-43ce-9cd0-4469b0d2da1b - - - - - -] Device tap50bab87b-66 cannot be used as it has no MAC address#033[00m Feb 23 04:53:48 localhost kernel: device tap50bab87b-66 entered promiscuous mode Feb 23 04:53:48 localhost NetworkManager[5987]: [1771840428.8906] manager: (tap50bab87b-66): new Generic device (/org/freedesktop/NetworkManager/Devices/14) Feb 23 04:53:48 localhost systemd-udevd[306382]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:53:48 localhost ovn_controller[155966]: 2026-02-23T09:53:48Z|00032|binding|INFO|Claiming lport 50bab87b-669c-4165-acde-bbd61d16beb7 for this chassis. Feb 23 04:53:48 localhost ovn_controller[155966]: 2026-02-23T09:53:48Z|00033|binding|INFO|50bab87b-669c-4165-acde-bbd61d16beb7: Claiming unknown Feb 23 04:53:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:48.902 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-785d6e65-4b4f-461e-b0b4-cb1d9085a3e7', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-785d6e65-4b4f-461e-b0b4-cb1d9085a3e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bba12cc9382b485789a88c5fc615cc96', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], 
datapath=d345f9c9-fdaf-4f2f-a07b-06cdb5ba17a4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=50bab87b-669c-4165-acde-bbd61d16beb7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:53:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:48.903 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 50bab87b-669c-4165-acde-bbd61d16beb7 in datapath 785d6e65-4b4f-461e-b0b4-cb1d9085a3e7 bound to our chassis#033[00m Feb 23 04:53:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:48.906 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 785d6e65-4b4f-461e-b0b4-cb1d9085a3e7 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:53:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:48.919 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[45c3ac38-2d77-4a15-87aa-947c865d3622]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:53:48 localhost ovn_controller[155966]: 2026-02-23T09:53:48Z|00034|binding|INFO|Setting lport 50bab87b-669c-4165-acde-bbd61d16beb7 ovn-installed in OVS Feb 23 04:53:48 localhost ovn_controller[155966]: 2026-02-23T09:53:48Z|00035|binding|INFO|Setting lport 50bab87b-669c-4165-acde-bbd61d16beb7 up in Southbound Feb 23 04:53:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v90: 177 pgs: 177 active+clean; 225 MiB data, 849 MiB used, 41 GiB / 42 GiB avail; 640 KiB/s rd, 4.3 MiB/s wr, 181 op/s Feb 23 04:53:50 localhost podman[306438]: Feb 23 04:53:50 localhost podman[306438]: 2026-02-23 09:53:50.01357603 +0000 UTC m=+0.090397812 container create 852f9290c23be4322caf24ad4db6ac5e8a5aea9ff493ef0f9533726763e14aa1 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-785d6e65-4b4f-461e-b0b4-cb1d9085a3e7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 23 04:53:50 localhost systemd[1]: Started libpod-conmon-852f9290c23be4322caf24ad4db6ac5e8a5aea9ff493ef0f9533726763e14aa1.scope. Feb 23 04:53:50 localhost podman[306438]: 2026-02-23 09:53:49.968598396 +0000 UTC m=+0.045420038 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:53:50 localhost systemd[1]: Started libcrun container. Feb 23 04:53:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe4d30d92d3d8518ab63edee4c6d36c83e25c3f6f2c9b5c4986256c0cc0b5ab9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:53:50 localhost podman[306438]: 2026-02-23 09:53:50.095627308 +0000 UTC m=+0.172448880 container init 852f9290c23be4322caf24ad4db6ac5e8a5aea9ff493ef0f9533726763e14aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-785d6e65-4b4f-461e-b0b4-cb1d9085a3e7, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260216) Feb 23 04:53:50 localhost podman[306438]: 2026-02-23 09:53:50.106542829 +0000 UTC m=+0.183364411 container start 852f9290c23be4322caf24ad4db6ac5e8a5aea9ff493ef0f9533726763e14aa1 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-785d6e65-4b4f-461e-b0b4-cb1d9085a3e7, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 04:53:50 localhost dnsmasq[306456]: started, version 2.85 cachesize 150 Feb 23 04:53:50 localhost dnsmasq[306456]: DNS service limited to local subnets Feb 23 04:53:50 localhost dnsmasq[306456]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:53:50 localhost dnsmasq[306456]: warning: no upstream servers configured Feb 23 04:53:50 localhost dnsmasq-dhcp[306456]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:53:50 localhost dnsmasq[306456]: read /var/lib/neutron/dhcp/785d6e65-4b4f-461e-b0b4-cb1d9085a3e7/addn_hosts - 0 addresses Feb 23 04:53:50 localhost dnsmasq-dhcp[306456]: read /var/lib/neutron/dhcp/785d6e65-4b4f-461e-b0b4-cb1d9085a3e7/host Feb 23 04:53:50 localhost dnsmasq-dhcp[306456]: read /var/lib/neutron/dhcp/785d6e65-4b4f-461e-b0b4-cb1d9085a3e7/opts Feb 23 04:53:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:50.304 263679 INFO neutron.agent.dhcp.agent [None req-0ef31c4f-12f7-4148-a77f-ce15be7875ba - - - - - -] DHCP configuration for ports {'852dbdc2-7a86-4897-bcc5-85cb5c000b67'} is completed#033[00m Feb 23 04:53:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:53:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e92 e92: 6 total, 6 up, 6 in Feb 23 04:53:51 localhost ceph-mgr[285904]: 
log_channel(cluster) log [DBG] : pgmap v92: 177 pgs: 177 active+clean; 225 MiB data, 851 MiB used, 41 GiB / 42 GiB avail; 766 KiB/s rd, 4.3 MiB/s wr, 295 op/s Feb 23 04:53:51 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. Feb 23 04:53:52 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:52.201 263679 INFO neutron.agent.linux.ip_lib [None req-145f6871-dc45-4022-94e3-da2253364a4f - - - - - -] Device tap6a90be9b-07 cannot be used as it has no MAC address#033[00m Feb 23 04:53:52 localhost kernel: device tap6a90be9b-07 entered promiscuous mode Feb 23 04:53:52 localhost NetworkManager[5987]: [1771840432.2308] manager: (tap6a90be9b-07): new Generic device (/org/freedesktop/NetworkManager/Devices/15) Feb 23 04:53:52 localhost ovn_controller[155966]: 2026-02-23T09:53:52Z|00036|binding|INFO|Claiming lport 6a90be9b-0721-4516-9ff9-bf27b6a45c1c for this chassis. Feb 23 04:53:52 localhost ovn_controller[155966]: 2026-02-23T09:53:52Z|00037|binding|INFO|6a90be9b-0721-4516-9ff9-bf27b6a45c1c: Claiming unknown Feb 23 04:53:52 localhost systemd-udevd[306468]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 04:53:52 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:52.241 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-952a96fb-c5b6-453c-bd85-60bdea92095e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-952a96fb-c5b6-453c-bd85-60bdea92095e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '630f2b45468647c9971c461e6fe8aad9', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d266ad9-f71f-4a7f-a7d1-454e3de9edd5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6a90be9b-0721-4516-9ff9-bf27b6a45c1c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:53:52 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:52.243 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 6a90be9b-0721-4516-9ff9-bf27b6a45c1c in datapath 952a96fb-c5b6-453c-bd85-60bdea92095e bound to our chassis#033[00m Feb 23 04:53:52 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:52.244 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 952a96fb-c5b6-453c-bd85-60bdea92095e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:53:52 localhost ovn_metadata_agent[161837]: 2026-02-23 09:53:52.245 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[b42bbafe-2d30-4af8-8c59-88c87c8c5b86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:53:52 localhost journal[229268]: ethtool ioctl error on tap6a90be9b-07: No such device Feb 23 04:53:52 localhost ovn_controller[155966]: 2026-02-23T09:53:52Z|00038|binding|INFO|Setting lport 6a90be9b-0721-4516-9ff9-bf27b6a45c1c ovn-installed in OVS Feb 23 04:53:52 localhost ovn_controller[155966]: 2026-02-23T09:53:52Z|00039|binding|INFO|Setting lport 6a90be9b-0721-4516-9ff9-bf27b6a45c1c up in Southbound Feb 23 04:53:52 localhost journal[229268]: ethtool ioctl error on tap6a90be9b-07: No such device Feb 23 04:53:52 localhost journal[229268]: ethtool ioctl error on tap6a90be9b-07: No such device Feb 23 04:53:52 localhost journal[229268]: ethtool ioctl error on tap6a90be9b-07: No such device Feb 23 04:53:52 localhost journal[229268]: ethtool ioctl error on tap6a90be9b-07: No such device Feb 23 04:53:52 localhost journal[229268]: ethtool ioctl error on tap6a90be9b-07: No such device Feb 23 04:53:52 localhost journal[229268]: ethtool ioctl error on tap6a90be9b-07: No such device Feb 23 04:53:52 localhost journal[229268]: ethtool ioctl error on tap6a90be9b-07: No such device Feb 23 04:53:52 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:52.537 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:53:52Z, description=, device_id=8119fbd6-c433-4e48-84f9-f5294a3a362c, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b872e178-4cd5-4a56-b540-e3d778f03572, 
ip_allocation=immediate, mac_address=fa:16:3e:6f:c3:0a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:53:47Z, description=, dns_domain=, id=785d6e65-4b4f-461e-b0b4-cb1d9085a3e7, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesNegativeTestJSON-1903453246-network, port_security_enabled=True, project_id=bba12cc9382b485789a88c5fc615cc96, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58192, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=401, status=ACTIVE, subnets=['8e6dfd6a-1056-48cd-bb9a-a81e5b93e10f'], tags=[], tenant_id=bba12cc9382b485789a88c5fc615cc96, updated_at=2026-02-23T09:53:47Z, vlan_transparent=None, network_id=785d6e65-4b4f-461e-b0b4-cb1d9085a3e7, port_security_enabled=False, project_id=bba12cc9382b485789a88c5fc615cc96, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=455, status=DOWN, tags=[], tenant_id=bba12cc9382b485789a88c5fc615cc96, updated_at=2026-02-23T09:53:52Z on network 785d6e65-4b4f-461e-b0b4-cb1d9085a3e7#033[00m Feb 23 04:53:52 localhost neutron_sriov_agent[256355]: 2026-02-23 09:53:52.569 2 INFO neutron.agent.securitygroups_rpc [None req-92b63054-7ec2-4ccb-bee7-2b93ea819111 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Security group member updated ['5e2da0ff-f592-42de-9188-06e3b0bca61b']#033[00m Feb 23 04:53:52 localhost systemd[1]: tmp-crun.bCLbJI.mount: Deactivated successfully. 
Feb 23 04:53:52 localhost podman[306525]: 2026-02-23 09:53:52.771977738 +0000 UTC m=+0.050816292 container kill 852f9290c23be4322caf24ad4db6ac5e8a5aea9ff493ef0f9533726763e14aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-785d6e65-4b4f-461e-b0b4-cb1d9085a3e7, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:53:52 localhost dnsmasq[306456]: read /var/lib/neutron/dhcp/785d6e65-4b4f-461e-b0b4-cb1d9085a3e7/addn_hosts - 1 addresses Feb 23 04:53:52 localhost dnsmasq-dhcp[306456]: read /var/lib/neutron/dhcp/785d6e65-4b4f-461e-b0b4-cb1d9085a3e7/host Feb 23 04:53:52 localhost dnsmasq-dhcp[306456]: read /var/lib/neutron/dhcp/785d6e65-4b4f-461e-b0b4-cb1d9085a3e7/opts Feb 23 04:53:52 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e93 e93: 6 total, 6 up, 6 in Feb 23 04:53:52 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:52.973 263679 INFO neutron.agent.dhcp.agent [None req-bfd28eab-7c25-497e-adcb-5616ac1ef3c9 - - - - - -] DHCP configuration for ports {'b872e178-4cd5-4a56-b540-e3d778f03572'} is completed#033[00m Feb 23 04:53:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v94: 177 pgs: 177 active+clean; 225 MiB data, 851 MiB used, 41 GiB / 42 GiB avail; 126 KiB/s rd, 54 KiB/s wr, 113 op/s Feb 23 04:53:53 localhost podman[306577]: Feb 23 04:53:53 localhost podman[306577]: 2026-02-23 09:53:53.32934566 +0000 UTC m=+0.090582858 container create 36deb46c20dc3641074b80549426ec3bae8a22f620bc1a459e616df46789ff67 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-952a96fb-c5b6-453c-bd85-60bdea92095e, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:53:53 localhost systemd[1]: Started libpod-conmon-36deb46c20dc3641074b80549426ec3bae8a22f620bc1a459e616df46789ff67.scope. Feb 23 04:53:53 localhost podman[306577]: 2026-02-23 09:53:53.284973235 +0000 UTC m=+0.046210533 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:53:53 localhost systemd[1]: Started libcrun container. Feb 23 04:53:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3fb9b4239bfb80e9845917fd0295eff0f42db721a21831fdd58df164bffc76/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:53:53 localhost podman[306577]: 2026-02-23 09:53:53.407800199 +0000 UTC m=+0.169037417 container init 36deb46c20dc3641074b80549426ec3bae8a22f620bc1a459e616df46789ff67 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-952a96fb-c5b6-453c-bd85-60bdea92095e, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:53:53 localhost podman[306577]: 2026-02-23 09:53:53.417309078 +0000 UTC m=+0.178546276 container start 36deb46c20dc3641074b80549426ec3bae8a22f620bc1a459e616df46789ff67 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-952a96fb-c5b6-453c-bd85-60bdea92095e, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 
Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:53:53 localhost dnsmasq[306595]: started, version 2.85 cachesize 150 Feb 23 04:53:53 localhost dnsmasq[306595]: DNS service limited to local subnets Feb 23 04:53:53 localhost dnsmasq[306595]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:53:53 localhost dnsmasq[306595]: warning: no upstream servers configured Feb 23 04:53:53 localhost dnsmasq-dhcp[306595]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:53:53 localhost dnsmasq[306595]: read /var/lib/neutron/dhcp/952a96fb-c5b6-453c-bd85-60bdea92095e/addn_hosts - 0 addresses Feb 23 04:53:53 localhost dnsmasq-dhcp[306595]: read /var/lib/neutron/dhcp/952a96fb-c5b6-453c-bd85-60bdea92095e/host Feb 23 04:53:53 localhost dnsmasq-dhcp[306595]: read /var/lib/neutron/dhcp/952a96fb-c5b6-453c-bd85-60bdea92095e/opts Feb 23 04:53:53 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:53.587 263679 INFO neutron.agent.dhcp.agent [None req-9e41111e-cf90-4b40-b52b-b70c23d1007f - - - - - -] DHCP configuration for ports {'54dba55e-d0cf-410c-b57f-d5e9f684c796'} is completed#033[00m Feb 23 04:53:53 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e94 e94: 6 total, 6 up, 6 in Feb 23 04:53:54 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:54.078 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:53:52Z, description=, device_id=8119fbd6-c433-4e48-84f9-f5294a3a362c, device_owner=network:router_interface, 
dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b872e178-4cd5-4a56-b540-e3d778f03572, ip_allocation=immediate, mac_address=fa:16:3e:6f:c3:0a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:53:47Z, description=, dns_domain=, id=785d6e65-4b4f-461e-b0b4-cb1d9085a3e7, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesNegativeTestJSON-1903453246-network, port_security_enabled=True, project_id=bba12cc9382b485789a88c5fc615cc96, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58192, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=401, status=ACTIVE, subnets=['8e6dfd6a-1056-48cd-bb9a-a81e5b93e10f'], tags=[], tenant_id=bba12cc9382b485789a88c5fc615cc96, updated_at=2026-02-23T09:53:47Z, vlan_transparent=None, network_id=785d6e65-4b4f-461e-b0b4-cb1d9085a3e7, port_security_enabled=False, project_id=bba12cc9382b485789a88c5fc615cc96, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=455, status=DOWN, tags=[], tenant_id=bba12cc9382b485789a88c5fc615cc96, updated_at=2026-02-23T09:53:52Z on network 785d6e65-4b4f-461e-b0b4-cb1d9085a3e7#033[00m Feb 23 04:53:54 localhost podman[306613]: 2026-02-23 09:53:54.35258592 +0000 UTC m=+0.058413822 container kill 852f9290c23be4322caf24ad4db6ac5e8a5aea9ff493ef0f9533726763e14aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-785d6e65-4b4f-461e-b0b4-cb1d9085a3e7, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, 
org.label-schema.license=GPLv2) Feb 23 04:53:54 localhost dnsmasq[306456]: read /var/lib/neutron/dhcp/785d6e65-4b4f-461e-b0b4-cb1d9085a3e7/addn_hosts - 1 addresses Feb 23 04:53:54 localhost dnsmasq-dhcp[306456]: read /var/lib/neutron/dhcp/785d6e65-4b4f-461e-b0b4-cb1d9085a3e7/host Feb 23 04:53:54 localhost dnsmasq-dhcp[306456]: read /var/lib/neutron/dhcp/785d6e65-4b4f-461e-b0b4-cb1d9085a3e7/opts Feb 23 04:53:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v96: 177 pgs: 177 active+clean; 225 MiB data, 851 MiB used, 41 GiB / 42 GiB avail; 126 KiB/s rd, 54 KiB/s wr, 113 op/s Feb 23 04:53:55 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:55.120 263679 INFO neutron.agent.dhcp.agent [None req-0f0e80bb-5692-463d-bdd7-5dc92c8c5875 - - - - - -] DHCP configuration for ports {'b872e178-4cd5-4a56-b540-e3d778f03572'} is completed#033[00m Feb 23 04:53:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:53:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:53:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:53:56 localhost podman[306634]: 2026-02-23 09:53:56.011898248 +0000 UTC m=+0.078940405 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Feb 23 04:53:56 localhost podman[306634]: 2026-02-23 09:53:56.025212092 +0000 UTC m=+0.092254259 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, version=9.7, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9/ubi-minimal) Feb 23 04:53:56 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 04:53:56 localhost podman[306633]: 2026-02-23 09:53:56.117204692 +0000 UTC m=+0.186116456 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:53:56 localhost podman[306633]: 2026-02-23 09:53:56.153843333 +0000 UTC m=+0.222755157 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:53:56 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:53:56 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e95 e95: 6 total, 6 up, 6 in Feb 23 04:53:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v98: 177 pgs: 177 active+clean; 304 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.7 MiB/s wr, 141 op/s Feb 23 04:53:57 localhost sshd[306676]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:53:57 localhost nova_compute[280321]: 2026-02-23 09:53:57.940 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Acquiring lock "78070789-b766-4674-b4e1-8040cbf7346b" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:53:57 localhost nova_compute[280321]: 2026-02-23 09:53:57.941 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:53:57 localhost nova_compute[280321]: 2026-02-23 09:53:57.957 280325 DEBUG nova.compute.manager [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Starting instance... 
_do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.068 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.069 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.075 280325 DEBUG nova.virt.hardware [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.075 280325 INFO nova.compute.claims [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Claim successful on node np0005626465.localdomain#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.176 280325 DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:53:58 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:58.322 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:53:57Z, description=, device_id=126df294-dd68-4166-b3e4-da6bff186543, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ae4e52ef-108d-4b6c-8139-d57e5fef5e51, ip_allocation=immediate, mac_address=fa:16:3e:4d:13:50, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:53:50Z, description=, dns_domain=, id=952a96fb-c5b6-453c-bd85-60bdea92095e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPDetailsNegativeTestJSON-1940783534-network, port_security_enabled=True, project_id=630f2b45468647c9971c461e6fe8aad9, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=18199, 
qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=448, status=ACTIVE, subnets=['6eaf65c2-d22b-455b-8db6-0597c519a5b4'], tags=[], tenant_id=630f2b45468647c9971c461e6fe8aad9, updated_at=2026-02-23T09:53:51Z, vlan_transparent=None, network_id=952a96fb-c5b6-453c-bd85-60bdea92095e, port_security_enabled=False, project_id=630f2b45468647c9971c461e6fe8aad9, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=494, status=DOWN, tags=[], tenant_id=630f2b45468647c9971c461e6fe8aad9, updated_at=2026-02-23T09:53:58Z on network 952a96fb-c5b6-453c-bd85-60bdea92095e#033[00m Feb 23 04:53:58 localhost neutron_sriov_agent[256355]: 2026-02-23 09:53:58.479 2 INFO neutron.agent.securitygroups_rpc [req-eb0b5adf-b7bd-4216-9e46-c2d60917a5c7 req-02734a2f-bd2e-4435-92b8-64f041df35e7 c511c0d31bd1497ea63920bacbc29b16 bba12cc9382b485789a88c5fc615cc96 - - default default] Security group rule updated ['9bb94a7a-3596-41a3-a016-4ce3c9b7d984']#033[00m Feb 23 04:53:58 localhost systemd[1]: tmp-crun.b35Ayz.mount: Deactivated successfully. 
Feb 23 04:53:58 localhost dnsmasq[306595]: read /var/lib/neutron/dhcp/952a96fb-c5b6-453c-bd85-60bdea92095e/addn_hosts - 1 addresses Feb 23 04:53:58 localhost podman[306713]: 2026-02-23 09:53:58.560274459 +0000 UTC m=+0.072083318 container kill 36deb46c20dc3641074b80549426ec3bae8a22f620bc1a459e616df46789ff67 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-952a96fb-c5b6-453c-bd85-60bdea92095e, tcib_managed=true, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2) Feb 23 04:53:58 localhost dnsmasq-dhcp[306595]: read /var/lib/neutron/dhcp/952a96fb-c5b6-453c-bd85-60bdea92095e/host Feb 23 04:53:58 localhost dnsmasq-dhcp[306595]: read /var/lib/neutron/dhcp/952a96fb-c5b6-453c-bd85-60bdea92095e/opts Feb 23 04:53:58 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:53:58 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1200777599' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.668 280325 DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.674 280325 DEBUG nova.compute.provider_tree [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.688 280325 DEBUG nova.scheduler.client.report [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.717 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default 
default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.718 280325 DEBUG nova.compute.manager [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Feb 23 04:53:58 localhost neutron_sriov_agent[256355]: 2026-02-23 09:53:58.755 2 INFO neutron.agent.securitygroups_rpc [req-a40b7300-b557-4abd-b7da-db42c2c5294c req-3cf01c79-11ae-4df5-84a2-12d00520d629 c511c0d31bd1497ea63920bacbc29b16 bba12cc9382b485789a88c5fc615cc96 - - default default] Security group rule updated ['9bb94a7a-3596-41a3-a016-4ce3c9b7d984']#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.776 280325 DEBUG nova.compute.manager [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Allocating IP information in the background. 
_allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.777 280325 DEBUG nova.network.neutron [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m Feb 23 04:53:58 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:53:58.789 263679 INFO neutron.agent.dhcp.agent [None req-4ec6a395-aeb2-44f6-9a6e-ddbee40d76ee - - - - - -] DHCP configuration for ports {'ae4e52ef-108d-4b6c-8139-d57e5fef5e51'} is completed#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.789 280325 INFO nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.810 280325 DEBUG nova.compute.manager [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.890 280325 DEBUG nova.compute.manager [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Start spawning the instance on the hypervisor. 
_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.892 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.893 280325 INFO nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Creating image(s)#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.937 280325 DEBUG nova.storage.rbd_utils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] rbd image 78070789-b766-4674-b4e1-8040cbf7346b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:53:58 localhost nova_compute[280321]: 2026-02-23 09:53:58.979 280325 DEBUG nova.storage.rbd_utils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] rbd image 78070789-b766-4674-b4e1-8040cbf7346b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:53:59 localhost nova_compute[280321]: 2026-02-23 09:53:59.020 280325 DEBUG nova.storage.rbd_utils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] rbd image 78070789-b766-4674-b4e1-8040cbf7346b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 
04:53:59 localhost nova_compute[280321]: 2026-02-23 09:53:59.025 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Acquiring lock "be7ecb9fde249dcbd37d38278f2f533f45a26c75" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:53:59 localhost nova_compute[280321]: 2026-02-23 09:53:59.027 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Lock "be7ecb9fde249dcbd37d38278f2f533f45a26c75" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:53:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v99: 177 pgs: 177 active+clean; 304 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 7.4 MiB/s rd, 7.4 MiB/s wr, 134 op/s Feb 23 04:53:59 localhost nova_compute[280321]: 2026-02-23 09:53:59.899 280325 DEBUG nova.virt.libvirt.imagebackend [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Image locations are: [{'url': 'rbd://f1fea371-cb69-578d-a3d0-b5c472a84b46/images/d08f8876-d97b-493b-b16b-caf91668eecb/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f1fea371-cb69-578d-a3d0-b5c472a84b46/images/d08f8876-d97b-493b-b16b-caf91668eecb/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Feb 23 04:54:00 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:00.036 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, 
binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:53:57Z, description=, device_id=126df294-dd68-4166-b3e4-da6bff186543, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ae4e52ef-108d-4b6c-8139-d57e5fef5e51, ip_allocation=immediate, mac_address=fa:16:3e:4d:13:50, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:53:50Z, description=, dns_domain=, id=952a96fb-c5b6-453c-bd85-60bdea92095e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPDetailsNegativeTestJSON-1940783534-network, port_security_enabled=True, project_id=630f2b45468647c9971c461e6fe8aad9, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=18199, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=448, status=ACTIVE, subnets=['6eaf65c2-d22b-455b-8db6-0597c519a5b4'], tags=[], tenant_id=630f2b45468647c9971c461e6fe8aad9, updated_at=2026-02-23T09:53:51Z, vlan_transparent=None, network_id=952a96fb-c5b6-453c-bd85-60bdea92095e, port_security_enabled=False, project_id=630f2b45468647c9971c461e6fe8aad9, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=494, status=DOWN, tags=[], tenant_id=630f2b45468647c9971c461e6fe8aad9, updated_at=2026-02-23T09:53:58Z on network 952a96fb-c5b6-453c-bd85-60bdea92095e#033[00m Feb 23 04:54:00 localhost nova_compute[280321]: 2026-02-23 09:54:00.195 280325 WARNING oslo_policy.policy [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. 
You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m Feb 23 04:54:00 localhost nova_compute[280321]: 2026-02-23 09:54:00.196 280325 WARNING oslo_policy.policy [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m Feb 23 04:54:00 localhost nova_compute[280321]: 2026-02-23 09:54:00.200 280325 DEBUG nova.policy [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0b7edff084ac4cda88d2d8f5182da779', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b5e1135ba2724a69b072bbda0ea8476c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m Feb 23 04:54:00 localhost systemd[1]: tmp-crun.4k3Jce.mount: Deactivated successfully. 
Feb 23 04:54:00 localhost dnsmasq[306595]: read /var/lib/neutron/dhcp/952a96fb-c5b6-453c-bd85-60bdea92095e/addn_hosts - 1 addresses Feb 23 04:54:00 localhost dnsmasq-dhcp[306595]: read /var/lib/neutron/dhcp/952a96fb-c5b6-453c-bd85-60bdea92095e/host Feb 23 04:54:00 localhost dnsmasq-dhcp[306595]: read /var/lib/neutron/dhcp/952a96fb-c5b6-453c-bd85-60bdea92095e/opts Feb 23 04:54:00 localhost podman[306808]: 2026-02-23 09:54:00.252350472 +0000 UTC m=+0.068165535 container kill 36deb46c20dc3641074b80549426ec3bae8a22f620bc1a459e616df46789ff67 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-952a96fb-c5b6-453c-bd85-60bdea92095e, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 23 04:54:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:54:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:54:00 localhost podman[306821]: 2026-02-23 09:54:00.368590855 +0000 UTC m=+0.078513311 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260216, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.43.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 23 04:54:00 localhost podman[306821]: 2026-02-23 09:54:00.438939536 +0000 UTC m=+0.148861992 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:54:00 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:00.446 263679 INFO neutron.agent.dhcp.agent [None req-a76c0dc4-38ea-4699-8212-58e59144642d - - - - - -] DHCP configuration for ports {'ae4e52ef-108d-4b6c-8139-d57e5fef5e51'} is completed#033[00m Feb 23 04:54:00 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:54:00 localhost sshd[306853]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:54:00 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:00.937 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005626465.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:53:47Z, description=, device_id=78070789-b766-4674-b4e1-8040cbf7346b, device_owner=compute:nova, dns_assignment=[], dns_domain=, dns_name=tempest-liveautoblockmigrationv225test-server-280191745, extra_dhcp_opts=[], fixed_ips=[], id=68f77b5f-9ee1-445a-9bc9-8dae82293c2b, ip_allocation=immediate, mac_address=fa:16:3e:eb:c0:be, name=tempest-parent-1619502580, network_id=a5e383fe-b918-4723-9dbc-32201feec87d, port_security_enabled=True, project_id=b5e1135ba2724a69b072bbda0ea8476c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['5e2da0ff-f592-42de-9188-06e3b0bca61b'], standard_attr_id=410, status=DOWN, tags=[], tenant_id=b5e1135ba2724a69b072bbda0ea8476c, trunk_details=sub_ports=[], trunk_id=95b61762-73a8-4898-b5d1-96beb9397be7, updated_at=2026-02-23T09:54:00Z on network a5e383fe-b918-4723-9dbc-32201feec87d#033[00m Feb 23 04:54:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v100: 177 pgs: 177 active+clean; 289 MiB data, 965 MiB used, 41 GiB / 42 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 121 op/s Feb 23 04:54:01 localhost dnsmasq[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/addn_hosts - 2 addresses Feb 23 04:54:01 localhost dnsmasq-dhcp[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/host Feb 23 04:54:01 localhost dnsmasq-dhcp[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/opts Feb 23 04:54:01 localhost podman[306872]: 2026-02-23 09:54:01.136023574 +0000 UTC m=+0.057740966 container 
kill 3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5e383fe-b918-4723-9dbc-32201feec87d, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0) Feb 23 04:54:01 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:01.384 263679 INFO neutron.agent.dhcp.agent [None req-809e1369-bce3-4728-98df-f2223f134e12 - - - - - -] DHCP configuration for ports {'68f77b5f-9ee1-445a-9bc9-8dae82293c2b'} is completed#033[00m Feb 23 04:54:01 localhost nova_compute[280321]: 2026-02-23 09:54:01.649 280325 DEBUG nova.network.neutron [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Successfully updated port: 68f77b5f-9ee1-445a-9bc9-8dae82293c2b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m Feb 23 04:54:01 localhost nova_compute[280321]: 2026-02-23 09:54:01.670 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Acquiring lock "refresh_cache-78070789-b766-4674-b4e1-8040cbf7346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:54:01 localhost nova_compute[280321]: 2026-02-23 09:54:01.671 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Acquired lock "refresh_cache-78070789-b766-4674-b4e1-8040cbf7346b" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:54:01 localhost nova_compute[280321]: 2026-02-23 09:54:01.671 280325 DEBUG nova.network.neutron [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 23 04:54:01 localhost nova_compute[280321]: 2026-02-23 09:54:01.753 280325 DEBUG nova.network.neutron [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Feb 23 04:54:01 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e96 e96: 6 total, 6 up, 6 in Feb 23 04:54:01 localhost openstack_network_exporter[243519]: ERROR 09:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:54:01 localhost openstack_network_exporter[243519]: Feb 23 04:54:01 localhost openstack_network_exporter[243519]: ERROR 09:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:54:01 localhost openstack_network_exporter[243519]: Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.299 280325 DEBUG nova.network.neutron [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Updating instance_info_cache with network_info: [{"id": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "address": "fa:16:3e:eb:c0:be", "network": {"id": "a5e383fe-b918-4723-9dbc-32201feec87d", "bridge": "br-int", "label": 
"tempest-LiveAutoBlockMigrationV225Test-719193790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "b5e1135ba2724a69b072bbda0ea8476c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f77b5f-9e", "ovs_interfaceid": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.330 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Releasing lock "refresh_cache-78070789-b766-4674-b4e1-8040cbf7346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.331 280325 DEBUG nova.compute.manager [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Instance network_info: |[{"id": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "address": "fa:16:3e:eb:c0:be", "network": {"id": "a5e383fe-b918-4723-9dbc-32201feec87d", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-719193790-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "b5e1135ba2724a69b072bbda0ea8476c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f77b5f-9e", "ovs_interfaceid": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.483 280325 DEBUG nova.compute.manager [req-bfa06583-07d3-43f1-905b-24b364fb99af req-788d94e8-0512-4bdc-888c-cf75173cf1cc 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Received event network-changed-68f77b5f-9ee1-445a-9bc9-8dae82293c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.484 280325 DEBUG nova.compute.manager [req-bfa06583-07d3-43f1-905b-24b364fb99af req-788d94e8-0512-4bdc-888c-cf75173cf1cc 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Refreshing instance network info cache due to event network-changed-68f77b5f-9ee1-445a-9bc9-8dae82293c2b. 
external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.484 280325 DEBUG oslo_concurrency.lockutils [req-bfa06583-07d3-43f1-905b-24b364fb99af req-788d94e8-0512-4bdc-888c-cf75173cf1cc 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquiring lock "refresh_cache-78070789-b766-4674-b4e1-8040cbf7346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.485 280325 DEBUG oslo_concurrency.lockutils [req-bfa06583-07d3-43f1-905b-24b364fb99af req-788d94e8-0512-4bdc-888c-cf75173cf1cc 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquired lock "refresh_cache-78070789-b766-4674-b4e1-8040cbf7346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.485 280325 DEBUG nova.network.neutron [req-bfa06583-07d3-43f1-905b-24b364fb99af req-788d94e8-0512-4bdc-888c-cf75173cf1cc 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Refreshing network info cache for port 68f77b5f-9ee1-445a-9bc9-8dae82293c2b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.510 280325 DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/be7ecb9fde249dcbd37d38278f2f533f45a26c75.part --force-share --output=json execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.565 280325 DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/be7ecb9fde249dcbd37d38278f2f533f45a26c75.part --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.566 280325 DEBUG nova.virt.images [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] d08f8876-d97b-493b-b16b-caf91668eecb was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.567 280325 DEBUG nova.privsep.utils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.568 280325 DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/be7ecb9fde249dcbd37d38278f2f533f45a26c75.part /var/lib/nova/instances/_base/be7ecb9fde249dcbd37d38278f2f533f45a26c75.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:02 localhost 
nova_compute[280321]: 2026-02-23 09:54:02.832 280325 DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/be7ecb9fde249dcbd37d38278f2f533f45a26c75.part /var/lib/nova/instances/_base/be7ecb9fde249dcbd37d38278f2f533f45a26c75.converted" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.836 280325 DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/be7ecb9fde249dcbd37d38278f2f533f45a26c75.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.912 280325 DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/be7ecb9fde249dcbd37d38278f2f533f45a26c75.converted --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.914 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Lock "be7ecb9fde249dcbd37d38278f2f533f45a26c75" "released" by 
"nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 3.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.956 280325 DEBUG nova.storage.rbd_utils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] rbd image 78070789-b766-4674-b4e1-8040cbf7346b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:54:02 localhost nova_compute[280321]: 2026-02-23 09:54:02.962 280325 DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/be7ecb9fde249dcbd37d38278f2f533f45a26c75 78070789-b766-4674-b4e1-8040cbf7346b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v102: 177 pgs: 177 active+clean; 224 MiB data, 880 MiB used, 41 GiB / 42 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 147 op/s Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.570 280325 DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/be7ecb9fde249dcbd37d38278f2f533f45a26c75 78070789-b766-4674-b4e1-8040cbf7346b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.664 280325 DEBUG nova.network.neutron 
[req-bfa06583-07d3-43f1-905b-24b364fb99af req-788d94e8-0512-4bdc-888c-cf75173cf1cc 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Updated VIF entry in instance network info cache for port 68f77b5f-9ee1-445a-9bc9-8dae82293c2b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.665 280325 DEBUG nova.network.neutron [req-bfa06583-07d3-43f1-905b-24b364fb99af req-788d94e8-0512-4bdc-888c-cf75173cf1cc 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Updating instance_info_cache with network_info: [{"id": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "address": "fa:16:3e:eb:c0:be", "network": {"id": "a5e383fe-b918-4723-9dbc-32201feec87d", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-719193790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "b5e1135ba2724a69b072bbda0ea8476c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f77b5f-9e", "ovs_interfaceid": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 23 04:54:03 
localhost nova_compute[280321]: 2026-02-23 09:54:03.671 280325 DEBUG nova.storage.rbd_utils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] resizing rbd image 78070789-b766-4674-b4e1-8040cbf7346b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.721 280325 DEBUG oslo_concurrency.lockutils [req-bfa06583-07d3-43f1-905b-24b364fb99af req-788d94e8-0512-4bdc-888c-cf75173cf1cc 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Releasing lock "refresh_cache-78070789-b766-4674-b4e1-8040cbf7346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.824 280325 DEBUG nova.objects.instance [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Lazy-loading 'migration_context' on Instance uuid 78070789-b766-4674-b4e1-8040cbf7346b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.849 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.849 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Ensure instance console log exists: 
/var/lib/nova/instances/78070789-b766-4674-b4e1-8040cbf7346b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.850 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.850 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.851 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.855 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Start _get_guest_xml network_info=[{"id": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "address": "fa:16:3e:eb:c0:be", "network": {"id": "a5e383fe-b918-4723-9dbc-32201feec87d", "bridge": "br-int", "label": 
"tempest-LiveAutoBlockMigrationV225Test-719193790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "b5e1135ba2724a69b072bbda0ea8476c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f77b5f-9e", "ovs_interfaceid": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-23T09:52:33Z,direct_url=,disk_format='qcow2',id=d08f8876-d97b-493b-b16b-caf91668eecb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='37b8098efb0d4ecc90b451a2db0e966f',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2026-02-23T09:52:35Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': 'd08f8876-d97b-493b-b16b-caf91668eecb'}], 'ephemerals': [], 
'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.860 280325 WARNING nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.864 280325 DEBUG nova.virt.libvirt.host [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Searching host: 'np0005626465.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.864 280325 DEBUG nova.virt.libvirt.host [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.865 280325 DEBUG nova.virt.libvirt.host [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Searching host: 'np0005626465.localdomain' for CPU controller through CGroups V2... 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.866 280325 DEBUG nova.virt.libvirt.host [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.866 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.866 280325 DEBUG nova.virt.hardware [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-23T09:52:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='dd9292ba-25cb-4da3-92e1-803e436b1b2c',id=6,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-23T09:52:33Z,direct_url=,disk_format='qcow2',id=d08f8876-d97b-493b-b16b-caf91668eecb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='37b8098efb0d4ecc90b451a2db0e966f',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2026-02-23T09:52:35Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.867 280325 DEBUG nova.virt.hardware [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.867 280325 DEBUG nova.virt.hardware [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.867 280325 DEBUG nova.virt.hardware [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.867 280325 DEBUG nova.virt.hardware [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.868 280325 DEBUG nova.virt.hardware [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.868 280325 DEBUG 
nova.virt.hardware [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.868 280325 DEBUG nova.virt.hardware [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.868 280325 DEBUG nova.virt.hardware [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.869 280325 DEBUG nova.virt.hardware [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.869 280325 DEBUG nova.virt.hardware [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.872 280325 
DEBUG nova.privsep.utils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Feb 23 04:54:03 localhost nova_compute[280321]: 2026-02-23 09:54:03.873 280325 DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 23 04:54:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1891678803' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.364 280325 DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.409 280325 DEBUG nova.storage.rbd_utils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] rbd image 78070789-b766-4674-b4e1-8040cbf7346b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.416 280325 DEBUG 
oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:04 localhost dnsmasq[306456]: read /var/lib/neutron/dhcp/785d6e65-4b4f-461e-b0b4-cb1d9085a3e7/addn_hosts - 0 addresses Feb 23 04:54:04 localhost dnsmasq-dhcp[306456]: read /var/lib/neutron/dhcp/785d6e65-4b4f-461e-b0b4-cb1d9085a3e7/host Feb 23 04:54:04 localhost dnsmasq-dhcp[306456]: read /var/lib/neutron/dhcp/785d6e65-4b4f-461e-b0b4-cb1d9085a3e7/opts Feb 23 04:54:04 localhost podman[307077]: 2026-02-23 09:54:04.547481259 +0000 UTC m=+0.047289897 container kill 852f9290c23be4322caf24ad4db6ac5e8a5aea9ff493ef0f9533726763e14aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-785d6e65-4b4f-461e-b0b4-cb1d9085a3e7, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 23 04:54:04 localhost systemd[1]: tmp-crun.dihIAz.mount: Deactivated successfully. Feb 23 04:54:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 23 04:54:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/954476728' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.842 280325 DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.844 280325 DEBUG nova.virt.libvirt.vif [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-23T09:53:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-280191745',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(6),hidden=False,host='np0005626465.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-280191745',id=8,image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',info_cache=InstanceInfoCache,instance_type_id=6,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005626465.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005626465.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b5e1135ba2724a69b072bbda0ea8476c',ramdisk_id='',reservation_id='r-2ba0qiul',resources=None,root_devic
e_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-739952540',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-739952540-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-23T09:53:58Z,user_data=None,user_id='0b7edff084ac4cda88d2d8f5182da779',uuid=78070789-b766-4674-b4e1-8040cbf7346b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "address": "fa:16:3e:eb:c0:be", "network": {"id": "a5e383fe-b918-4723-9dbc-32201feec87d", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-719193790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "b5e1135ba2724a69b072bbda0ea8476c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f77b5f-9e", "ovs_interfaceid": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Feb 23 04:54:04 
localhost nova_compute[280321]: 2026-02-23 09:54:04.845 280325 DEBUG nova.network.os_vif_util [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Converting VIF {"id": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "address": "fa:16:3e:eb:c0:be", "network": {"id": "a5e383fe-b918-4723-9dbc-32201feec87d", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-719193790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "b5e1135ba2724a69b072bbda0ea8476c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f77b5f-9e", "ovs_interfaceid": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.846 280325 DEBUG nova.network.os_vif_util [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c0:be,bridge_name='br-int',has_traffic_filtering=True,id=68f77b5f-9ee1-445a-9bc9-8dae82293c2b,network=Network(a5e383fe-b918-4723-9dbc-32201feec87d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap68f77b5f-9e') nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.849 280325 DEBUG nova.objects.instance [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Lazy-loading 'pci_devices' on Instance uuid 78070789-b766-4674-b4e1-8040cbf7346b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.868 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] End _get_guest_xml xml=
Feb 23 04:54:04 localhost nova_compute[280321]: [guest domain XML garbled in capture: element markup was stripped, leaving only orphaned text values across repeated empty journald entries. Recoverable values: uuid 78070789-b766-4674-b4e1-8040cbf7346b, name instance-00000008, memory 131072, vcpus 1, title tempest-LiveAutoBlockMigrationV225Test-server-280191745, creation time 2026-02-23 09:54:03, flavor values 128 / 1 / 0 / 0 / 1, owner user tempest-LiveAutoBlockMigrationV225Test-739952540-project-member, owner project tempest-LiveAutoBlockMigrationV225Test-739952540, sysinfo RDO / OpenStack Compute / 27.5.2-0.20260220085704.5cfeecb.el9, serial and uuid 78070789-b766-4674-b4e1-8040cbf7346b, family Virtual Machine, os type hvm, rng backend /dev/urandom]
_get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.870 280325 DEBUG nova.compute.manager [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Preparing to wait for external event network-vif-plugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m Feb 23
04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.870 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Acquiring lock "78070789-b766-4674-b4e1-8040cbf7346b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.871 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.871 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.872 280325 DEBUG nova.virt.libvirt.vif [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-23T09:53:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-280191745',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(6),hidden=False,host='np0005626465.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-280191745',id=8,image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',info_cache=InstanceInfoCache,instance_type_id=6,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005626465.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005626465.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b5e1135ba2724a69b072bbda0ea8476c',ramdisk_id='',reservation_id='r-2ba0qiul',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-739952540',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-739952540-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-23T09:53:58Z,user_data=None,user_id='0b7edff084ac4cda88d2d8f5182da779',uuid=78070789-b766-4674-b4e1-8040cbf7346b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_s
tate='building') vif={"id": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "address": "fa:16:3e:eb:c0:be", "network": {"id": "a5e383fe-b918-4723-9dbc-32201feec87d", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-719193790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "b5e1135ba2724a69b072bbda0ea8476c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f77b5f-9e", "ovs_interfaceid": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.873 280325 DEBUG nova.network.os_vif_util [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Converting VIF {"id": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "address": "fa:16:3e:eb:c0:be", "network": {"id": "a5e383fe-b918-4723-9dbc-32201feec87d", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-719193790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": 
{"injected": false, "tenant_id": "b5e1135ba2724a69b072bbda0ea8476c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f77b5f-9e", "ovs_interfaceid": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.874 280325 DEBUG nova.network.os_vif_util [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c0:be,bridge_name='br-int',has_traffic_filtering=True,id=68f77b5f-9ee1-445a-9bc9-8dae82293c2b,network=Network(a5e383fe-b918-4723-9dbc-32201feec87d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap68f77b5f-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.874 280325 DEBUG os_vif [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c0:be,bridge_name='br-int',has_traffic_filtering=True,id=68f77b5f-9ee1-445a-9bc9-8dae82293c2b,network=Network(a5e383fe-b918-4723-9dbc-32201feec87d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap68f77b5f-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Feb 23 04:54:04 localhost ovn_controller[155966]: 2026-02-23T09:54:04Z|00040|binding|INFO|Releasing lport 
50bab87b-669c-4165-acde-bbd61d16beb7 from this chassis (sb_readonly=0) Feb 23 04:54:04 localhost ovn_controller[155966]: 2026-02-23T09:54:04Z|00041|binding|INFO|Setting lport 50bab87b-669c-4165-acde-bbd61d16beb7 down in Southbound Feb 23 04:54:04 localhost kernel: device tap50bab87b-66 left promiscuous mode Feb 23 04:54:04 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:04.899 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-785d6e65-4b4f-461e-b0b4-cb1d9085a3e7', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-785d6e65-4b4f-461e-b0b4-cb1d9085a3e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bba12cc9382b485789a88c5fc615cc96', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d345f9c9-fdaf-4f2f-a07b-06cdb5ba17a4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=50bab87b-669c-4165-acde-bbd61d16beb7) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:54:04 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:04.901 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 50bab87b-669c-4165-acde-bbd61d16beb7 in datapath 
785d6e65-4b4f-461e-b0b4-cb1d9085a3e7 unbound from our chassis#033[00m Feb 23 04:54:04 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:04.904 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 785d6e65-4b4f-461e-b0b4-cb1d9085a3e7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:54:04 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:04.905 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca7475c-b316-4237-98b1-f28a262a671d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:04 localhost systemd[1]: tmp-crun.Jf6ylA.mount: Deactivated successfully. Feb 23 04:54:04 localhost dnsmasq[306595]: read /var/lib/neutron/dhcp/952a96fb-c5b6-453c-bd85-60bdea92095e/addn_hosts - 0 addresses Feb 23 04:54:04 localhost dnsmasq-dhcp[306595]: read /var/lib/neutron/dhcp/952a96fb-c5b6-453c-bd85-60bdea92095e/host Feb 23 04:54:04 localhost dnsmasq-dhcp[306595]: read /var/lib/neutron/dhcp/952a96fb-c5b6-453c-bd85-60bdea92095e/opts Feb 23 04:54:04 localhost podman[307138]: 2026-02-23 09:54:04.927846366 +0000 UTC m=+0.071268740 container kill 36deb46c20dc3641074b80549426ec3bae8a22f620bc1a459e616df46789ff67 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-952a96fb-c5b6-453c-bd85-60bdea92095e, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.935 280325 DEBUG ovsdbapp.backend.ovs_idl [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 
b5e1135ba2724a69b072bbda0ea8476c - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.935 280325 DEBUG ovsdbapp.backend.ovs_idl [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.935 280325 DEBUG ovsdbapp.backend.ovs_idl [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.935 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.936 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.936 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.937 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.938 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.941 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.952 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.953 280325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.953 280325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 23 04:54:04 localhost nova_compute[280321]: 2026-02-23 09:54:04.954 280325 INFO oslo.privsep.daemon [None 
req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpfplg9j30/privsep.sock']#033[00m Feb 23 04:54:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_09:54:05 Feb 23 04:54:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 04:54:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap Feb 23 04:54:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['images', 'manila_data', 'backups', 'volumes', 'vms', 'manila_metadata', '.mgr'] Feb 23 04:54:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes Feb 23 04:54:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v103: 177 pgs: 177 active+clean; 224 MiB data, 880 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.6 KiB/s wr, 38 op/s Feb 23 04:54:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:54:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:54:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 04:54:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 04:54:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 04:54:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:54:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:54:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:54:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 23 04:54:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:54:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32) Feb 23 04:54:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:54:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:54:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:54:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00754096000837521 of space, bias 1.0, pg target 1.5056783483389167 quantized to 32 (current 32) Feb 23 04:54:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:54:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:54:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:54:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 
23 04:54:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:54:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019465818676716918 quantized to 16 (current 16) Feb 23 04:54:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:54:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:54:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:54:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:54:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:54:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 04:54:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:54:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:54:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:54:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:54:05 localhost ovn_controller[155966]: 2026-02-23T09:54:05Z|00042|binding|INFO|Releasing lport 6a90be9b-0721-4516-9ff9-bf27b6a45c1c from this chassis (sb_readonly=0) Feb 23 04:54:05 localhost kernel: device tap6a90be9b-07 left promiscuous mode Feb 23 04:54:05 localhost ovn_controller[155966]: 2026-02-23T09:54:05Z|00043|binding|INFO|Setting lport 6a90be9b-0721-4516-9ff9-bf27b6a45c1c down in Southbound Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.206 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 
04:54:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:05.218 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-952a96fb-c5b6-453c-bd85-60bdea92095e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-952a96fb-c5b6-453c-bd85-60bdea92095e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '630f2b45468647c9971c461e6fe8aad9', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d266ad9-f71f-4a7f-a7d1-454e3de9edd5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6a90be9b-0721-4516-9ff9-bf27b6a45c1c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:54:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:05.220 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 6a90be9b-0721-4516-9ff9-bf27b6a45c1c in datapath 952a96fb-c5b6-453c-bd85-60bdea92095e unbound from our chassis#033[00m Feb 23 04:54:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:05.223 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 952a96fb-c5b6-453c-bd85-60bdea92095e, tearing the namespace down if needed 
_get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:54:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:05.225 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b3c370-998e-4023-b9c7-8a1a2f6ae69e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.226 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.607 280325 INFO oslo.privsep.daemon [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Spawned new privsep daemon via rootwrap#033[00m Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.500 307164 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.506 307164 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.509 307164 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.510 307164 INFO oslo.privsep.daemon [-] privsep daemon running as pid 307164#033[00m Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.877 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:05 localhost nova_compute[280321]: 
2026-02-23 09:54:05.879 280325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68f77b5f-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.880 280325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68f77b5f-9e, col_values=(('external_ids', {'iface-id': '68f77b5f-9ee1-445a-9bc9-8dae82293c2b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:c0:be', 'vm-uuid': '78070789-b766-4674-b4e1-8040cbf7346b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.883 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.887 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.889 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.891 280325 INFO os_vif [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c0:be,bridge_name='br-int',has_traffic_filtering=True,id=68f77b5f-9ee1-445a-9bc9-8dae82293c2b,network=Network(a5e383fe-b918-4723-9dbc-32201feec87d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap68f77b5f-9e')#033[00m Feb 23 
04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.958 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.958 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.959 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] No VIF found with MAC fa:16:3e:eb:c0:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.959 280325 INFO nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Using config drive#033[00m Feb 23 04:54:05 localhost nova_compute[280321]: 2026-02-23 09:54:05.996 280325 DEBUG nova.storage.rbd_utils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] rbd image 78070789-b766-4674-b4e1-8040cbf7346b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:54:06 localhost nova_compute[280321]: 2026-02-23 09:54:06.361 280325 INFO 
nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Creating config drive at /var/lib/nova/instances/78070789-b766-4674-b4e1-8040cbf7346b/disk.config#033[00m Feb 23 04:54:06 localhost nova_compute[280321]: 2026-02-23 09:54:06.367 280325 DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/78070789-b766-4674-b4e1-8040cbf7346b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1mcbba8p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:06 localhost nova_compute[280321]: 2026-02-23 09:54:06.495 280325 DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/78070789-b766-4674-b4e1-8040cbf7346b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1mcbba8p" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:06 localhost nova_compute[280321]: 2026-02-23 09:54:06.538 280325 DEBUG nova.storage.rbd_utils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] rbd image 78070789-b766-4674-b4e1-8040cbf7346b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:54:06 localhost nova_compute[280321]: 2026-02-23 09:54:06.541 280325 
DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/78070789-b766-4674-b4e1-8040cbf7346b/disk.config 78070789-b766-4674-b4e1-8040cbf7346b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:06 localhost nova_compute[280321]: 2026-02-23 09:54:06.743 280325 DEBUG oslo_concurrency.processutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/78070789-b766-4674-b4e1-8040cbf7346b/disk.config 78070789-b766-4674-b4e1-8040cbf7346b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:06 localhost nova_compute[280321]: 2026-02-23 09:54:06.743 280325 INFO nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Deleting local config drive /var/lib/nova/instances/78070789-b766-4674-b4e1-8040cbf7346b/disk.config because it was imported into RBD.#033[00m Feb 23 04:54:06 localhost systemd[1]: Started libvirt secret daemon. 
Feb 23 04:54:06 localhost kernel: tun: Universal TUN/TAP device driver, 1.6 Feb 23 04:54:06 localhost kernel: device tap68f77b5f-9e entered promiscuous mode Feb 23 04:54:06 localhost NetworkManager[5987]: [1771840446.8437] manager: (tap68f77b5f-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/16) Feb 23 04:54:06 localhost nova_compute[280321]: 2026-02-23 09:54:06.848 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:06 localhost systemd-udevd[307258]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:54:06 localhost ovn_controller[155966]: 2026-02-23T09:54:06Z|00044|binding|INFO|Claiming lport 68f77b5f-9ee1-445a-9bc9-8dae82293c2b for this chassis. Feb 23 04:54:06 localhost ovn_controller[155966]: 2026-02-23T09:54:06Z|00045|binding|INFO|68f77b5f-9ee1-445a-9bc9-8dae82293c2b: Claiming fa:16:3e:eb:c0:be 10.100.0.4 Feb 23 04:54:06 localhost ovn_controller[155966]: 2026-02-23T09:54:06Z|00046|binding|INFO|Claiming lport 1fc7da92-c93a-4191-b374-5aef0705e0ce for this chassis. 
Feb 23 04:54:06 localhost ovn_controller[155966]: 2026-02-23T09:54:06Z|00047|binding|INFO|1fc7da92-c93a-4191-b374-5aef0705e0ce: Claiming fa:16:3e:77:6d:80 19.80.0.16 Feb 23 04:54:06 localhost NetworkManager[5987]: [1771840446.8654] device (tap68f77b5f-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Feb 23 04:54:06 localhost NetworkManager[5987]: [1771840446.8660] device (tap68f77b5f-9e): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Feb 23 04:54:06 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:06.861 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:6d:80 19.80.0.16'], port_security=['fa:16:3e:77:6d:80 19.80.0.16'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['68f77b5f-9ee1-445a-9bc9-8dae82293c2b'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1525156145', 'neutron:cidrs': '19.80.0.16/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4367d4b-271d-4a28-a878-d77074456171', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1525156145', 'neutron:project_id': 'b5e1135ba2724a69b072bbda0ea8476c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5e2da0ff-f592-42de-9188-06e3b0bca61b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=57c5c75f-3246-4a64-87cf-649ab7e0f2d0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=1fc7da92-c93a-4191-b374-5aef0705e0ce) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:54:06 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:06.868 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:c0:be 10.100.0.4'], port_security=['fa:16:3e:eb:c0:be 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1619502580', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '78070789-b766-4674-b4e1-8040cbf7346b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5e383fe-b918-4723-9dbc-32201feec87d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1619502580', 'neutron:project_id': 'b5e1135ba2724a69b072bbda0ea8476c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5e2da0ff-f592-42de-9188-06e3b0bca61b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a4ae72f-8c09-4559-aec5-36314af9e25d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=68f77b5f-9ee1-445a-9bc9-8dae82293c2b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:54:06 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:06.870 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 1fc7da92-c93a-4191-b374-5aef0705e0ce in datapath c4367d4b-271d-4a28-a878-d77074456171 bound to our chassis#033[00m Feb 23 04:54:06 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:06.874 161842 INFO neutron.agent.ovn.metadata.agent [-] 
Provisioning metadata for network c4367d4b-271d-4a28-a878-d77074456171#033[00m Feb 23 04:54:06 localhost nova_compute[280321]: 2026-02-23 09:54:06.883 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:06 localhost ovn_controller[155966]: 2026-02-23T09:54:06Z|00048|binding|INFO|Setting lport 68f77b5f-9ee1-445a-9bc9-8dae82293c2b ovn-installed in OVS Feb 23 04:54:06 localhost ovn_controller[155966]: 2026-02-23T09:54:06Z|00049|binding|INFO|Setting lport 68f77b5f-9ee1-445a-9bc9-8dae82293c2b up in Southbound Feb 23 04:54:06 localhost ovn_controller[155966]: 2026-02-23T09:54:06Z|00050|binding|INFO|Setting lport 1fc7da92-c93a-4191-b374-5aef0705e0ce up in Southbound Feb 23 04:54:06 localhost nova_compute[280321]: 2026-02-23 09:54:06.887 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:06 localhost nova_compute[280321]: 2026-02-23 09:54:06.903 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:06 localhost nova_compute[280321]: 2026-02-23 09:54:06.908 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:06 localhost systemd-machined[205673]: New machine qemu-1-instance-00000008. Feb 23 04:54:06 localhost systemd[1]: Started Virtual Machine qemu-1-instance-00000008. 
Feb 23 04:54:06 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e97 e97: 6 total, 6 up, 6 in Feb 23 04:54:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v105: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 8.5 MiB/s rd, 8.5 MiB/s wr, 221 op/s Feb 23 04:54:07 localhost nova_compute[280321]: 2026-02-23 09:54:07.270 280325 DEBUG nova.virt.driver [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 23 04:54:07 localhost nova_compute[280321]: 2026-02-23 09:54:07.271 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] VM Started (Lifecycle Event)#033[00m Feb 23 04:54:07 localhost nova_compute[280321]: 2026-02-23 09:54:07.292 280325 DEBUG nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:54:07 localhost nova_compute[280321]: 2026-02-23 09:54:07.297 280325 DEBUG nova.virt.driver [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 23 04:54:07 localhost nova_compute[280321]: 2026-02-23 09:54:07.297 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] VM Paused (Lifecycle Event)#033[00m Feb 23 04:54:07 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:07.306 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[20a593fa-24c1-4d12-b2ab-7bf0c8614841]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:07 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:07.307 161842 DEBUG 
neutron.agent.ovn.metadata.agent [-] Creating VETH tapc4367d4b-21 in ovnmeta-c4367d4b-271d-4a28-a878-d77074456171 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Feb 23 04:54:07 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:07.309 306186 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc4367d4b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Feb 23 04:54:07 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:07.309 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[8708ba52-6121-40fd-a36c-53823ebb89dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:07 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:07.311 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[c85a5d0a-3b99-4b8a-bbc3-05f12feb8fa3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:07 localhost nova_compute[280321]: 2026-02-23 09:54:07.329 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:07 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:07.332 161946 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc844f5-7cbd-47b1-8d6b-588468061ead]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:07 localhost nova_compute[280321]: 2026-02-23 09:54:07.335 280325 DEBUG nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:54:07 localhost nova_compute[280321]: 2026-02-23 09:54:07.340 280325 DEBUG nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 
78070789-b766-4674-b4e1-8040cbf7346b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 23 04:54:07 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:07.342 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[5f10ec0a-b3f9-4efe-b99d-0729c03b86b0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:07 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:07.345 161842 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpb8x5xpn_/privsep.sock']#033[00m Feb 23 04:54:07 localhost nova_compute[280321]: 2026-02-23 09:54:07.359 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Feb 23 04:54:07 localhost nova_compute[280321]: 2026-02-23 09:54:07.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:54:07 localhost nova_compute[280321]: 2026-02-23 09:54:07.893 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 23 04:54:07 localhost nova_compute[280321]: 2026-02-23 09:54:07.913 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 23 04:54:07 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:07.984 161842 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Feb 23 04:54:07 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:07.985 161842 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpb8x5xpn_/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Feb 23 04:54:07 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:07.871 307322 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 23 04:54:07 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:07.875 307322 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 23 04:54:07 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:07.877 307322 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Feb 23 04:54:07 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:07.878 307322 INFO oslo.privsep.daemon [-] 
privsep daemon running as pid 307322#033[00m Feb 23 04:54:07 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:07.988 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[8cbb34d8-95cc-4df1-8b29-1611d31f0a97]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:08 localhost nova_compute[280321]: 2026-02-23 09:54:08.204 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:08.430 307322 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:08.430 307322 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:08.430 307322 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:08.602 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) 
old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:54:08 localhost nova_compute[280321]: 2026-02-23 09:54:08.605 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:54:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:54:08 localhost dnsmasq[306595]: exiting on receipt of SIGTERM Feb 23 04:54:08 localhost systemd[1]: libpod-36deb46c20dc3641074b80549426ec3bae8a22f620bc1a459e616df46789ff67.scope: Deactivated successfully. Feb 23 04:54:08 localhost podman[307343]: 2026-02-23 09:54:08.950607746 +0000 UTC m=+0.068303489 container kill 36deb46c20dc3641074b80549426ec3bae8a22f620bc1a459e616df46789ff67 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-952a96fb-c5b6-453c-bd85-60bdea92095e, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:54:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:08.976 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[259b5644-ffa4-4493-b883-ebde58c7b308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.001 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2ba6bd-745c-4807-83b4-a0fe148bdccc]: (4, None) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost NetworkManager[5987]: [1771840449.0018] manager: (tapc4367d4b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/17) Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.022 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[25a3dda9-9861-4e7c-89af-02fe4852e28e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.024 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6f1fe6-0a45-424c-9a2c-5ead56b3117f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost podman[307380]: 2026-02-23 09:54:09.025575487 +0000 UTC m=+0.055235149 container died 36deb46c20dc3641074b80549426ec3bae8a22f620bc1a459e616df46789ff67 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-952a96fb-c5b6-453c-bd85-60bdea92095e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:54:09 localhost systemd[1]: tmp-crun.zN136I.mount: Deactivated successfully. 
Feb 23 04:54:09 localhost podman[307360]: 2026-02-23 09:54:09.034739458 +0000 UTC m=+0.102528446 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
config_id=ceilometer_agent_compute) Feb 23 04:54:09 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapc4367d4b-21: link becomes ready Feb 23 04:54:09 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapc4367d4b-20: link becomes ready Feb 23 04:54:09 localhost NetworkManager[5987]: [1771840449.0476] device (tapc4367d4b-20): carrier: link connected Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.050 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b3645e-f6cc-4c47-a22a-ff7b9a6ef66c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.063 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[dc4b0cbf-9678-4cf5-bc03-ab54d2c38439]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4367d4b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:aa:04:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 
'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1188019, 'reachable_time': 42644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 
'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307438, 'error': None, 'target': 'ovnmeta-c4367d4b-271d-4a28-a878-d77074456171', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost podman[307360]: 2026-02-23 09:54:09.069672025 +0000 UTC m=+0.137461003 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 
'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.077 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[49f6ee2f-396b-47d9-be0f-9cd1452e1f08]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaa:477'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1188019, 'tstamp': 1188019}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307444, 'error': None, 'target': 
'ovnmeta-c4367d4b-271d-4a28-a878-d77074456171', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v106: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 8.5 MiB/s rd, 8.5 MiB/s wr, 205 op/s Feb 23 04:54:09 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.089 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[5b21c735-0a5c-4677-a06a-addd7ca073c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4367d4b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:aa:04:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 
'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1188019, 'reachable_time': 42644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 
'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307447, 'error': None, 'target': 'ovnmeta-c4367d4b-271d-4a28-a878-d77074456171', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost podman[307380]: 2026-02-23 09:54:09.103548371 +0000 UTC m=+0.133208043 container cleanup 36deb46c20dc3641074b80549426ec3bae8a22f620bc1a459e616df46789ff67 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-952a96fb-c5b6-453c-bd85-60bdea92095e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) 
Feb 23 04:54:09 localhost systemd[1]: libpod-conmon-36deb46c20dc3641074b80549426ec3bae8a22f620bc1a459e616df46789ff67.scope: Deactivated successfully. Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.111 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[009057c1-a2e1-4bec-bccf-b19fbdd1fe37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost podman[307358]: 2026-02-23 09:54:09.070113358 +0000 UTC m=+0.136967377 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 23 04:54:09 localhost podman[307358]: 2026-02-23 09:54:09.156694405 +0000 UTC m=+0.223548354 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216) Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.161 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[d1591334-d14d-4c1d-9bbf-45cef1b70177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.163 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4367d4b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.164 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.164 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4367d4b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:54:09 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. 
Feb 23 04:54:09 localhost kernel: device tapc4367d4b-20 entered promiscuous mode Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.170 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4367d4b-20, col_values=(('external_ids', {'iface-id': 'c580c9b8-a35b-42fb-bda8-24401f2a22e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:54:09 localhost nova_compute[280321]: 2026-02-23 09:54:09.175 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:09 localhost ovn_controller[155966]: 2026-02-23T09:54:09Z|00051|binding|INFO|Releasing lport c580c9b8-a35b-42fb-bda8-24401f2a22e1 from this chassis (sb_readonly=0) Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.184 161842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c4367d4b-271d-4a28-a878-d77074456171.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c4367d4b-271d-4a28-a878-d77074456171.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Feb 23 04:54:09 localhost nova_compute[280321]: 2026-02-23 09:54:09.184 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.186 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[070e2f65-ab6b-491d-820f-68bad3c872a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.187 161842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: global Feb 23 
04:54:09 localhost ovn_metadata_agent[161837]: log /dev/log local0 debug Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: log-tag haproxy-metadata-proxy-c4367d4b-271d-4a28-a878-d77074456171 Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: user root Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: group root Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: maxconn 1024 Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: pidfile /var/lib/neutron/external/pids/c4367d4b-271d-4a28-a878-d77074456171.pid.haproxy Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: daemon Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: defaults Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: log global Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: mode http Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: option httplog Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: option dontlognull Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: option http-server-close Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: option forwardfor Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: retries 3 Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: timeout http-request 30s Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: timeout connect 30s Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: timeout client 32s Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: timeout server 32s Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: timeout http-keep-alive 30s Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: listen listener Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: bind 169.254.169.254:80 Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: server metadata /var/lib/neutron/metadata_proxy Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 
http-request add-header X-OVN-Network-ID c4367d4b-271d-4a28-a878-d77074456171 Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.188 161842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c4367d4b-271d-4a28-a878-d77074456171', 'env', 'PROCESS_TAG=haproxy-c4367d4b-271d-4a28-a878-d77074456171', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c4367d4b-271d-4a28-a878-d77074456171.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Feb 23 04:54:09 localhost podman[307395]: 2026-02-23 09:54:09.210708086 +0000 UTC m=+0.230739664 container remove 36deb46c20dc3641074b80549426ec3bae8a22f620bc1a459e616df46789ff67 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-952a96fb-c5b6-453c-bd85-60bdea92095e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 23 04:54:09 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:09.247 263679 INFO neutron.agent.dhcp.agent [None req-0ccefb15-6611-4d47-8505-b6cfaea5ac02 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:54:09 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:09.248 263679 INFO neutron.agent.dhcp.agent [None req-0ccefb15-6611-4d47-8505-b6cfaea5ac02 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:54:09 localhost dnsmasq[306456]: exiting on receipt of SIGTERM Feb 23 04:54:09 
localhost podman[307458]: 2026-02-23 09:54:09.261979364 +0000 UTC m=+0.111899532 container kill 852f9290c23be4322caf24ad4db6ac5e8a5aea9ff493ef0f9533726763e14aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-785d6e65-4b4f-461e-b0b4-cb1d9085a3e7, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:54:09 localhost systemd[1]: libpod-852f9290c23be4322caf24ad4db6ac5e8a5aea9ff493ef0f9533726763e14aa1.scope: Deactivated successfully. Feb 23 04:54:09 localhost podman[307487]: 2026-02-23 09:54:09.332550421 +0000 UTC m=+0.047230685 container died 852f9290c23be4322caf24ad4db6ac5e8a5aea9ff493ef0f9533726763e14aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-785d6e65-4b4f-461e-b0b4-cb1d9085a3e7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:54:09 localhost podman[307487]: 2026-02-23 09:54:09.372281635 +0000 UTC m=+0.086961869 container remove 852f9290c23be4322caf24ad4db6ac5e8a5aea9ff493ef0f9533726763e14aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-785d6e65-4b4f-461e-b0b4-cb1d9085a3e7, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 23 04:54:09 localhost systemd[1]: libpod-conmon-852f9290c23be4322caf24ad4db6ac5e8a5aea9ff493ef0f9533726763e14aa1.scope: Deactivated successfully. Feb 23 04:54:09 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:09.395 263679 INFO neutron.agent.dhcp.agent [None req-bb35adaa-9fad-4b48-9ab3-8a53a41fa4b5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:54:09 localhost podman[307529]: Feb 23 04:54:09 localhost podman[307529]: 2026-02-23 09:54:09.609041563 +0000 UTC m=+0.089377613 container create d656be42e546ebf66d760cc90d330771df425829542e6c8f624f1da50e581dcc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4367d4b-271d-4a28-a878-d77074456171, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0) Feb 23 04:54:09 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:09.628 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:54:09 localhost systemd[1]: Started libpod-conmon-d656be42e546ebf66d760cc90d330771df425829542e6c8f624f1da50e581dcc.scope. Feb 23 04:54:09 localhost systemd[1]: Started libcrun container. 
Feb 23 04:54:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb088465ad52a4869c01c87802fd6f1360483a6eac213b24099c8c2080c18650/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:54:09 localhost podman[307529]: 2026-02-23 09:54:09.563919404 +0000 UTC m=+0.044255484 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 23 04:54:09 localhost podman[307529]: 2026-02-23 09:54:09.670173481 +0000 UTC m=+0.150509531 container init d656be42e546ebf66d760cc90d330771df425829542e6c8f624f1da50e581dcc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4367d4b-271d-4a28-a878-d77074456171, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:54:09 localhost podman[307529]: 2026-02-23 09:54:09.678989951 +0000 UTC m=+0.159325991 container start d656be42e546ebf66d760cc90d330771df425829542e6c8f624f1da50e581dcc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4367d4b-271d-4a28-a878-d77074456171, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:54:09 localhost neutron-haproxy-ovnmeta-c4367d4b-271d-4a28-a878-d77074456171[307543]: [NOTICE] (307547) : New worker (307549) forked Feb 23 04:54:09 localhost 
neutron-haproxy-ovnmeta-c4367d4b-271d-4a28-a878-d77074456171[307543]: [NOTICE] (307547) : Loading success. Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.736 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 68f77b5f-9ee1-445a-9bc9-8dae82293c2b in datapath a5e383fe-b918-4723-9dbc-32201feec87d unbound from our chassis#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.740 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Port a01f5328-bafd-4c8a-823b-e9e79e3b2905 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.740 161842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a5e383fe-b918-4723-9dbc-32201feec87d#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.749 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[78eea00c-8cc1-45ee-9a96-e4af2f85b5d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.750 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa5e383fe-b1 in ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.752 306186 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa5e383fe-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.752 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3a5481-69c6-4edc-91c1-3dc89f456cc7]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.754 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ac2d44-d0ad-4adc-a5be-a78b5350a48f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.773 161946 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe0443a-febc-4721-a384-94c531a3cffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.785 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[711478bc-26a2-4c00-8e37-7f3b265ddbcc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.809 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[e99ef6ef-1b1c-428e-af9a-ebeb9e110400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.815 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc6ad0e-4312-4c44-b2d8-140688f12965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost NetworkManager[5987]: [1771840449.8163] manager: (tapa5e383fe-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/18) Feb 23 04:54:09 localhost systemd-udevd[307420]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.845 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[fcfb25c7-b795-474d-af1f-8429727bd983]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.849 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[29948d4e-713e-496e-83e8-8cfb2854638a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapa5e383fe-b0: link becomes ready Feb 23 04:54:09 localhost NetworkManager[5987]: [1771840449.8689] device (tapa5e383fe-b0): carrier: link connected Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.872 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[769a912c-e54e-4ad2-8804-d71888326076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.888 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[cbcab42c-65a0-4dfe-a96c-6e98b9674fca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa5e383fe-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:c3:e1:05'], ['IFLA_BROADCAST', 
'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1188101, 'reachable_time': 16121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 
1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307569, 'error': None, 'target': 'ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.902 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[be4112c9-9880-466b-a344-6d6453af0bf1]: (4, ({'family': 10, 'prefixlen': 64, 
'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:e105'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1188101, 'tstamp': 1188101}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307570, 'error': None, 'target': 'ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.919 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[f626f80d-ecdf-4057-9ec2-a79186473c65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa5e383fe-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:c3:e1:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 
'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1188101, 'reachable_time': 16121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 
'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307571, 'error': None, 'target': 'ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:09 localhost systemd[1]: var-lib-containers-storage-overlay-5d3fb9b4239bfb80e9845917fd0295eff0f42db721a21831fdd58df164bffc76-merged.mount: Deactivated successfully. Feb 23 04:54:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36deb46c20dc3641074b80549426ec3bae8a22f620bc1a459e616df46789ff67-userdata-shm.mount: Deactivated successfully. Feb 23 04:54:09 localhost systemd[1]: run-netns-qdhcp\x2d952a96fb\x2dc5b6\x2d453c\x2dbd85\x2d60bdea92095e.mount: Deactivated successfully. 
Feb 23 04:54:09 localhost systemd[1]: var-lib-containers-storage-overlay-fe4d30d92d3d8518ab63edee4c6d36c83e25c3f6f2c9b5c4986256c0cc0b5ab9-merged.mount: Deactivated successfully. Feb 23 04:54:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-852f9290c23be4322caf24ad4db6ac5e8a5aea9ff493ef0f9533726763e14aa1-userdata-shm.mount: Deactivated successfully. Feb 23 04:54:09 localhost systemd[1]: run-netns-qdhcp\x2d785d6e65\x2d4b4f\x2d461e\x2db0b4\x2dcb1d9085a3e7.mount: Deactivated successfully. Feb 23 04:54:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:09.949 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[17710e6b-3001-404b-84d4-0a09f83ffc84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:10.008 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[9108022c-384e-441d-8668-dcc79a5e73e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:10.009 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5e383fe-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:10.010 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:10.011 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5e383fe-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:54:10 localhost 
nova_compute[280321]: 2026-02-23 09:54:10.053 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:10 localhost kernel: device tapa5e383fe-b0 entered promiscuous mode Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:10.057 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa5e383fe-b0, col_values=(('external_ids', {'iface-id': 'fbd3fa71-c070-4434-b248-fbe0a6b27a91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:54:10 localhost nova_compute[280321]: 2026-02-23 09:54:10.059 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:10 localhost ovn_controller[155966]: 2026-02-23T09:54:10Z|00052|binding|INFO|Releasing lport fbd3fa71-c070-4434-b248-fbe0a6b27a91 from this chassis (sb_readonly=0) Feb 23 04:54:10 localhost nova_compute[280321]: 2026-02-23 09:54:10.071 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:10.073 161842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a5e383fe-b918-4723-9dbc-32201feec87d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a5e383fe-b918-4723-9dbc-32201feec87d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:10.074 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[970f85c2-c57c-4941-a88a-fdfafb9b360b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:10 localhost 
ovn_metadata_agent[161837]: 2026-02-23 09:54:10.075 161842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: global Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: log /dev/log local0 debug Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: log-tag haproxy-metadata-proxy-a5e383fe-b918-4723-9dbc-32201feec87d Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: user root Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: group root Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: maxconn 1024 Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: pidfile /var/lib/neutron/external/pids/a5e383fe-b918-4723-9dbc-32201feec87d.pid.haproxy Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: daemon Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: defaults Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: log global Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: mode http Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: option httplog Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: option dontlognull Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: option http-server-close Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: option forwardfor Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: retries 3 Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: timeout http-request 30s Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: timeout connect 30s Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: timeout client 32s Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: timeout server 32s Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: timeout http-keep-alive 30s Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: listen listener Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: bind 
169.254.169.254:80 Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: server metadata /var/lib/neutron/metadata_proxy Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: http-request add-header X-OVN-Network-ID a5e383fe-b918-4723-9dbc-32201feec87d Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:10.076 161842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d', 'env', 'PROCESS_TAG=haproxy-a5e383fe-b918-4723-9dbc-32201feec87d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a5e383fe-b918-4723-9dbc-32201feec87d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Feb 23 04:54:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:54:10 localhost podman[307603]: Feb 23 04:54:10 localhost podman[307603]: 2026-02-23 09:54:10.494079857 +0000 UTC m=+0.090147417 container create 253d45f55ebb823718e49920900746c7c36586af56f508cabbce84ad7e36c1fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:54:10 localhost systemd[1]: Started libpod-conmon-253d45f55ebb823718e49920900746c7c36586af56f508cabbce84ad7e36c1fb.scope. 
Feb 23 04:54:10 localhost systemd[1]: tmp-crun.jfJeRD.mount: Deactivated successfully. Feb 23 04:54:10 localhost podman[307603]: 2026-02-23 09:54:10.45228519 +0000 UTC m=+0.048352780 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 23 04:54:10 localhost systemd[1]: Started libcrun container. Feb 23 04:54:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1cb21f60d2dd5a91d409a3128a46335a7f6e1e7e74bf01eaa80d31d70faab4b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:54:10 localhost podman[307603]: 2026-02-23 09:54:10.578358424 +0000 UTC m=+0.174425994 container init 253d45f55ebb823718e49920900746c7c36586af56f508cabbce84ad7e36c1fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_managed=true) Feb 23 04:54:10 localhost podman[307603]: 2026-02-23 09:54:10.587228375 +0000 UTC m=+0.183295935 container start 253d45f55ebb823718e49920900746c7c36586af56f508cabbce84ad7e36c1fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 04:54:10 localhost 
neutron-haproxy-ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d[307617]: [NOTICE] (307621) : New worker (307623) forked Feb 23 04:54:10 localhost neutron-haproxy-ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d[307617]: [NOTICE] (307621) : Loading success. Feb 23 04:54:10 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:10.651 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 04:54:10 localhost nova_compute[280321]: 2026-02-23 09:54:10.883 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v107: 177 pgs: 177 active+clean; 306 MiB data, 989 MiB used, 41 GiB / 42 GiB avail; 8.1 MiB/s rd, 7.3 MiB/s wr, 202 op/s Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.551 280325 DEBUG nova.compute.manager [req-3cf38f4f-5b5b-442e-9629-0aeee3a620a0 req-a2bcbce7-1dcd-473e-8b81-74ca1635d487 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Received event network-vif-plugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.551 280325 DEBUG oslo_concurrency.lockutils [req-3cf38f4f-5b5b-442e-9629-0aeee3a620a0 req-a2bcbce7-1dcd-473e-8b81-74ca1635d487 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquiring lock "78070789-b766-4674-b4e1-8040cbf7346b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.553 280325 DEBUG 
oslo_concurrency.lockutils [req-3cf38f4f-5b5b-442e-9629-0aeee3a620a0 req-a2bcbce7-1dcd-473e-8b81-74ca1635d487 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.553 280325 DEBUG oslo_concurrency.lockutils [req-3cf38f4f-5b5b-442e-9629-0aeee3a620a0 req-a2bcbce7-1dcd-473e-8b81-74ca1635d487 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.554 280325 DEBUG nova.compute.manager [req-3cf38f4f-5b5b-442e-9629-0aeee3a620a0 req-a2bcbce7-1dcd-473e-8b81-74ca1635d487 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Processing event network-vif-plugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.555 280325 DEBUG nova.compute.manager [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.591 280325 DEBUG nova.virt.driver [None 
req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.592 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] VM Resumed (Lifecycle Event)#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.594 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.598 280325 INFO nova.virt.libvirt.driver [-] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Instance spawned successfully.#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.598 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.622 280325 DEBUG nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.633 280325 DEBUG 
nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.638 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.639 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.639 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.640 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 
78070789-b766-4674-b4e1-8040cbf7346b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.641 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.642 280325 DEBUG nova.virt.libvirt.driver [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.652 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.723 280325 INFO nova.compute.manager [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Took 12.83 seconds to spawn the instance on the hypervisor.#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.725 280325 DEBUG nova.compute.manager [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.807 280325 INFO nova.compute.manager [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Took 13.78 seconds to build instance.#033[00m Feb 23 04:54:11 localhost nova_compute[280321]: 2026-02-23 09:54:11.832 280325 DEBUG oslo_concurrency.lockutils [None req-5ecbccc2-4c13-4b82-842b-2f4f6207e920 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 13.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:12 localhost nova_compute[280321]: 2026-02-23 09:54:12.330 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:12 localhost podman[241086]: time="2026-02-23T09:54:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:54:12 localhost podman[241086]: @ 
- - [23/Feb/2026:09:54:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158268 "" "Go-http-client/1.1" Feb 23 04:54:12 localhost podman[241086]: @ - - [23/Feb/2026:09:54:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19228 "" "Go-http-client/1.1" Feb 23 04:54:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v108: 177 pgs: 177 active+clean; 271 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 9.1 MiB/s rd, 6.8 MiB/s wr, 260 op/s Feb 23 04:54:13 localhost nova_compute[280321]: 2026-02-23 09:54:13.613 280325 DEBUG nova.compute.manager [req-46e4d322-ca55-4f7f-85c2-99c69ef2ab4d req-20df2c0c-dbfe-44eb-ab30-89aafda61478 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Received event network-vif-plugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:54:13 localhost nova_compute[280321]: 2026-02-23 09:54:13.614 280325 DEBUG oslo_concurrency.lockutils [req-46e4d322-ca55-4f7f-85c2-99c69ef2ab4d req-20df2c0c-dbfe-44eb-ab30-89aafda61478 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquiring lock "78070789-b766-4674-b4e1-8040cbf7346b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:13 localhost nova_compute[280321]: 2026-02-23 09:54:13.614 280325 DEBUG oslo_concurrency.lockutils [req-46e4d322-ca55-4f7f-85c2-99c69ef2ab4d req-20df2c0c-dbfe-44eb-ab30-89aafda61478 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s 
inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:13 localhost nova_compute[280321]: 2026-02-23 09:54:13.615 280325 DEBUG oslo_concurrency.lockutils [req-46e4d322-ca55-4f7f-85c2-99c69ef2ab4d req-20df2c0c-dbfe-44eb-ab30-89aafda61478 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:13 localhost nova_compute[280321]: 2026-02-23 09:54:13.615 280325 DEBUG nova.compute.manager [req-46e4d322-ca55-4f7f-85c2-99c69ef2ab4d req-20df2c0c-dbfe-44eb-ab30-89aafda61478 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] No waiting events found dispatching network-vif-plugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 23 04:54:13 localhost nova_compute[280321]: 2026-02-23 09:54:13.615 280325 WARNING nova.compute.manager [req-46e4d322-ca55-4f7f-85c2-99c69ef2ab4d req-20df2c0c-dbfe-44eb-ab30-89aafda61478 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Received unexpected event network-vif-plugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b for instance with vm_state active and task_state None.#033[00m Feb 23 04:54:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:54:13 localhost systemd[1]: tmp-crun.1cm725.mount: Deactivated successfully. 
Feb 23 04:54:14 localhost podman[307632]: 2026-02-23 09:54:14.004165906 +0000 UTC m=+0.081426330 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:54:14 localhost podman[307632]: 2026-02-23 09:54:14.013875473 +0000 UTC m=+0.091135917 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:54:14 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:54:14 localhost nova_compute[280321]: 2026-02-23 09:54:14.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:54:14 localhost nova_compute[280321]: 2026-02-23 09:54:14.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 23 04:54:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v109: 177 pgs: 177 active+clean; 271 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 9.1 MiB/s rd, 6.8 MiB/s wr, 260 op/s Feb 23 04:54:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:54:15 localhost nova_compute[280321]: 2026-02-23 09:54:15.621 280325 DEBUG nova.virt.libvirt.driver [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Check if temp file /var/lib/nova/instances/tmpf2pbtgz8 exists to indicate shared storage is being used for migration. Exists? 
False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m Feb 23 04:54:15 localhost nova_compute[280321]: 2026-02-23 09:54:15.623 280325 DEBUG nova.compute.manager [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] source check data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpf2pbtgz8',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='78070789-b766-4674-b4e1-8040cbf7346b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m Feb 23 04:54:15 localhost nova_compute[280321]: 2026-02-23 09:54:15.917 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:16 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:54:16 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:54:16 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 04:54:16 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' 
entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:54:16 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 04:54:16 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev 6737fa47-363e-4ad2-9226-489063b9e533 (Updating node-proxy deployment (+3 -> 3)) Feb 23 04:54:16 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev 6737fa47-363e-4ad2-9226-489063b9e533 (Updating node-proxy deployment (+3 -> 3)) Feb 23 04:54:16 localhost ceph-mgr[285904]: [progress INFO root] Completed event 6737fa47-363e-4ad2-9226-489063b9e533 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 23 04:54:16 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 04:54:16 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 04:54:16 localhost nova_compute[280321]: 2026-02-23 09:54:16.680 280325 DEBUG oslo_concurrency.lockutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:54:16 localhost nova_compute[280321]: 2026-02-23 09:54:16.681 280325 DEBUG oslo_concurrency.lockutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:54:16 localhost nova_compute[280321]: 2026-02-23 09:54:16.688 280325 INFO 
nova.compute.rpcapi [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m Feb 23 04:54:16 localhost nova_compute[280321]: 2026-02-23 09:54:16.689 280325 DEBUG oslo_concurrency.lockutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:54:16 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e98 e98: 6 total, 6 up, 6 in Feb 23 04:54:16 localhost nova_compute[280321]: 2026-02-23 09:54:16.907 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:54:16 localhost nova_compute[280321]: 2026-02-23 09:54:16.927 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:16 localhost nova_compute[280321]: 2026-02-23 09:54:16.928 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:16 localhost nova_compute[280321]: 2026-02-23 09:54:16.929 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:16 localhost nova_compute[280321]: 2026-02-23 09:54:16.929 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:54:16 localhost nova_compute[280321]: 2026-02-23 09:54:16.930 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v111: 177 pgs: 177 active+clean; 271 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 5.1 MiB/s rd, 29 KiB/s wr, 237 op/s Feb 23 04:54:17 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:54:17 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:54:17 localhost nova_compute[280321]: 2026-02-23 09:54:17.368 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:17 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:54:17 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2975642150' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:54:17 localhost nova_compute[280321]: 2026-02-23 09:54:17.430 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:17 localhost nova_compute[280321]: 2026-02-23 09:54:17.508 280325 DEBUG nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Feb 23 04:54:17 localhost nova_compute[280321]: 2026-02-23 09:54:17.509 280325 DEBUG nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Feb 23 04:54:17 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:17.578 263679 INFO neutron.agent.linux.ip_lib [None req-81fe7c54-ba66-43e6-9e17-028e11fcf7f9 - - - - - -] Device tap35b7f771-3e cannot be used as it has no MAC address#033[00m Feb 23 04:54:17 localhost nova_compute[280321]: 2026-02-23 09:54:17.611 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:17 localhost kernel: device tap35b7f771-3e entered promiscuous mode Feb 23 04:54:17 localhost NetworkManager[5987]: [1771840457.6202] manager: (tap35b7f771-3e): new Generic device (/org/freedesktop/NetworkManager/Devices/19) Feb 23 04:54:17 localhost ovn_controller[155966]: 2026-02-23T09:54:17Z|00053|binding|INFO|Claiming lport 35b7f771-3e84-413e-826f-9e48732fdd8c for this chassis. 
Feb 23 04:54:17 localhost ovn_controller[155966]: 2026-02-23T09:54:17Z|00054|binding|INFO|35b7f771-3e84-413e-826f-9e48732fdd8c: Claiming unknown Feb 23 04:54:17 localhost nova_compute[280321]: 2026-02-23 09:54:17.621 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:17 localhost systemd-udevd[307773]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:54:17 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:17.631 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-6bb5e4e9-2e98-4062-b87c-05bbb2af6730', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6bb5e4e9-2e98-4062-b87c-05bbb2af6730', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd3f218228784e2199b649265db8d96a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77141465-eb12-4215-bd12-e05b092454c8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=35b7f771-3e84-413e-826f-9e48732fdd8c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:54:17 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:17.632 161842 INFO 
neutron.agent.ovn.metadata.agent [-] Port 35b7f771-3e84-413e-826f-9e48732fdd8c in datapath 6bb5e4e9-2e98-4062-b87c-05bbb2af6730 bound to our chassis#033[00m Feb 23 04:54:17 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:17.634 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Port c37d2d1b-3fbb-4d64-8577-29845b6ab4d4 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 04:54:17 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:17.635 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6bb5e4e9-2e98-4062-b87c-05bbb2af6730, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:54:17 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:17.635 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[4a8ec25b-01ab-4f4f-bdb3-4610cd5868ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:17 localhost journal[229268]: ethtool ioctl error on tap35b7f771-3e: No such device Feb 23 04:54:17 localhost ovn_controller[155966]: 2026-02-23T09:54:17Z|00055|binding|INFO|Setting lport 35b7f771-3e84-413e-826f-9e48732fdd8c ovn-installed in OVS Feb 23 04:54:17 localhost ovn_controller[155966]: 2026-02-23T09:54:17Z|00056|binding|INFO|Setting lport 35b7f771-3e84-413e-826f-9e48732fdd8c up in Southbound Feb 23 04:54:17 localhost nova_compute[280321]: 2026-02-23 09:54:17.658 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:17 localhost journal[229268]: ethtool ioctl error on tap35b7f771-3e: No such device Feb 23 04:54:17 localhost journal[229268]: ethtool ioctl error on tap35b7f771-3e: No such device Feb 23 04:54:17 localhost journal[229268]: ethtool ioctl error on 
tap35b7f771-3e: No such device Feb 23 04:54:17 localhost journal[229268]: ethtool ioctl error on tap35b7f771-3e: No such device Feb 23 04:54:17 localhost journal[229268]: ethtool ioctl error on tap35b7f771-3e: No such device Feb 23 04:54:17 localhost journal[229268]: ethtool ioctl error on tap35b7f771-3e: No such device Feb 23 04:54:17 localhost journal[229268]: ethtool ioctl error on tap35b7f771-3e: No such device Feb 23 04:54:17 localhost nova_compute[280321]: 2026-02-23 09:54:17.716 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:17 localhost nova_compute[280321]: 2026-02-23 09:54:17.780 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:54:17 localhost nova_compute[280321]: 2026-02-23 09:54:17.788 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=11602MB free_disk=41.637474060058594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:54:17 localhost nova_compute[280321]: 2026-02-23 09:54:17.788 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:17 localhost nova_compute[280321]: 2026-02-23 09:54:17.789 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m 
Feb 23 04:54:17 localhost nova_compute[280321]: 2026-02-23 09:54:17.843 280325 INFO nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Updating resource usage from migration 6319222c-9712-437a-a9af-97cacce27d19#033[00m Feb 23 04:54:18 localhost nova_compute[280321]: 2026-02-23 09:54:18.034 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Migration 6319222c-9712-437a-a9af-97cacce27d19 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m Feb 23 04:54:18 localhost nova_compute[280321]: 2026-02-23 09:54:18.036 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:54:18 localhost nova_compute[280321]: 2026-02-23 09:54:18.036 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:54:18 localhost nova_compute[280321]: 2026-02-23 09:54:18.123 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Refreshing inventories for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 23 04:54:18 localhost nova_compute[280321]: 2026-02-23 09:54:18.145 280325 DEBUG 
nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Updating ProviderTree inventory for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 23 04:54:18 localhost nova_compute[280321]: 2026-02-23 09:54:18.146 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Updating inventory in ProviderTree for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 23 04:54:18 localhost nova_compute[280321]: 2026-02-23 09:54:18.244 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Refreshing aggregate associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 23 04:54:18 localhost nova_compute[280321]: 2026-02-23 09:54:18.266 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Refreshing trait associations for resource provider 
9df77b74-d7d6-46a8-93cb-cadec85557a4, traits: HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SHA,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NODE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_ABM,HW_CPU_X86_AESNI,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 23 04:54:18 localhost nova_compute[280321]: 2026-02-23 09:54:18.320 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:18 localhost podman[307846]: Feb 23 04:54:18 localhost 
podman[307846]: 2026-02-23 09:54:18.534169812 +0000 UTC m=+0.095077648 container create 8996922b5f905a9048060caaa773e45aec7c5d974d69f4ba2b050538be15ed0e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6bb5e4e9-2e98-4062-b87c-05bbb2af6730, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 23 04:54:18 localhost systemd[1]: Started libpod-conmon-8996922b5f905a9048060caaa773e45aec7c5d974d69f4ba2b050538be15ed0e.scope. Feb 23 04:54:18 localhost systemd[1]: Started libcrun container. Feb 23 04:54:18 localhost podman[307846]: 2026-02-23 09:54:18.492743615 +0000 UTC m=+0.053651491 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:54:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2e9f0c0c85db93f88b73d26579751cf33e73d7ddc2a7c1fc9d1a83977015e9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:54:18 localhost podman[307846]: 2026-02-23 09:54:18.602456879 +0000 UTC m=+0.163364725 container init 8996922b5f905a9048060caaa773e45aec7c5d974d69f4ba2b050538be15ed0e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6bb5e4e9-2e98-4062-b87c-05bbb2af6730, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260216) Feb 23 04:54:18 localhost podman[307846]: 2026-02-23 09:54:18.612674141 +0000 
UTC m=+0.173581977 container start 8996922b5f905a9048060caaa773e45aec7c5d974d69f4ba2b050538be15ed0e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6bb5e4e9-2e98-4062-b87c-05bbb2af6730, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, tcib_managed=true, io.buildah.version=1.43.0) Feb 23 04:54:18 localhost dnsmasq[307882]: started, version 2.85 cachesize 150 Feb 23 04:54:18 localhost dnsmasq[307882]: DNS service limited to local subnets Feb 23 04:54:18 localhost dnsmasq[307882]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:54:18 localhost dnsmasq[307882]: warning: no upstream servers configured Feb 23 04:54:18 localhost dnsmasq-dhcp[307882]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:54:18 localhost dnsmasq[307882]: read /var/lib/neutron/dhcp/6bb5e4e9-2e98-4062-b87c-05bbb2af6730/addn_hosts - 0 addresses Feb 23 04:54:18 localhost dnsmasq-dhcp[307882]: read /var/lib/neutron/dhcp/6bb5e4e9-2e98-4062-b87c-05bbb2af6730/host Feb 23 04:54:18 localhost dnsmasq-dhcp[307882]: read /var/lib/neutron/dhcp/6bb5e4e9-2e98-4062-b87c-05bbb2af6730/opts Feb 23 04:54:18 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:18.748 263679 INFO neutron.agent.dhcp.agent [None req-3ace3aab-84b6-4f7b-9cd2-c70da54d50b6 - - - - - -] DHCP configuration for ports {'301ea07b-19f3-4e92-823d-709d4e93fb75'} is completed#033[00m Feb 23 04:54:18 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:54:18 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1652408262' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:54:18 localhost nova_compute[280321]: 2026-02-23 09:54:18.787 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:18 localhost nova_compute[280321]: 2026-02-23 09:54:18.795 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Updating inventory in ProviderTree for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 23 04:54:18 localhost nova_compute[280321]: 2026-02-23 09:54:18.873 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Updated inventory for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 with generation 5 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m Feb 23 04:54:18 localhost nova_compute[280321]: 2026-02-23 
09:54:18.877 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Updating resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 generation from 5 to 6 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Feb 23 04:54:18 localhost nova_compute[280321]: 2026-02-23 09:54:18.878 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Updating inventory in ProviderTree for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 23 04:54:18 localhost nova_compute[280321]: 2026-02-23 09:54:18.908 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:54:18 localhost nova_compute[280321]: 2026-02-23 09:54:18.909 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v112: 177 pgs: 177 active+clean; 271 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 5.1 MiB/s rd, 29 KiB/s wr, 237 op/s Feb 23 
04:54:19 localhost nova_compute[280321]: 2026-02-23 09:54:19.489 280325 DEBUG nova.compute.manager [req-461f6d43-1051-438d-bbad-f7a27be7d5e1 req-c68858d3-cf38-4cd8-b8f3-f4442ba6862f 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Received event network-vif-unplugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:54:19 localhost nova_compute[280321]: 2026-02-23 09:54:19.490 280325 DEBUG oslo_concurrency.lockutils [req-461f6d43-1051-438d-bbad-f7a27be7d5e1 req-c68858d3-cf38-4cd8-b8f3-f4442ba6862f 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquiring lock "78070789-b766-4674-b4e1-8040cbf7346b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:19 localhost nova_compute[280321]: 2026-02-23 09:54:19.491 280325 DEBUG oslo_concurrency.lockutils [req-461f6d43-1051-438d-bbad-f7a27be7d5e1 req-c68858d3-cf38-4cd8-b8f3-f4442ba6862f 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:19 localhost nova_compute[280321]: 2026-02-23 09:54:19.492 280325 DEBUG oslo_concurrency.lockutils [req-461f6d43-1051-438d-bbad-f7a27be7d5e1 req-c68858d3-cf38-4cd8-b8f3-f4442ba6862f 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:19 localhost nova_compute[280321]: 2026-02-23 09:54:19.492 280325 DEBUG nova.compute.manager [req-461f6d43-1051-438d-bbad-f7a27be7d5e1 req-c68858d3-cf38-4cd8-b8f3-f4442ba6862f 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] No waiting events found dispatching network-vif-unplugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 23 04:54:19 localhost nova_compute[280321]: 2026-02-23 09:54:19.493 280325 DEBUG nova.compute.manager [req-461f6d43-1051-438d-bbad-f7a27be7d5e1 req-c68858d3-cf38-4cd8-b8f3-f4442ba6862f 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Received event network-vif-unplugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b for instance with task_state migrating. 
_process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Feb 23 04:54:19 localhost nova_compute[280321]: 2026-02-23 09:54:19.894 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:54:19 localhost nova_compute[280321]: 2026-02-23 09:54:19.894 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:54:19 localhost nova_compute[280321]: 2026-02-23 09:54:19.895 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:54:19 localhost nova_compute[280321]: 2026-02-23 09:54:19.917 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "refresh_cache-78070789-b766-4674-b4e1-8040cbf7346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:54:19 localhost nova_compute[280321]: 2026-02-23 09:54:19.918 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquired lock "refresh_cache-78070789-b766-4674-b4e1-8040cbf7346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:54:19 localhost nova_compute[280321]: 2026-02-23 09:54:19.918 280325 DEBUG nova.network.neutron [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Forcefully refreshing network info cache for instance _get_instance_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Feb 23 04:54:19 localhost nova_compute[280321]: 2026-02-23 09:54:19.918 280325 DEBUG nova.objects.instance [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 78070789-b766-4674-b4e1-8040cbf7346b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 04:54:20 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 04:54:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 04:54:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:54:20 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:20.654 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:54:20 localhost nova_compute[280321]: 2026-02-23 09:54:20.959 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v113: 177 pgs: 177 active+clean; 273 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 4.5 MiB/s rd, 20 KiB/s wr, 202 op/s Feb 23 04:54:21 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:21.250 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, 
binding:vnic_type=normal, created_at=2026-02-23T09:54:20Z, description=, device_id=94d0ce77-2b53-42fd-afe5-4148bd1ff8e1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cf6be7d3-5f8e-4782-af6c-d99500219c9f, ip_allocation=immediate, mac_address=fa:16:3e:a8:58:2e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:54:14Z, description=, dns_domain=, id=6bb5e4e9-2e98-4062-b87c-05bbb2af6730, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsNegativeTestJSON-802018938-network, port_security_enabled=True, project_id=cd3f218228784e2199b649265db8d96a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=27027, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=576, status=ACTIVE, subnets=['05ef1fee-2688-490a-8079-a173866318b4'], tags=[], tenant_id=cd3f218228784e2199b649265db8d96a, updated_at=2026-02-23T09:54:15Z, vlan_transparent=None, network_id=6bb5e4e9-2e98-4062-b87c-05bbb2af6730, port_security_enabled=False, project_id=cd3f218228784e2199b649265db8d96a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=613, status=DOWN, tags=[], tenant_id=cd3f218228784e2199b649265db8d96a, updated_at=2026-02-23T09:54:20Z on network 6bb5e4e9-2e98-4062-b87c-05bbb2af6730#033[00m Feb 23 04:54:21 localhost snmpd[68131]: empty variable list in _query Feb 23 04:54:21 localhost snmpd[68131]: empty variable list in _query Feb 23 04:54:21 localhost snmpd[68131]: empty variable list in _query Feb 23 04:54:21 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:54:21 localhost dnsmasq[307882]: read /var/lib/neutron/dhcp/6bb5e4e9-2e98-4062-b87c-05bbb2af6730/addn_hosts - 1 addresses Feb 23 04:54:21 localhost podman[307902]: 
2026-02-23 09:54:21.464536868 +0000 UTC m=+0.057848709 container kill 8996922b5f905a9048060caaa773e45aec7c5d974d69f4ba2b050538be15ed0e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6bb5e4e9-2e98-4062-b87c-05bbb2af6730, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS) Feb 23 04:54:21 localhost dnsmasq-dhcp[307882]: read /var/lib/neutron/dhcp/6bb5e4e9-2e98-4062-b87c-05bbb2af6730/host Feb 23 04:54:21 localhost dnsmasq-dhcp[307882]: read /var/lib/neutron/dhcp/6bb5e4e9-2e98-4062-b87c-05bbb2af6730/opts Feb 23 04:54:21 localhost nova_compute[280321]: 2026-02-23 09:54:21.545 280325 DEBUG nova.compute.manager [req-1ec3be50-7d99-4440-8cde-6ef7f8349554 req-74122259-0bb9-4462-9c84-3de7db46a6ac 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Received event network-vif-plugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:54:21 localhost nova_compute[280321]: 2026-02-23 09:54:21.546 280325 DEBUG oslo_concurrency.lockutils [req-1ec3be50-7d99-4440-8cde-6ef7f8349554 req-74122259-0bb9-4462-9c84-3de7db46a6ac 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquiring lock "78070789-b766-4674-b4e1-8040cbf7346b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:21 localhost nova_compute[280321]: 2026-02-23 09:54:21.547 280325 DEBUG oslo_concurrency.lockutils [req-1ec3be50-7d99-4440-8cde-6ef7f8349554 
req-74122259-0bb9-4462-9c84-3de7db46a6ac 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:21 localhost nova_compute[280321]: 2026-02-23 09:54:21.547 280325 DEBUG oslo_concurrency.lockutils [req-1ec3be50-7d99-4440-8cde-6ef7f8349554 req-74122259-0bb9-4462-9c84-3de7db46a6ac 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:21 localhost nova_compute[280321]: 2026-02-23 09:54:21.547 280325 DEBUG nova.compute.manager [req-1ec3be50-7d99-4440-8cde-6ef7f8349554 req-74122259-0bb9-4462-9c84-3de7db46a6ac 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] No waiting events found dispatching network-vif-plugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 23 04:54:21 localhost nova_compute[280321]: 2026-02-23 09:54:21.547 280325 WARNING nova.compute.manager [req-1ec3be50-7d99-4440-8cde-6ef7f8349554 req-74122259-0bb9-4462-9c84-3de7db46a6ac 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Received unexpected event network-vif-plugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b for instance with vm_state active and task_state migrating.#033[00m Feb 23 04:54:21 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:21.728 263679 INFO neutron.agent.dhcp.agent [None 
req-f84a262c-5a1a-42a3-8651-3fb27674d956 - - - - - -] DHCP configuration for ports {'cf6be7d3-5f8e-4782-af6c-d99500219c9f'} is completed#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.090 280325 DEBUG nova.network.neutron [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Updating instance_info_cache with network_info: [{"id": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "address": "fa:16:3e:eb:c0:be", "network": {"id": "a5e383fe-b918-4723-9dbc-32201feec87d", "bridge": null, "label": "tempest-LiveAutoBlockMigrationV225Test-719193790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "b5e1135ba2724a69b072bbda0ea8476c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {"bound_drivers": {"0": "ovn"}}, "devname": "tap68f77b5f-9e", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"migrating_to": "np0005626466.localdomain"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.117 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Releasing lock "refresh_cache-78070789-b766-4674-b4e1-8040cbf7346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.117 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] 
[instance: 78070789-b766-4674-b4e1-8040cbf7346b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.117 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.118 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.118 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.118 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.119 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.379 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.419 280325 INFO nova.compute.manager [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Took 5.74 seconds for pre_live_migration on destination host np0005626466.localdomain.#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.420 280325 DEBUG nova.compute.manager [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.438 280325 DEBUG nova.compute.manager [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] live_migration data is 
LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpf2pbtgz8',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='78070789-b766-4674-b4e1-8040cbf7346b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(6319222c-9712-437a-a9af-97cacce27d19),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.441 280325 DEBUG nova.objects.instance [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Lazy-loading 'migration_context' on Instance uuid 78070789-b766-4674-b4e1-8040cbf7346b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.442 280325 DEBUG nova.virt.libvirt.driver [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.444 280325 DEBUG nova.virt.libvirt.driver [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Operation thread is still running 
_live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.444 280325 DEBUG nova.virt.libvirt.driver [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m Feb 23 04:54:22 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e99 e99: 6 total, 6 up, 6 in Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.459 280325 DEBUG nova.virt.libvirt.vif [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-23T09:53:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-280191745',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(6),hidden=False,host='np0005626465.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-280191745',id=8,image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',info_cache=InstanceInfoCache,instance_type_id=6,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2026-02-23T09:54:11Z,launched_on='np0005626465.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005626465.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b5e1135ba2724a69b072bbda
0ea8476c',ramdisk_id='',reservation_id='r-2ba0qiul',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-739952540',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-739952540-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2026-02-23T09:54:11Z,user_data=None,user_id='0b7edff084ac4cda88d2d8f5182da779',uuid=78070789-b766-4674-b4e1-8040cbf7346b,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "address": "fa:16:3e:eb:c0:be", "network": {"id": "a5e383fe-b918-4723-9dbc-32201feec87d", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-719193790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "b5e1135ba2724a69b072bbda0ea8476c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap68f77b5f-9e", "ovs_interfaceid": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": 
{"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.460 280325 DEBUG nova.network.os_vif_util [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Converting VIF {"id": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "address": "fa:16:3e:eb:c0:be", "network": {"id": "a5e383fe-b918-4723-9dbc-32201feec87d", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-719193790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "b5e1135ba2724a69b072bbda0ea8476c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap68f77b5f-9e", "ovs_interfaceid": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.462 280325 DEBUG nova.network.os_vif_util [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Converted object 
VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c0:be,bridge_name='br-int',has_traffic_filtering=True,id=68f77b5f-9ee1-445a-9bc9-8dae82293c2b,network=Network(a5e383fe-b918-4723-9dbc-32201feec87d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap68f77b5f-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.463 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Updating guest XML with vif config: Feb 23 04:54:22 localhost nova_compute[280321]: Feb 23 04:54:22 localhost nova_compute[280321]: Feb 23 04:54:22 localhost nova_compute[280321]: Feb 23 04:54:22 localhost nova_compute[280321]: Feb 23 04:54:22 localhost nova_compute[280321]: Feb 23 04:54:22 localhost nova_compute[280321]: Feb 23 04:54:22 localhost nova_compute[280321]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.464 280325 DEBUG nova.virt.libvirt.driver [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.893 280325 DEBUG 
oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.946 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Feb 23 04:54:22 localhost nova_compute[280321]: 2026-02-23 09:54:22.947 280325 INFO nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m Feb 23 04:54:23 localhost nova_compute[280321]: 2026-02-23 09:54:23.026 280325 INFO nova.virt.libvirt.driver [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m Feb 23 04:54:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v115: 177 pgs: 177 active+clean; 275 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 250 KiB/s wr, 174 op/s Feb 23 04:54:23 localhost ovn_controller[155966]: 2026-02-23T09:54:23Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:c0:be 10.100.0.4 Feb 23 04:54:23 localhost 
ovn_controller[155966]: 2026-02-23T09:54:23Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:c0:be 10.100.0.4 Feb 23 04:54:23 localhost nova_compute[280321]: 2026-02-23 09:54:23.529 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Feb 23 04:54:23 localhost nova_compute[280321]: 2026-02-23 09:54:23.530 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Feb 23 04:54:23 localhost nova_compute[280321]: 2026-02-23 09:54:23.603 280325 DEBUG nova.compute.manager [req-7c9b341c-b491-4af9-9f86-30070d3a563d req-4455ec8c-a3aa-4cf9-a7df-8a1f3a7a2a05 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Received event network-changed-68f77b5f-9ee1-445a-9bc9-8dae82293c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:54:23 localhost nova_compute[280321]: 2026-02-23 09:54:23.604 280325 DEBUG nova.compute.manager [req-7c9b341c-b491-4af9-9f86-30070d3a563d req-4455ec8c-a3aa-4cf9-a7df-8a1f3a7a2a05 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Refreshing instance network info cache due to event 
network-changed-68f77b5f-9ee1-445a-9bc9-8dae82293c2b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Feb 23 04:54:23 localhost nova_compute[280321]: 2026-02-23 09:54:23.604 280325 DEBUG oslo_concurrency.lockutils [req-7c9b341c-b491-4af9-9f86-30070d3a563d req-4455ec8c-a3aa-4cf9-a7df-8a1f3a7a2a05 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquiring lock "refresh_cache-78070789-b766-4674-b4e1-8040cbf7346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:54:23 localhost nova_compute[280321]: 2026-02-23 09:54:23.605 280325 DEBUG oslo_concurrency.lockutils [req-7c9b341c-b491-4af9-9f86-30070d3a563d req-4455ec8c-a3aa-4cf9-a7df-8a1f3a7a2a05 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquired lock "refresh_cache-78070789-b766-4674-b4e1-8040cbf7346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:54:23 localhost nova_compute[280321]: 2026-02-23 09:54:23.605 280325 DEBUG nova.network.neutron [req-7c9b341c-b491-4af9-9f86-30070d3a563d req-4455ec8c-a3aa-4cf9-a7df-8a1f3a7a2a05 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Refreshing network info cache for port 68f77b5f-9ee1-445a-9bc9-8dae82293c2b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Feb 23 04:54:23 localhost nova_compute[280321]: 2026-02-23 09:54:23.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:54:23 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:23.955 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port 
admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:54:20Z, description=, device_id=94d0ce77-2b53-42fd-afe5-4148bd1ff8e1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cf6be7d3-5f8e-4782-af6c-d99500219c9f, ip_allocation=immediate, mac_address=fa:16:3e:a8:58:2e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:54:14Z, description=, dns_domain=, id=6bb5e4e9-2e98-4062-b87c-05bbb2af6730, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsNegativeTestJSON-802018938-network, port_security_enabled=True, project_id=cd3f218228784e2199b649265db8d96a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=27027, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=576, status=ACTIVE, subnets=['05ef1fee-2688-490a-8079-a173866318b4'], tags=[], tenant_id=cd3f218228784e2199b649265db8d96a, updated_at=2026-02-23T09:54:15Z, vlan_transparent=None, network_id=6bb5e4e9-2e98-4062-b87c-05bbb2af6730, port_security_enabled=False, project_id=cd3f218228784e2199b649265db8d96a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=613, status=DOWN, tags=[], tenant_id=cd3f218228784e2199b649265db8d96a, updated_at=2026-02-23T09:54:20Z on network 6bb5e4e9-2e98-4062-b87c-05bbb2af6730#033[00m Feb 23 04:54:23 localhost nova_compute[280321]: 2026-02-23 09:54:23.968 280325 DEBUG nova.network.neutron [req-7c9b341c-b491-4af9-9f86-30070d3a563d req-4455ec8c-a3aa-4cf9-a7df-8a1f3a7a2a05 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Updated VIF entry in instance 
network info cache for port 68f77b5f-9ee1-445a-9bc9-8dae82293c2b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Feb 23 04:54:23 localhost nova_compute[280321]: 2026-02-23 09:54:23.969 280325 DEBUG nova.network.neutron [req-7c9b341c-b491-4af9-9f86-30070d3a563d req-4455ec8c-a3aa-4cf9-a7df-8a1f3a7a2a05 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Updating instance_info_cache with network_info: [{"id": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "address": "fa:16:3e:eb:c0:be", "network": {"id": "a5e383fe-b918-4723-9dbc-32201feec87d", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-719193790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "b5e1135ba2724a69b072bbda0ea8476c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f77b5f-9e", "ovs_interfaceid": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "np0005626466.localdomain"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 23 04:54:23 localhost nova_compute[280321]: 2026-02-23 09:54:23.990 280325 DEBUG oslo_concurrency.lockutils [req-7c9b341c-b491-4af9-9f86-30070d3a563d req-4455ec8c-a3aa-4cf9-a7df-8a1f3a7a2a05 422ccb105b4e4d80bb6030ca202e94d2 
d351b5d019cd497ab1d84160f10b653c - - default default] Releasing lock "refresh_cache-78070789-b766-4674-b4e1-8040cbf7346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:54:24 localhost nova_compute[280321]: 2026-02-23 09:54:24.032 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Feb 23 04:54:24 localhost nova_compute[280321]: 2026-02-23 09:54:24.032 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Feb 23 04:54:24 localhost dnsmasq[307882]: read /var/lib/neutron/dhcp/6bb5e4e9-2e98-4062-b87c-05bbb2af6730/addn_hosts - 1 addresses Feb 23 04:54:24 localhost dnsmasq-dhcp[307882]: read /var/lib/neutron/dhcp/6bb5e4e9-2e98-4062-b87c-05bbb2af6730/host Feb 23 04:54:24 localhost dnsmasq-dhcp[307882]: read /var/lib/neutron/dhcp/6bb5e4e9-2e98-4062-b87c-05bbb2af6730/opts Feb 23 04:54:24 localhost systemd[1]: tmp-crun.cCHITc.mount: Deactivated successfully. 
Feb 23 04:54:24 localhost podman[307941]: 2026-02-23 09:54:24.16476183 +0000 UTC m=+0.063819452 container kill 8996922b5f905a9048060caaa773e45aec7c5d974d69f4ba2b050538be15ed0e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6bb5e4e9-2e98-4062-b87c-05bbb2af6730, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:54:24 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:24.380 263679 INFO neutron.agent.dhcp.agent [None req-44628cb9-0ebe-471e-abbe-a72dae520ff4 - - - - - -] DHCP configuration for ports {'cf6be7d3-5f8e-4782-af6c-d99500219c9f'} is completed#033[00m Feb 23 04:54:24 localhost nova_compute[280321]: 2026-02-23 09:54:24.535 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Feb 23 04:54:24 localhost nova_compute[280321]: 2026-02-23 09:54:24.536 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Feb 23 04:54:24 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e100 e100: 6 total, 6 up, 6 in 
Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0. Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:54:24.840328) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31 Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840464840394, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1352, "num_deletes": 254, "total_data_size": 1754837, "memory_usage": 1782304, "flush_reason": "Manual Compaction"} Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840464848589, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 1135721, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19268, "largest_seqno": 20615, "table_properties": {"data_size": 1130174, "index_size": 2954, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13048, "raw_average_key_size": 21, "raw_value_size": 1118642, "raw_average_value_size": 1827, "num_data_blocks": 125, "num_entries": 612, "num_filter_entries": 612, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", 
"prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840391, "oldest_key_time": 1771840391, "file_creation_time": 1771840464, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}} Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 8309 microseconds, and 4371 cpu microseconds. Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:54:24.848637) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 1135721 bytes OK Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:54:24.848665) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:54:24.851103) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:54:24.851131) EVENT_LOG_v1 {"time_micros": 1771840464851124, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:54:24.851156) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max 
score 0.25 Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1748359, prev total WAL file size 1748359, number of live WAL files 2. Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:54:24.852059) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. '7061786F73003131373938' seq:0, type:0; will stop at (end) Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(1109KB)], [30(17MB)] Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840464852114, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 19642713, "oldest_snapshot_seqno": -1} Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 12169 keys, 17876984 bytes, temperature: kUnknown Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840464915074, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 17876984, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17808269, "index_size": 37190, "index_partitions": 0, 
"top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30469, "raw_key_size": 327407, "raw_average_key_size": 26, "raw_value_size": 17601525, "raw_average_value_size": 1446, "num_data_blocks": 1414, "num_entries": 12169, "num_filter_entries": 12169, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771840464, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}} Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:54:24.915635) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 17876984 bytes Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:54:24.918182) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 311.2 rd, 283.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 17.6 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(33.0) write-amplify(15.7) OK, records in: 12702, records dropped: 533 output_compression: NoCompression Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:54:24.918217) EVENT_LOG_v1 {"time_micros": 1771840464918202, "job": 16, "event": "compaction_finished", "compaction_time_micros": 63124, "compaction_time_cpu_micros": 32395, "output_level": 6, "num_output_files": 1, "total_output_size": 17876984, "num_input_records": 12702, "num_output_records": 12169, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840464918581, "job": 16, "event": "table_file_deletion", "file_number": 32} Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840464921413, 
"job": 16, "event": "table_file_deletion", "file_number": 30} Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:54:24.851978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:54:24.921606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:54:24.921613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:54:24.921617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:54:24.921620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:54:24 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:54:24.921623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:54:25 localhost nova_compute[280321]: 2026-02-23 09:54:25.039 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Feb 23 04:54:25 localhost nova_compute[280321]: 2026-02-23 09:54:25.041 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 
78070789-b766-4674-b4e1-8040cbf7346b] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Feb 23 04:54:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v117: 177 pgs: 177 active+clean; 275 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 135 KiB/s rd, 233 KiB/s wr, 23 op/s Feb 23 04:54:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:54:25 localhost nova_compute[280321]: 2026-02-23 09:54:25.545 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Current 50 elapsed 3 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Feb 23 04:54:25 localhost nova_compute[280321]: 2026-02-23 09:54:25.546 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Feb 23 04:54:25 localhost nova_compute[280321]: 2026-02-23 09:54:25.707 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:25 localhost nova_compute[280321]: 2026-02-23 09:54:25.961 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:26 localhost nova_compute[280321]: 2026-02-23 
09:54:26.049 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Current 50 elapsed 3 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Feb 23 04:54:26 localhost nova_compute[280321]: 2026-02-23 09:54:26.050 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Feb 23 04:54:26 localhost nova_compute[280321]: 2026-02-23 09:54:26.554 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Current 50 elapsed 4 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Feb 23 04:54:26 localhost nova_compute[280321]: 2026-02-23 09:54:26.555 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Feb 23 04:54:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:54:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:54:27 localhost podman[307965]: 2026-02-23 09:54:27.029471238 +0000 UTC m=+0.095816789 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, version=9.7) Feb 23 04:54:27 localhost podman[307965]: 2026-02-23 09:54:27.046944292 +0000 UTC m=+0.113289863 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 
'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc.) Feb 23 04:54:27 localhost nova_compute[280321]: 2026-02-23 09:54:27.059 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Current 50 elapsed 4 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Feb 23 04:54:27 localhost nova_compute[280321]: 2026-02-23 09:54:27.059 280325 DEBUG nova.virt.libvirt.migration [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Feb 23 04:54:27 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:54:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v118: 177 pgs: 177 active+clean; 379 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 6.4 MiB/s rd, 8.9 MiB/s wr, 202 op/s Feb 23 04:54:27 localhost systemd[1]: tmp-crun.EYvfif.mount: Deactivated successfully. 
Feb 23 04:54:27 localhost podman[307964]: 2026-02-23 09:54:27.132638542 +0000 UTC m=+0.204256775 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:54:27 localhost podman[307964]: 2026-02-23 09:54:27.145387922 +0000 UTC m=+0.217006165 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': 
['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 04:54:27 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:54:27 localhost nova_compute[280321]: 2026-02-23 09:54:27.259 280325 DEBUG nova.virt.driver [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 23 04:54:27 localhost nova_compute[280321]: 2026-02-23 09:54:27.259 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] VM Paused (Lifecycle Event)#033[00m Feb 23 04:54:27 localhost nova_compute[280321]: 2026-02-23 09:54:27.283 280325 DEBUG nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:54:27 localhost nova_compute[280321]: 2026-02-23 09:54:27.288 280325 DEBUG nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 23 04:54:27 localhost nova_compute[280321]: 2026-02-23 09:54:27.312 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] During sync_power_state the instance has a pending task (migrating). 
Skip.#033[00m Feb 23 04:54:27 localhost nova_compute[280321]: 2026-02-23 09:54:27.381 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:28 localhost kernel: device tap68f77b5f-9e left promiscuous mode Feb 23 04:54:28 localhost NetworkManager[5987]: [1771840468.4444] device (tap68f77b5f-9e): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Feb 23 04:54:28 localhost nova_compute[280321]: 2026-02-23 09:54:28.455 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:28 localhost ovn_controller[155966]: 2026-02-23T09:54:28Z|00057|binding|INFO|Releasing lport 68f77b5f-9ee1-445a-9bc9-8dae82293c2b from this chassis (sb_readonly=0) Feb 23 04:54:28 localhost ovn_controller[155966]: 2026-02-23T09:54:28Z|00058|binding|INFO|Setting lport 68f77b5f-9ee1-445a-9bc9-8dae82293c2b down in Southbound Feb 23 04:54:28 localhost ovn_controller[155966]: 2026-02-23T09:54:28Z|00059|binding|INFO|Releasing lport 1fc7da92-c93a-4191-b374-5aef0705e0ce from this chassis (sb_readonly=0) Feb 23 04:54:28 localhost ovn_controller[155966]: 2026-02-23T09:54:28Z|00060|binding|INFO|Setting lport 1fc7da92-c93a-4191-b374-5aef0705e0ce down in Southbound Feb 23 04:54:28 localhost ovn_controller[155966]: 2026-02-23T09:54:28Z|00061|binding|INFO|Removing iface tap68f77b5f-9e ovn-installed in OVS Feb 23 04:54:28 localhost nova_compute[280321]: 2026-02-23 09:54:28.459 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:28 localhost dnsmasq[307882]: read /var/lib/neutron/dhcp/6bb5e4e9-2e98-4062-b87c-05bbb2af6730/addn_hosts - 0 addresses Feb 23 04:54:28 localhost dnsmasq-dhcp[307882]: read /var/lib/neutron/dhcp/6bb5e4e9-2e98-4062-b87c-05bbb2af6730/host Feb 23 04:54:28 localhost 
dnsmasq-dhcp[307882]: read /var/lib/neutron/dhcp/6bb5e4e9-2e98-4062-b87c-05bbb2af6730/opts Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.465 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:6d:80 19.80.0.16'], port_security=['fa:16:3e:77:6d:80 19.80.0.16'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['68f77b5f-9ee1-445a-9bc9-8dae82293c2b'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1525156145', 'neutron:cidrs': '19.80.0.16/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4367d4b-271d-4a28-a878-d77074456171', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1525156145', 'neutron:project_id': 'b5e1135ba2724a69b072bbda0ea8476c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '5e2da0ff-f592-42de-9188-06e3b0bca61b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=57c5c75f-3246-4a64-87cf-649ab7e0f2d0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=1fc7da92-c93a-4191-b374-5aef0705e0ce) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:54:28 localhost ovn_controller[155966]: 2026-02-23T09:54:28Z|00062|binding|INFO|Releasing lport c580c9b8-a35b-42fb-bda8-24401f2a22e1 from this chassis (sb_readonly=0) Feb 23 04:54:28 localhost ovn_controller[155966]: 2026-02-23T09:54:28Z|00063|binding|INFO|Releasing lport fbd3fa71-c070-4434-b248-fbe0a6b27a91 from this chassis (sb_readonly=0) Feb 23 04:54:28 
localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.467 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:c0:be 10.100.0.4'], port_security=['fa:16:3e:eb:c0:be 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain,np0005626466.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': '0983c4cd-1476-49af-89e0-3187e18b9de6'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1619502580', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '78070789-b766-4674-b4e1-8040cbf7346b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5e383fe-b918-4723-9dbc-32201feec87d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1619502580', 'neutron:project_id': 'b5e1135ba2724a69b072bbda0ea8476c', 'neutron:revision_number': '8', 'neutron:security_group_ids': '5e2da0ff-f592-42de-9188-06e3b0bca61b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a4ae72f-8c09-4559-aec5-36314af9e25d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=68f77b5f-9ee1-445a-9bc9-8dae82293c2b) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.468 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 1fc7da92-c93a-4191-b374-5aef0705e0ce in datapath c4367d4b-271d-4a28-a878-d77074456171 unbound from our chassis#033[00m Feb 23 04:54:28 localhost 
ovn_metadata_agent[161837]: 2026-02-23 09:54:28.470 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c4367d4b-271d-4a28-a878-d77074456171, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.471 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[36df7815-c216-4690-8ebc-937b649f48e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.472 161842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c4367d4b-271d-4a28-a878-d77074456171 namespace which is not needed anymore#033[00m Feb 23 04:54:28 localhost podman[308023]: 2026-02-23 09:54:28.46957087 +0000 UTC m=+0.081372818 container kill 8996922b5f905a9048060caaa773e45aec7c5d974d69f4ba2b050538be15ed0e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6bb5e4e9-2e98-4062-b87c-05bbb2af6730, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:54:28 localhost nova_compute[280321]: 2026-02-23 09:54:28.492 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:28 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000008.scope: Deactivated successfully. Feb 23 04:54:28 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000008.scope: Consumed 11.635s CPU time. 
Feb 23 04:54:28 localhost systemd-machined[205673]: Machine qemu-1-instance-00000008 terminated. Feb 23 04:54:28 localhost nova_compute[280321]: 2026-02-23 09:54:28.523 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:28 localhost journal[228928]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/78070789-b766-4674-b4e1-8040cbf7346b_disk: No such file or directory Feb 23 04:54:28 localhost journal[228928]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/78070789-b766-4674-b4e1-8040cbf7346b_disk: No such file or directory Feb 23 04:54:28 localhost NetworkManager[5987]: [1771840468.5997] manager: (tap68f77b5f-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/20) Feb 23 04:54:28 localhost nova_compute[280321]: 2026-02-23 09:54:28.612 280325 DEBUG nova.virt.libvirt.guest [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m Feb 23 04:54:28 localhost nova_compute[280321]: 2026-02-23 09:54:28.613 280325 INFO nova.virt.libvirt.driver [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Migration operation has completed#033[00m Feb 23 04:54:28 localhost nova_compute[280321]: 2026-02-23 09:54:28.613 280325 INFO nova.compute.manager [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] _post_live_migration() is started..#033[00m Feb 23 04:54:28 localhost kernel: device tap35b7f771-3e left promiscuous mode Feb 23 04:54:28 localhost 
ovn_controller[155966]: 2026-02-23T09:54:28Z|00064|binding|INFO|Releasing lport 35b7f771-3e84-413e-826f-9e48732fdd8c from this chassis (sb_readonly=0) Feb 23 04:54:28 localhost nova_compute[280321]: 2026-02-23 09:54:28.630 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:28 localhost ovn_controller[155966]: 2026-02-23T09:54:28Z|00065|binding|INFO|Setting lport 35b7f771-3e84-413e-826f-9e48732fdd8c down in Southbound Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.637 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-6bb5e4e9-2e98-4062-b87c-05bbb2af6730', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6bb5e4e9-2e98-4062-b87c-05bbb2af6730', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd3f218228784e2199b649265db8d96a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77141465-eb12-4215-bd12-e05b092454c8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=35b7f771-3e84-413e-826f-9e48732fdd8c) old=Port_Binding(up=[True], chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:54:28 localhost nova_compute[280321]: 2026-02-23 09:54:28.693 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:28 localhost systemd[1]: tmp-crun.yl7oic.mount: Deactivated successfully. Feb 23 04:54:28 localhost neutron-haproxy-ovnmeta-c4367d4b-271d-4a28-a878-d77074456171[307543]: [NOTICE] (307547) : haproxy version is 2.8.14-c23fe91 Feb 23 04:54:28 localhost neutron-haproxy-ovnmeta-c4367d4b-271d-4a28-a878-d77074456171[307543]: [NOTICE] (307547) : path to executable is /usr/sbin/haproxy Feb 23 04:54:28 localhost neutron-haproxy-ovnmeta-c4367d4b-271d-4a28-a878-d77074456171[307543]: [WARNING] (307547) : Exiting Master process... Feb 23 04:54:28 localhost neutron-haproxy-ovnmeta-c4367d4b-271d-4a28-a878-d77074456171[307543]: [ALERT] (307547) : Current worker (307549) exited with code 143 (Terminated) Feb 23 04:54:28 localhost neutron-haproxy-ovnmeta-c4367d4b-271d-4a28-a878-d77074456171[307543]: [WARNING] (307547) : All workers exited. Exiting... (0) Feb 23 04:54:28 localhost systemd[1]: libpod-d656be42e546ebf66d760cc90d330771df425829542e6c8f624f1da50e581dcc.scope: Deactivated successfully. 
Feb 23 04:54:28 localhost podman[308068]: 2026-02-23 09:54:28.720833611 +0000 UTC m=+0.128063676 container died d656be42e546ebf66d760cc90d330771df425829542e6c8f624f1da50e581dcc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4367d4b-271d-4a28-a878-d77074456171, tcib_managed=true, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:54:28 localhost podman[308068]: 2026-02-23 09:54:28.74043744 +0000 UTC m=+0.147667495 container cleanup d656be42e546ebf66d760cc90d330771df425829542e6c8f624f1da50e581dcc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4367d4b-271d-4a28-a878-d77074456171, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:54:28 localhost podman[308096]: 2026-02-23 09:54:28.778861675 +0000 UTC m=+0.050884717 container cleanup d656be42e546ebf66d760cc90d330771df425829542e6c8f624f1da50e581dcc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4367d4b-271d-4a28-a878-d77074456171, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_managed=true) Feb 23 04:54:28 localhost systemd[1]: libpod-conmon-d656be42e546ebf66d760cc90d330771df425829542e6c8f624f1da50e581dcc.scope: Deactivated successfully. Feb 23 04:54:28 localhost podman[308110]: 2026-02-23 09:54:28.817048372 +0000 UTC m=+0.061969345 container remove d656be42e546ebf66d760cc90d330771df425829542e6c8f624f1da50e581dcc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4367d4b-271d-4a28-a878-d77074456171, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.821 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[704b8611-c2a7-4b38-952a-df8ad7733f6d]: (4, ('Mon Feb 23 09:54:28 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c4367d4b-271d-4a28-a878-d77074456171 (d656be42e546ebf66d760cc90d330771df425829542e6c8f624f1da50e581dcc)\nd656be42e546ebf66d760cc90d330771df425829542e6c8f624f1da50e581dcc\nMon Feb 23 09:54:28 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c4367d4b-271d-4a28-a878-d77074456171 (d656be42e546ebf66d760cc90d330771df425829542e6c8f624f1da50e581dcc)\nd656be42e546ebf66d760cc90d330771df425829542e6c8f624f1da50e581dcc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.823 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[521885ec-6ba2-44d4-8539-8c0955e0946f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.824 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running 
txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4367d4b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:54:28 localhost nova_compute[280321]: 2026-02-23 09:54:28.826 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:28 localhost kernel: device tapc4367d4b-20 left promiscuous mode Feb 23 04:54:28 localhost nova_compute[280321]: 2026-02-23 09:54:28.838 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.841 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[63556d46-c372-4a68-8449-9f772d803188]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.857 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[0e54bfe1-c1a2-4311-b03c-3a7c8ccd402d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.859 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[39e2d1c1-2b03-40c7-9c13-504c23460cb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.872 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[dd127af1-a48f-4071-8f58-ae49bad8a2aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 
65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 
'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1188011, 'reachable_time': 26829, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308131, 'error': None, 'target': 
'ovnmeta-c4367d4b-271d-4a28-a878-d77074456171', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.879 161946 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c4367d4b-271d-4a28-a878-d77074456171 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.880 161946 DEBUG oslo.privsep.daemon [-] privsep: reply[e72284cb-88d4-452c-aa41-21118b1a43e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.881 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 68f77b5f-9ee1-445a-9bc9-8dae82293c2b in datapath a5e383fe-b918-4723-9dbc-32201feec87d unbound from our chassis#033[00m Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.884 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Port a01f5328-bafd-4c8a-823b-e9e79e3b2905 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.885 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a5e383fe-b918-4723-9dbc-32201feec87d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:28.886 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d99ebb-0359-43c9-ad9b-b63b8797906b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:28 localhost ovn_metadata_agent[161837]: 
2026-02-23 09:54:28.888 161842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d namespace which is not needed anymore#033[00m Feb 23 04:54:28 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e101 e101: 6 total, 6 up, 6 in Feb 23 04:54:28 localhost nova_compute[280321]: 2026-02-23 09:54:28.906 280325 DEBUG nova.virt.libvirt.driver [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m Feb 23 04:54:28 localhost nova_compute[280321]: 2026-02-23 09:54:28.907 280325 DEBUG nova.virt.libvirt.driver [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m Feb 23 04:54:28 localhost nova_compute[280321]: 2026-02-23 09:54:28.907 280325 DEBUG nova.virt.libvirt.driver [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m Feb 23 04:54:29 localhost neutron-haproxy-ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d[307617]: [NOTICE] (307621) : haproxy version is 2.8.14-c23fe91 Feb 23 04:54:29 localhost neutron-haproxy-ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d[307617]: [NOTICE] (307621) : path to executable is /usr/sbin/haproxy Feb 23 04:54:29 localhost neutron-haproxy-ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d[307617]: [WARNING] (307621) : Exiting Master 
process... Feb 23 04:54:29 localhost systemd[1]: libpod-253d45f55ebb823718e49920900746c7c36586af56f508cabbce84ad7e36c1fb.scope: Deactivated successfully. Feb 23 04:54:29 localhost neutron-haproxy-ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d[307617]: [ALERT] (307621) : Current worker (307623) exited with code 143 (Terminated) Feb 23 04:54:29 localhost neutron-haproxy-ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d[307617]: [WARNING] (307621) : All workers exited. Exiting... (0) Feb 23 04:54:29 localhost podman[308149]: 2026-02-23 09:54:29.07673574 +0000 UTC m=+0.073381434 container died 253d45f55ebb823718e49920900746c7c36586af56f508cabbce84ad7e36c1fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 04:54:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v120: 177 pgs: 177 active+clean; 379 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.6 MiB/s rd, 11 MiB/s wr, 217 op/s Feb 23 04:54:29 localhost podman[308149]: 2026-02-23 09:54:29.112267777 +0000 UTC m=+0.108913431 container cleanup 253d45f55ebb823718e49920900746c7c36586af56f508cabbce84ad7e36c1fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:54:29 localhost podman[308161]: 2026-02-23 09:54:29.148397771 +0000 UTC m=+0.063802431 container cleanup 253d45f55ebb823718e49920900746c7c36586af56f508cabbce84ad7e36c1fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0) Feb 23 04:54:29 localhost systemd[1]: libpod-conmon-253d45f55ebb823718e49920900746c7c36586af56f508cabbce84ad7e36c1fb.scope: Deactivated successfully. Feb 23 04:54:29 localhost podman[308176]: 2026-02-23 09:54:29.184791134 +0000 UTC m=+0.059014946 container remove 253d45f55ebb823718e49920900746c7c36586af56f508cabbce84ad7e36c1fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216) Feb 23 04:54:29 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:29.189 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[2950e46d-c24b-4c5d-ba1c-de7d61b15a70]: (4, ('Mon Feb 23 09:54:28 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d (253d45f55ebb823718e49920900746c7c36586af56f508cabbce84ad7e36c1fb)\n253d45f55ebb823718e49920900746c7c36586af56f508cabbce84ad7e36c1fb\nMon Feb 23 09:54:29 AM UTC 2026 
Deleting container neutron-haproxy-ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d (253d45f55ebb823718e49920900746c7c36586af56f508cabbce84ad7e36c1fb)\n253d45f55ebb823718e49920900746c7c36586af56f508cabbce84ad7e36c1fb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:29 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:29.190 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[d54106b1-ee86-4ed9-93ee-2104e4e4ac70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:29 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:29.192 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5e383fe-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.194 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:29 localhost kernel: device tapa5e383fe-b0 left promiscuous mode Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.206 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:29 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:29.210 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[faa7d6fa-a6eb-4edb-9bec-bc10a6655822]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:29 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:29.227 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[9170f421-db24-45fd-bd98-541c957acf12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:29 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:29.229 306186 
DEBUG oslo.privsep.daemon [-] privsep: reply[c78b18b6-3eb3-48fd-9bcc-ea19087449c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:29 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:29.242 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[42504e76-3289-4cdc-a2f2-77a97b7c924a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 
'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1188095, 'reachable_time': 41863, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 
'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308200, 'error': None, 'target': 'ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:29 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:29.244 161946 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a5e383fe-b918-4723-9dbc-32201feec87d deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Feb 23 04:54:29 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:29.244 161946 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5e3ee0-a2c1-4e8d-be7a-820928fad85c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:29 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:29.245 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 35b7f771-3e84-413e-826f-9e48732fdd8c in datapath 6bb5e4e9-2e98-4062-b87c-05bbb2af6730 unbound from our chassis#033[00m Feb 23 04:54:29 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:29.249 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6bb5e4e9-2e98-4062-b87c-05bbb2af6730, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:54:29 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:29.249 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[19d54fbe-ec68-48a9-9445-dd354b722a9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.256 280325 DEBUG nova.compute.manager [req-bb83a3e4-c1d9-41e4-9c01-29748a78563a req-a903d678-702d-49b7-9452-28fc1f74b714 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Received event network-vif-unplugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.256 280325 DEBUG oslo_concurrency.lockutils [req-bb83a3e4-c1d9-41e4-9c01-29748a78563a req-a903d678-702d-49b7-9452-28fc1f74b714 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] 
Acquiring lock "78070789-b766-4674-b4e1-8040cbf7346b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.257 280325 DEBUG oslo_concurrency.lockutils [req-bb83a3e4-c1d9-41e4-9c01-29748a78563a req-a903d678-702d-49b7-9452-28fc1f74b714 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.257 280325 DEBUG oslo_concurrency.lockutils [req-bb83a3e4-c1d9-41e4-9c01-29748a78563a req-a903d678-702d-49b7-9452-28fc1f74b714 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.257 280325 DEBUG nova.compute.manager [req-bb83a3e4-c1d9-41e4-9c01-29748a78563a req-a903d678-702d-49b7-9452-28fc1f74b714 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] No waiting events found dispatching network-vif-unplugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.258 280325 DEBUG nova.compute.manager [req-bb83a3e4-c1d9-41e4-9c01-29748a78563a req-a903d678-702d-49b7-9452-28fc1f74b714 422ccb105b4e4d80bb6030ca202e94d2 
d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Received event network-vif-unplugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.271 280325 DEBUG nova.network.neutron [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Activated binding for port 68f77b5f-9ee1-445a-9bc9-8dae82293c2b and host np0005626466.localdomain migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.272 280325 DEBUG nova.compute.manager [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "address": "fa:16:3e:eb:c0:be", "network": {"id": "a5e383fe-b918-4723-9dbc-32201feec87d", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-719193790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "b5e1135ba2724a69b072bbda0ea8476c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f77b5f-9e", 
"ovs_interfaceid": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.273 280325 DEBUG nova.virt.libvirt.vif [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-23T09:53:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-280191745',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(6),hidden=False,host='np0005626465.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-280191745',id=8,image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',info_cache=InstanceInfoCache,instance_type_id=6,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2026-02-23T09:54:11Z,launched_on='np0005626465.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005626465.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b5e1135ba2724a69b072bbda0ea8476c',ramdisk_id='',reservation_id='r-2ba0qiul',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',image_container_format='bare',
image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-739952540',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-739952540-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2026-02-23T09:54:13Z,user_data=None,user_id='0b7edff084ac4cda88d2d8f5182da779',uuid=78070789-b766-4674-b4e1-8040cbf7346b,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "address": "fa:16:3e:eb:c0:be", "network": {"id": "a5e383fe-b918-4723-9dbc-32201feec87d", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-719193790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "b5e1135ba2724a69b072bbda0ea8476c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f77b5f-9e", "ovs_interfaceid": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.273 280325 DEBUG nova.network.os_vif_util [None 
req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Converting VIF {"id": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "address": "fa:16:3e:eb:c0:be", "network": {"id": "a5e383fe-b918-4723-9dbc-32201feec87d", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-719193790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "b5e1135ba2724a69b072bbda0ea8476c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f77b5f-9e", "ovs_interfaceid": "68f77b5f-9ee1-445a-9bc9-8dae82293c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.274 280325 DEBUG nova.network.os_vif_util [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c0:be,bridge_name='br-int',has_traffic_filtering=True,id=68f77b5f-9ee1-445a-9bc9-8dae82293c2b,network=Network(a5e383fe-b918-4723-9dbc-32201feec87d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap68f77b5f-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 23 04:54:29 localhost 
nova_compute[280321]: 2026-02-23 09:54:29.275 280325 DEBUG os_vif [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c0:be,bridge_name='br-int',has_traffic_filtering=True,id=68f77b5f-9ee1-445a-9bc9-8dae82293c2b,network=Network(a5e383fe-b918-4723-9dbc-32201feec87d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap68f77b5f-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.278 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.279 280325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68f77b5f-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.280 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.281 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.285 280325 INFO os_vif [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Successfully unplugged vif 
VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c0:be,bridge_name='br-int',has_traffic_filtering=True,id=68f77b5f-9ee1-445a-9bc9-8dae82293c2b,network=Network(a5e383fe-b918-4723-9dbc-32201feec87d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap68f77b5f-9e')#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.285 280325 DEBUG oslo_concurrency.lockutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.286 280325 DEBUG oslo_concurrency.lockutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.286 280325 DEBUG oslo_concurrency.lockutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.287 280325 DEBUG nova.compute.manager [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Calling 
driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.287 280325 INFO nova.virt.libvirt.driver [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Deleting instance files /var/lib/nova/instances/78070789-b766-4674-b4e1-8040cbf7346b_del#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.288 280325 INFO nova.virt.libvirt.driver [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Deletion of /var/lib/nova/instances/78070789-b766-4674-b4e1-8040cbf7346b_del complete#033[00m Feb 23 04:54:29 localhost systemd[1]: var-lib-containers-storage-overlay-e1cb21f60d2dd5a91d409a3128a46335a7f6e1e7e74bf01eaa80d31d70faab4b-merged.mount: Deactivated successfully. Feb 23 04:54:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-253d45f55ebb823718e49920900746c7c36586af56f508cabbce84ad7e36c1fb-userdata-shm.mount: Deactivated successfully. Feb 23 04:54:29 localhost systemd[1]: run-netns-ovnmeta\x2da5e383fe\x2db918\x2d4723\x2d9dbc\x2d32201feec87d.mount: Deactivated successfully. Feb 23 04:54:29 localhost systemd[1]: var-lib-containers-storage-overlay-eb088465ad52a4869c01c87802fd6f1360483a6eac213b24099c8c2080c18650-merged.mount: Deactivated successfully. Feb 23 04:54:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d656be42e546ebf66d760cc90d330771df425829542e6c8f624f1da50e581dcc-userdata-shm.mount: Deactivated successfully. Feb 23 04:54:29 localhost systemd[1]: run-netns-ovnmeta\x2dc4367d4b\x2d271d\x2d4a28\x2da878\x2dd77074456171.mount: Deactivated successfully. 
Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.788 280325 DEBUG nova.compute.manager [req-cff9ebb2-ae1d-4fdc-9839-a3d80d5cceb1 req-2bfdf348-0810-4012-8ba7-f04093170d72 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Received event network-vif-plugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.788 280325 DEBUG oslo_concurrency.lockutils [req-cff9ebb2-ae1d-4fdc-9839-a3d80d5cceb1 req-2bfdf348-0810-4012-8ba7-f04093170d72 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquiring lock "78070789-b766-4674-b4e1-8040cbf7346b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.789 280325 DEBUG oslo_concurrency.lockutils [req-cff9ebb2-ae1d-4fdc-9839-a3d80d5cceb1 req-2bfdf348-0810-4012-8ba7-f04093170d72 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.789 280325 DEBUG oslo_concurrency.lockutils [req-cff9ebb2-ae1d-4fdc-9839-a3d80d5cceb1 req-2bfdf348-0810-4012-8ba7-f04093170d72 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.789 280325 DEBUG nova.compute.manager [req-cff9ebb2-ae1d-4fdc-9839-a3d80d5cceb1 req-2bfdf348-0810-4012-8ba7-f04093170d72 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] No waiting events found dispatching network-vif-plugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.790 280325 WARNING nova.compute.manager [req-cff9ebb2-ae1d-4fdc-9839-a3d80d5cceb1 req-2bfdf348-0810-4012-8ba7-f04093170d72 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Received unexpected event network-vif-plugged-68f77b5f-9ee1-445a-9bc9-8dae82293c2b for instance with vm_state active and task_state migrating.#033[00m Feb 23 04:54:29 localhost nova_compute[280321]: 2026-02-23 09:54:29.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:54:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:54:30 localhost sshd[308201]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:54:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:54:30 localhost podman[308203]: 2026-02-23 09:54:30.984586971 +0000 UTC m=+0.064561955 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:54:31 localhost podman[308203]: 2026-02-23 09:54:31.019025363 +0000 UTC m=+0.099000337 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, config_id=ovn_controller) Feb 23 04:54:31 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:54:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v121: 177 pgs: 177 active+clean; 354 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 6.3 MiB/s rd, 8.8 MiB/s wr, 222 op/s Feb 23 04:54:31 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e102 e102: 6 total, 6 up, 6 in Feb 23 04:54:31 localhost openstack_network_exporter[243519]: ERROR 09:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:54:31 localhost openstack_network_exporter[243519]: Feb 23 04:54:31 localhost openstack_network_exporter[243519]: ERROR 09:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:54:31 localhost openstack_network_exporter[243519]: Feb 23 04:54:32 localhost nova_compute[280321]: 2026-02-23 09:54:32.423 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:32 localhost nova_compute[280321]: 2026-02-23 09:54:32.480 280325 DEBUG oslo_concurrency.lockutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Acquiring lock "78070789-b766-4674-b4e1-8040cbf7346b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:32 localhost nova_compute[280321]: 2026-02-23 09:54:32.481 280325 DEBUG oslo_concurrency.lockutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:32 localhost nova_compute[280321]: 2026-02-23 09:54:32.482 280325 
DEBUG oslo_concurrency.lockutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Lock "78070789-b766-4674-b4e1-8040cbf7346b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:32 localhost nova_compute[280321]: 2026-02-23 09:54:32.504 280325 DEBUG oslo_concurrency.lockutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:32 localhost nova_compute[280321]: 2026-02-23 09:54:32.504 280325 DEBUG oslo_concurrency.lockutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:32 localhost nova_compute[280321]: 2026-02-23 09:54:32.505 280325 DEBUG oslo_concurrency.lockutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:32 localhost nova_compute[280321]: 2026-02-23 09:54:32.505 280325 DEBUG nova.compute.resource_tracker [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default 
default] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:54:32 localhost nova_compute[280321]: 2026-02-23 09:54:32.506 280325 DEBUG oslo_concurrency.processutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:32 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:54:32 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4269194252' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:54:32 localhost nova_compute[280321]: 2026-02-23 09:54:32.939 280325 DEBUG oslo_concurrency.processutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v123: 177 pgs: 177 active+clean; 306 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 6.4 MiB/s rd, 8.8 MiB/s wr, 261 op/s Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.113 280325 WARNING nova.virt.libvirt.driver [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.114 280325 DEBUG nova.compute.resource_tracker [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=11763MB free_disk=41.63199996948242GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": 
"type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.115 280325 DEBUG oslo_concurrency.lockutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.115 280325 DEBUG oslo_concurrency.lockutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.152 280325 DEBUG nova.compute.resource_tracker [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Migration for instance 78070789-b766-4674-b4e1-8040cbf7346b refers to another host's instance! 
_pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.171 280325 DEBUG nova.compute.resource_tracker [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.196 280325 DEBUG oslo_concurrency.lockutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Acquiring lock "5a91ac0a-3acf-4d43-b746-cab698d45279" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.196 280325 DEBUG oslo_concurrency.lockutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Lock "5a91ac0a-3acf-4d43-b746-cab698d45279" acquired by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.197 280325 INFO nova.compute.manager [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Unshelving#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.199 280325 DEBUG nova.compute.resource_tracker [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 
00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Migration 6319222c-9712-437a-a9af-97cacce27d19 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.215 280325 DEBUG nova.compute.resource_tracker [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Instance 5a91ac0a-3acf-4d43-b746-cab698d45279 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.216 280325 DEBUG nova.compute.resource_tracker [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.216 280325 DEBUG nova.compute.resource_tracker [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.274 
280325 DEBUG oslo_concurrency.processutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.310 280325 DEBUG oslo_concurrency.lockutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:33 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:54:33 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/764192646' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.704 280325 DEBUG oslo_concurrency.processutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.710 280325 DEBUG nova.compute.provider_tree [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.728 280325 DEBUG nova.scheduler.client.report [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.762 280325 DEBUG nova.compute.resource_tracker [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default 
default] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.762 280325 DEBUG oslo_concurrency.lockutils [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.764 280325 DEBUG oslo_concurrency.lockutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.766 280325 INFO nova.compute.manager [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Migrating instance to np0005626466.localdomain finished successfully.#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.768 280325 DEBUG nova.objects.instance [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Lazy-loading 'pci_requests' on Instance uuid 5a91ac0a-3acf-4d43-b746-cab698d45279 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.784 280325 DEBUG nova.objects.instance [None 
req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5a91ac0a-3acf-4d43-b746-cab698d45279 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.802 280325 DEBUG nova.virt.hardware [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.802 280325 INFO nova.compute.claims [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Claim successful on node np0005626465.localdomain#033[00m Feb 23 04:54:33 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:33.822 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.873 280325 INFO nova.scheduler.client.report [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] Deleted allocation for migration 6319222c-9712-437a-a9af-97cacce27d19#033[00m Feb 23 04:54:33 localhost nova_compute[280321]: 2026-02-23 09:54:33.874 280325 DEBUG nova.virt.libvirt.driver [None req-89fbed61-cd5c-4dd6-ab31-af1a88d8295a 00fd594b8b8249c9bc1eb9370bbe482c 02917a1d904f4889b9e244e1ebfc57ca - - default default] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m Feb 23 04:54:33 
localhost nova_compute[280321]: 2026-02-23 09:54:33.910 280325 DEBUG oslo_concurrency.processutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:34 localhost nova_compute[280321]: 2026-02-23 09:54:34.281 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:34 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:54:34 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2116820524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:54:34 localhost nova_compute[280321]: 2026-02-23 09:54:34.376 280325 DEBUG oslo_concurrency.processutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:34 localhost nova_compute[280321]: 2026-02-23 09:54:34.383 280325 DEBUG nova.compute.provider_tree [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:54:34 localhost nova_compute[280321]: 2026-02-23 09:54:34.398 280325 DEBUG nova.scheduler.client.report [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 
f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:54:34 localhost nova_compute[280321]: 2026-02-23 09:54:34.428 280325 DEBUG oslo_concurrency.lockutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:34 localhost nova_compute[280321]: 2026-02-23 09:54:34.498 280325 DEBUG oslo_concurrency.lockutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Acquiring lock "refresh_cache-5a91ac0a-3acf-4d43-b746-cab698d45279" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:54:34 localhost nova_compute[280321]: 2026-02-23 09:54:34.499 280325 DEBUG oslo_concurrency.lockutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Acquired lock "refresh_cache-5a91ac0a-3acf-4d43-b746-cab698d45279" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:54:34 localhost nova_compute[280321]: 2026-02-23 09:54:34.500 280325 DEBUG nova.network.neutron [None 
req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 23 04:54:34 localhost nova_compute[280321]: 2026-02-23 09:54:34.590 280325 DEBUG nova.network.neutron [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Feb 23 04:54:35 localhost nova_compute[280321]: 2026-02-23 09:54:35.054 280325 DEBUG nova.network.neutron [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 23 04:54:35 localhost nova_compute[280321]: 2026-02-23 09:54:35.070 280325 DEBUG oslo_concurrency.lockutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Releasing lock "refresh_cache-5a91ac0a-3acf-4d43-b746-cab698d45279" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:54:35 localhost nova_compute[280321]: 2026-02-23 09:54:35.072 280325 DEBUG nova.virt.libvirt.driver [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Feb 23 04:54:35 
localhost nova_compute[280321]: 2026-02-23 09:54:35.073 280325 INFO nova.virt.libvirt.driver [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Creating image(s)#033[00m Feb 23 04:54:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v124: 177 pgs: 177 active+clean; 306 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 101 KiB/s rd, 105 KiB/s wr, 82 op/s Feb 23 04:54:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:54:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:54:35 localhost nova_compute[280321]: 2026-02-23 09:54:35.113 280325 DEBUG nova.storage.rbd_utils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] rbd image 5a91ac0a-3acf-4d43-b746-cab698d45279_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:54:35 localhost nova_compute[280321]: 2026-02-23 09:54:35.118 280325 DEBUG nova.objects.instance [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5a91ac0a-3acf-4d43-b746-cab698d45279 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 04:54:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:54:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:54:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 04:54:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:54:35 localhost nova_compute[280321]: 2026-02-23 09:54:35.177 280325 DEBUG nova.storage.rbd_utils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] rbd image 5a91ac0a-3acf-4d43-b746-cab698d45279_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:54:35 localhost nova_compute[280321]: 2026-02-23 09:54:35.229 280325 DEBUG nova.storage.rbd_utils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] rbd image 5a91ac0a-3acf-4d43-b746-cab698d45279_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:54:35 localhost nova_compute[280321]: 2026-02-23 09:54:35.234 280325 DEBUG oslo_concurrency.lockutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Acquiring lock "e6a82f7a19627c0fa8077ef81dd8d48276918b03" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:35 localhost nova_compute[280321]: 2026-02-23 09:54:35.236 280325 DEBUG oslo_concurrency.lockutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Lock "e6a82f7a19627c0fa8077ef81dd8d48276918b03" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:35 localhost nova_compute[280321]: 2026-02-23 09:54:35.288 280325 DEBUG nova.virt.libvirt.imagebackend [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 
14f55d2687a1495bba60f7a3269e0e82 - - default default] Image locations are: [{'url': 'rbd://f1fea371-cb69-578d-a3d0-b5c472a84b46/images/261d27f0-ceab-4c46-ab34-fd84b7199453/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f1fea371-cb69-578d-a3d0-b5c472a84b46/images/261d27f0-ceab-4c46-ab34-fd84b7199453/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Feb 23 04:54:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:54:35 localhost nova_compute[280321]: 2026-02-23 09:54:35.384 280325 DEBUG nova.virt.libvirt.imagebackend [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Selected location: {'url': 'rbd://f1fea371-cb69-578d-a3d0-b5c472a84b46/images/261d27f0-ceab-4c46-ab34-fd84b7199453/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m Feb 23 04:54:35 localhost nova_compute[280321]: 2026-02-23 09:54:35.385 280325 DEBUG nova.storage.rbd_utils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] cloning images/261d27f0-ceab-4c46-ab34-fd84b7199453@snap to None/5a91ac0a-3acf-4d43-b746-cab698d45279_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m Feb 23 04:54:35 localhost nova_compute[280321]: 2026-02-23 09:54:35.573 280325 DEBUG oslo_concurrency.lockutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Lock "e6a82f7a19627c0fa8077ef81dd8d48276918b03" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.337s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:35 localhost dnsmasq[307882]: exiting on receipt of SIGTERM Feb 23 04:54:35 localhost podman[308433]: 2026-02-23 09:54:35.583164653 +0000 UTC m=+0.055327662 container kill 8996922b5f905a9048060caaa773e45aec7c5d974d69f4ba2b050538be15ed0e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6bb5e4e9-2e98-4062-b87c-05bbb2af6730, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:54:35 localhost systemd[1]: libpod-8996922b5f905a9048060caaa773e45aec7c5d974d69f4ba2b050538be15ed0e.scope: Deactivated successfully. Feb 23 04:54:35 localhost podman[308447]: 2026-02-23 09:54:35.658578877 +0000 UTC m=+0.065901815 container died 8996922b5f905a9048060caaa773e45aec7c5d974d69f4ba2b050538be15ed0e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6bb5e4e9-2e98-4062-b87c-05bbb2af6730, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 04:54:35 localhost podman[308447]: 2026-02-23 09:54:35.686681417 +0000 UTC m=+0.094004285 container cleanup 8996922b5f905a9048060caaa773e45aec7c5d974d69f4ba2b050538be15ed0e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6bb5e4e9-2e98-4062-b87c-05bbb2af6730, org.label-schema.build-date=20260216, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Feb 23 04:54:35 localhost systemd[1]: libpod-conmon-8996922b5f905a9048060caaa773e45aec7c5d974d69f4ba2b050538be15ed0e.scope: Deactivated successfully. Feb 23 04:54:35 localhost podman[308459]: 2026-02-23 09:54:35.728886278 +0000 UTC m=+0.125566930 container remove 8996922b5f905a9048060caaa773e45aec7c5d974d69f4ba2b050538be15ed0e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6bb5e4e9-2e98-4062-b87c-05bbb2af6730, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:54:35 localhost nova_compute[280321]: 2026-02-23 09:54:35.816 280325 DEBUG nova.objects.instance [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Lazy-loading 'migration_context' on Instance uuid 5a91ac0a-3acf-4d43-b746-cab698d45279 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 04:54:35 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:35.831 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:54:35 localhost nova_compute[280321]: 2026-02-23 09:54:35.916 280325 DEBUG nova.storage.rbd_utils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] flattening 
vms/5a91ac0a-3acf-4d43-b746-cab698d45279_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m Feb 23 04:54:36 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:36.391 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:54:36 localhost systemd[1]: var-lib-containers-storage-overlay-bd2e9f0c0c85db93f88b73d26579751cf33e73d7ddc2a7c1fc9d1a83977015e9-merged.mount: Deactivated successfully. Feb 23 04:54:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8996922b5f905a9048060caaa773e45aec7c5d974d69f4ba2b050538be15ed0e-userdata-shm.mount: Deactivated successfully. Feb 23 04:54:36 localhost systemd[1]: run-netns-qdhcp\x2d6bb5e4e9\x2d2e98\x2d4062\x2db87c\x2d05bbb2af6730.mount: Deactivated successfully. Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.704 280325 DEBUG nova.virt.libvirt.driver [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Image rbd:vms/5a91ac0a-3acf-4d43-b746-cab698d45279_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. 
_try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.706 280325 DEBUG nova.virt.libvirt.driver [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.707 280325 DEBUG nova.virt.libvirt.driver [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Ensure instance console log exists: /var/lib/nova/instances/5a91ac0a-3acf-4d43-b746-cab698d45279/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.707 280325 DEBUG oslo_concurrency.lockutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.708 280325 DEBUG oslo_concurrency.lockutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.708 280325 DEBUG oslo_concurrency.lockutils [None 
req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.710 280325 DEBUG nova.virt.libvirt.driver [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-23T09:54:08Z,direct_url=,disk_format='raw',id=261d27f0-ceab-4c46-ab34-fd84b7199453,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-919112479-shelved',owner='fbd2032d0b7545b2b091cbf2ff5c562d',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2026-02-23T09:54:29Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': 'd08f8876-d97b-493b-b16b-caf91668eecb'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.714 280325 WARNING nova.virt.libvirt.driver [None 
req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.717 280325 DEBUG nova.virt.libvirt.host [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Searching host: 'np0005626465.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.717 280325 DEBUG nova.virt.libvirt.host [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.719 280325 DEBUG nova.virt.libvirt.host [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Searching host: 'np0005626465.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.719 280325 DEBUG nova.virt.libvirt.host [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.720 280325 DEBUG nova.virt.libvirt.driver [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.720 280325 DEBUG nova.virt.hardware [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-23T09:52:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='dd9292ba-25cb-4da3-92e1-803e436b1b2c',id=6,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-23T09:54:08Z,direct_url=,disk_format='raw',id=261d27f0-ceab-4c46-ab34-fd84b7199453,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-919112479-shelved',owner='fbd2032d0b7545b2b091cbf2ff5c562d',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2026-02-23T09:54:29Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.721 280325 DEBUG nova.virt.hardware [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.721 280325 DEBUG nova.virt.hardware [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.721 280325 DEBUG nova.virt.hardware [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.722 280325 DEBUG nova.virt.hardware [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.722 280325 DEBUG nova.virt.hardware [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.722 280325 DEBUG nova.virt.hardware [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.723 280325 DEBUG nova.virt.hardware [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.723 280325 DEBUG nova.virt.hardware [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.723 280325 DEBUG nova.virt.hardware [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.724 280325 DEBUG nova.virt.hardware [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.724 280325 DEBUG nova.objects.instance [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5a91ac0a-3acf-4d43-b746-cab698d45279 obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.744 280325 DEBUG oslo_concurrency.processutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:36 localhost nova_compute[280321]: 2026-02-23 09:54:36.780 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v125: 177 pgs: 177 active+clean; 285 MiB data, 958 MiB used, 41 GiB / 42 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 223 op/s Feb 23 04:54:37 localhost sshd[308588]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:54:37 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 23 04:54:37 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1785721779' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 23 04:54:37 localhost nova_compute[280321]: 2026-02-23 09:54:37.199 280325 DEBUG oslo_concurrency.processutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:37 localhost nova_compute[280321]: 2026-02-23 09:54:37.233 280325 DEBUG nova.storage.rbd_utils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] rbd image 5a91ac0a-3acf-4d43-b746-cab698d45279_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:54:37 localhost nova_compute[280321]: 2026-02-23 09:54:37.238 280325 DEBUG oslo_concurrency.processutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:37 localhost nova_compute[280321]: 2026-02-23 09:54:37.456 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:37 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 23 04:54:37 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1512085890' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 23 04:54:37 localhost nova_compute[280321]: 2026-02-23 09:54:37.734 280325 DEBUG oslo_concurrency.processutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:37 localhost nova_compute[280321]: 2026-02-23 09:54:37.737 280325 DEBUG nova.objects.instance [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a91ac0a-3acf-4d43-b746-cab698d45279 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 04:54:37 localhost nova_compute[280321]: 2026-02-23 09:54:37.755 280325 DEBUG nova.virt.libvirt.driver [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] End _get_guest_xml xml=
[libvirt guest domain XML not recoverable: the angle-bracket markup was stripped during log capture, leaving only element text nodes spread across repeated "Feb 23 04:54:37 localhost nova_compute[280321]:" prefixes. Surviving text nodes include: uuid 5a91ac0a-3acf-4d43-b746-cab698d45279, instance-00000007, 131072, 1, tempest-UnshelveToHostMultiNodesTest-server-919112479, 2026-02-23 09:54:36, 128, 1, 0, 0, 1, tempest-UnshelveToHostMultiNodesTest-924764858-project-member, tempest-UnshelveToHostMultiNodesTest-924764858, RDO, OpenStack Compute, 27.5.2-0.20260220085704.5cfeecb.el9, Virtual Machine, hvm, /dev/urandom]
_get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Feb 23 04:54:37 localhost nova_compute[280321]: 2026-02-23 09:54:37.805 280325 DEBUG nova.virt.libvirt.driver [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] No BDM found with device name vda, not building metadata.
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 23 04:54:37 localhost nova_compute[280321]: 2026-02-23 09:54:37.805 280325 DEBUG nova.virt.libvirt.driver [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 23 04:54:37 localhost nova_compute[280321]: 2026-02-23 09:54:37.806 280325 INFO nova.virt.libvirt.driver [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Using config drive#033[00m Feb 23 04:54:37 localhost nova_compute[280321]: 2026-02-23 09:54:37.843 280325 DEBUG nova.storage.rbd_utils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] rbd image 5a91ac0a-3acf-4d43-b746-cab698d45279_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:54:37 localhost nova_compute[280321]: 2026-02-23 09:54:37.864 280325 DEBUG nova.objects.instance [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5a91ac0a-3acf-4d43-b746-cab698d45279 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 04:54:37 localhost nova_compute[280321]: 2026-02-23 09:54:37.905 280325 DEBUG nova.objects.instance [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Lazy-loading 'keypairs' on Instance uuid 5a91ac0a-3acf-4d43-b746-cab698d45279 obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.055 280325 INFO nova.virt.libvirt.driver [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Creating config drive at /var/lib/nova/instances/5a91ac0a-3acf-4d43-b746-cab698d45279/disk.config#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.058 280325 DEBUG oslo_concurrency.processutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a91ac0a-3acf-4d43-b746-cab698d45279/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgazbqhvc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.184 280325 DEBUG oslo_concurrency.processutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a91ac0a-3acf-4d43-b746-cab698d45279/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgazbqhvc" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.227 280325 DEBUG nova.storage.rbd_utils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] rbd image 5a91ac0a-3acf-4d43-b746-cab698d45279_disk.config does not exist 
__init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.232 280325 DEBUG oslo_concurrency.processutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a91ac0a-3acf-4d43-b746-cab698d45279/disk.config 5a91ac0a-3acf-4d43-b746-cab698d45279_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.425 280325 DEBUG oslo_concurrency.processutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a91ac0a-3acf-4d43-b746-cab698d45279/disk.config 5a91ac0a-3acf-4d43-b746-cab698d45279_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.426 280325 INFO nova.virt.libvirt.driver [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Deleting local config drive /var/lib/nova/instances/5a91ac0a-3acf-4d43-b746-cab698d45279/disk.config because it was imported into RBD.#033[00m Feb 23 04:54:38 localhost systemd-machined[205673]: New machine qemu-2-instance-00000007. Feb 23 04:54:38 localhost systemd[1]: Started Virtual Machine qemu-2-instance-00000007. 
Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.763 280325 DEBUG nova.virt.driver [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.764 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] VM Resumed (Lifecycle Event)#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.767 280325 DEBUG nova.compute.manager [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.767 280325 DEBUG nova.virt.libvirt.driver [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.771 280325 INFO nova.virt.libvirt.driver [-] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Instance spawned successfully.#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.795 280325 DEBUG nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.799 280325 DEBUG nova.compute.manager [None 
req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.823 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.824 280325 DEBUG nova.virt.driver [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.824 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] VM Started (Lifecycle Event)#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.851 280325 DEBUG nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.856 280325 DEBUG nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 23 
04:54:38 localhost nova_compute[280321]: 2026-02-23 09:54:38.875 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Feb 23 04:54:39 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e103 e103: 6 total, 6 up, 6 in Feb 23 04:54:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v127: 177 pgs: 177 active+clean; 285 MiB data, 958 MiB used, 41 GiB / 42 GiB avail; 3.9 MiB/s rd, 4.0 MiB/s wr, 185 op/s Feb 23 04:54:39 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:39.180 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:53:47Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=68f77b5f-9ee1-445a-9bc9-8dae82293c2b, ip_allocation=immediate, mac_address=fa:16:3e:eb:c0:be, name=tempest-parent-1619502580, network_id=a5e383fe-b918-4723-9dbc-32201feec87d, port_security_enabled=True, project_id=b5e1135ba2724a69b072bbda0ea8476c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=13, security_groups=['5e2da0ff-f592-42de-9188-06e3b0bca61b'], standard_attr_id=410, status=DOWN, tags=[], tenant_id=b5e1135ba2724a69b072bbda0ea8476c, trunk_details=sub_ports=[], trunk_id=95b61762-73a8-4898-b5d1-96beb9397be7, updated_at=2026-02-23T09:54:38Z on network a5e383fe-b918-4723-9dbc-32201feec87d#033[00m Feb 23 04:54:39 localhost nova_compute[280321]: 2026-02-23 09:54:39.283 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:39 localhost dnsmasq[306231]: read 
/var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/addn_hosts - 2 addresses Feb 23 04:54:39 localhost dnsmasq-dhcp[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/host Feb 23 04:54:39 localhost podman[308766]: 2026-02-23 09:54:39.440180456 +0000 UTC m=+0.071313631 container kill 3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5e383fe-b918-4723-9dbc-32201feec87d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2) Feb 23 04:54:39 localhost dnsmasq-dhcp[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/opts Feb 23 04:54:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:54:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 04:54:39 localhost podman[308779]: 2026-02-23 09:54:39.546097554 +0000 UTC m=+0.084749242 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
container_name=ceilometer_agent_compute) Feb 23 04:54:39 localhost podman[308779]: 2026-02-23 09:54:39.560236216 +0000 UTC m=+0.098887904 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 23 04:54:39 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:54:39 localhost nova_compute[280321]: 2026-02-23 09:54:39.582 280325 DEBUG nova.compute.manager [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:54:39 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:39.590 263679 INFO neutron.agent.dhcp.agent [None req-c3982e35-20c3-463d-b9fe-e700146b0240 - - - - - -] DHCP configuration for ports {'68f77b5f-9ee1-445a-9bc9-8dae82293c2b'} is completed#033[00m Feb 23 04:54:39 localhost nova_compute[280321]: 2026-02-23 09:54:39.633 280325 DEBUG oslo_concurrency.lockutils [None req-a917c533-1232-42a7-83e1-f5253d2cb0a4 f28fc158afa746b9a7686bea6b03f5d0 14f55d2687a1495bba60f7a3269e0e82 - - default default] Lock "5a91ac0a-3acf-4d43-b746-cab698d45279" "released" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: held 6.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:39 localhost podman[308778]: 2026-02-23 09:54:39.658718366 +0000 UTC m=+0.197692813 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0) Feb 23 04:54:39 localhost podman[308778]: 2026-02-23 09:54:39.696859473 +0000 UTC m=+0.235833940 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:54:39 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. 
Feb 23 04:54:40 localhost nova_compute[280321]: 2026-02-23 09:54:40.140 280325 DEBUG oslo_concurrency.lockutils [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Acquiring lock "5a91ac0a-3acf-4d43-b746-cab698d45279" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:40 localhost nova_compute[280321]: 2026-02-23 09:54:40.140 280325 DEBUG oslo_concurrency.lockutils [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Lock "5a91ac0a-3acf-4d43-b746-cab698d45279" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:40 localhost nova_compute[280321]: 2026-02-23 09:54:40.141 280325 DEBUG oslo_concurrency.lockutils [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Acquiring lock "5a91ac0a-3acf-4d43-b746-cab698d45279-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:40 localhost nova_compute[280321]: 2026-02-23 09:54:40.141 280325 DEBUG oslo_concurrency.lockutils [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Lock "5a91ac0a-3acf-4d43-b746-cab698d45279-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:40 localhost nova_compute[280321]: 2026-02-23 09:54:40.142 280325 DEBUG 
oslo_concurrency.lockutils [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Lock "5a91ac0a-3acf-4d43-b746-cab698d45279-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:40 localhost nova_compute[280321]: 2026-02-23 09:54:40.144 280325 INFO nova.compute.manager [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Terminating instance#033[00m Feb 23 04:54:40 localhost nova_compute[280321]: 2026-02-23 09:54:40.146 280325 DEBUG oslo_concurrency.lockutils [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Acquiring lock "refresh_cache-5a91ac0a-3acf-4d43-b746-cab698d45279" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:54:40 localhost nova_compute[280321]: 2026-02-23 09:54:40.146 280325 DEBUG oslo_concurrency.lockutils [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Acquired lock "refresh_cache-5a91ac0a-3acf-4d43-b746-cab698d45279" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:54:40 localhost nova_compute[280321]: 2026-02-23 09:54:40.147 280325 DEBUG nova.network.neutron [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 23 04:54:40 localhost nova_compute[280321]: 2026-02-23 09:54:40.203 
280325 DEBUG nova.network.neutron [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Feb 23 04:54:40 localhost nova_compute[280321]: 2026-02-23 09:54:40.334 280325 DEBUG nova.network.neutron [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 23 04:54:40 localhost nova_compute[280321]: 2026-02-23 09:54:40.347 280325 DEBUG oslo_concurrency.lockutils [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Releasing lock "refresh_cache-5a91ac0a-3acf-4d43-b746-cab698d45279" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:54:40 localhost nova_compute[280321]: 2026-02-23 09:54:40.348 280325 DEBUG nova.compute.manager [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Feb 23 04:54:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:54:40 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000007.scope: Deactivated successfully. 
Feb 23 04:54:40 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000007.scope: Consumed 1.993s CPU time. Feb 23 04:54:40 localhost systemd-machined[205673]: Machine qemu-2-instance-00000007 terminated. Feb 23 04:54:40 localhost nova_compute[280321]: 2026-02-23 09:54:40.568 280325 INFO nova.virt.libvirt.driver [-] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Instance destroyed successfully.#033[00m Feb 23 04:54:40 localhost nova_compute[280321]: 2026-02-23 09:54:40.568 280325 DEBUG nova.objects.instance [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Lazy-loading 'resources' on Instance uuid 5a91ac0a-3acf-4d43-b746-cab698d45279 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 04:54:41 localhost neutron_sriov_agent[256355]: 2026-02-23 09:54:41.076 2 INFO neutron.agent.securitygroups_rpc [None req-ff2971e4-9240-4c6a-b550-c220da174a71 fb712af440b2428b8717631185f9fc4e 2fbe870428324feda18014285ef9eb40 - - default default] Security group member updated ['0982a37e-b389-40e3-834f-dcc14e42d01c']#033[00m Feb 23 04:54:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v128: 177 pgs: 177 active+clean; 230 MiB data, 892 MiB used, 41 GiB / 42 GiB avail; 6.4 MiB/s rd, 5.1 MiB/s wr, 197 op/s Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.151 280325 INFO nova.virt.libvirt.driver [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Deleting instance files /var/lib/nova/instances/5a91ac0a-3acf-4d43-b746-cab698d45279_del#033[00m Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.152 280325 INFO nova.virt.libvirt.driver [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] [instance: 
5a91ac0a-3acf-4d43-b746-cab698d45279] Deletion of /var/lib/nova/instances/5a91ac0a-3acf-4d43-b746-cab698d45279_del complete#033[00m Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.231 280325 DEBUG nova.virt.libvirt.host [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.231 280325 INFO nova.virt.libvirt.host [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] UEFI support detected#033[00m Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.234 280325 INFO nova.compute.manager [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.235 280325 DEBUG oslo.service.loopingcall [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. 
func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.235 280325 DEBUG nova.compute.manager [-] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.236 280325 DEBUG nova.network.neutron [-] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.287 280325 DEBUG nova.network.neutron [-] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.311 280325 DEBUG nova.network.neutron [-] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.324 280325 INFO nova.compute.manager [-] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Took 0.09 seconds to deallocate network for instance.#033[00m Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.364 280325 DEBUG oslo_concurrency.lockutils [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.364 280325 DEBUG 
oslo_concurrency.lockutils [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:41 localhost neutron_sriov_agent[256355]: 2026-02-23 09:54:41.367 2 INFO neutron.agent.securitygroups_rpc [None req-ff2971e4-9240-4c6a-b550-c220da174a71 fb712af440b2428b8717631185f9fc4e 2fbe870428324feda18014285ef9eb40 - - default default] Security group member updated ['0982a37e-b389-40e3-834f-dcc14e42d01c']#033[00m Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.433 280325 DEBUG oslo_concurrency.processutils [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:54:41 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:54:41 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1822049600' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.884 280325 DEBUG oslo_concurrency.processutils [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.890 280325 DEBUG nova.compute.provider_tree [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.907 280325 DEBUG nova.scheduler.client.report [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:54:41 localhost nova_compute[280321]: 2026-02-23 09:54:41.932 280325 DEBUG oslo_concurrency.lockutils [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default 
default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:42 localhost nova_compute[280321]: 2026-02-23 09:54:42.002 280325 INFO nova.scheduler.client.report [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Deleted allocations for instance 5a91ac0a-3acf-4d43-b746-cab698d45279#033[00m Feb 23 04:54:42 localhost nova_compute[280321]: 2026-02-23 09:54:42.091 280325 DEBUG oslo_concurrency.lockutils [None req-cca90b93-cdd3-4a16-9b6a-f0b064eaf2ab 1a1005f6de264a15adb458e871a43e85 fbd2032d0b7545b2b091cbf2ff5c562d - - default default] Lock "5a91ac0a-3acf-4d43-b746-cab698d45279" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:42 localhost neutron_sriov_agent[256355]: 2026-02-23 09:54:42.131 2 INFO neutron.agent.securitygroups_rpc [None req-2742a8e1-f348-43b4-9fb2-dd2072837cd2 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Security group member updated ['5e2da0ff-f592-42de-9188-06e3b0bca61b']#033[00m Feb 23 04:54:42 localhost neutron_sriov_agent[256355]: 2026-02-23 09:54:42.230 2 INFO neutron.agent.securitygroups_rpc [None req-e6e1d23b-7f3f-4c9f-825c-f305c0d37186 fb712af440b2428b8717631185f9fc4e 2fbe870428324feda18014285ef9eb40 - - default default] Security group member updated ['0982a37e-b389-40e3-834f-dcc14e42d01c']#033[00m Feb 23 04:54:42 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:42.253 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:54:42 localhost nova_compute[280321]: 2026-02-23 09:54:42.458 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:42 localhost neutron_sriov_agent[256355]: 2026-02-23 09:54:42.481 2 INFO neutron.agent.securitygroups_rpc [None req-ca0adc89-f8c7-402b-bd8c-8b5b257e86b1 fb712af440b2428b8717631185f9fc4e 2fbe870428324feda18014285ef9eb40 - - default default] Security group member updated ['0982a37e-b389-40e3-834f-dcc14e42d01c']#033[00m Feb 23 04:54:42 localhost podman[241086]: time="2026-02-23T09:54:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:54:42 localhost podman[241086]: @ - - [23/Feb/2026:09:54:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1" Feb 23 04:54:42 localhost podman[241086]: @ - - [23/Feb/2026:09:54:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18290 "" "Go-http-client/1.1" Feb 23 04:54:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v129: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 6.9 MiB/s rd, 4.7 MiB/s wr, 265 op/s Feb 23 04:54:43 localhost nova_compute[280321]: 2026-02-23 09:54:43.613 280325 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 23 04:54:43 localhost nova_compute[280321]: 2026-02-23 09:54:43.613 280325 INFO nova.compute.manager [-] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] VM Stopped (Lifecycle Event)#033[00m Feb 23 04:54:43 localhost nova_compute[280321]: 2026-02-23 09:54:43.637 280325 DEBUG nova.compute.manager [None req-6de6fd3e-717c-4e7f-b1a0-6ad13655492d - - - - - -] [instance: 78070789-b766-4674-b4e1-8040cbf7346b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:54:44 localhost nova_compute[280321]: 2026-02-23 09:54:44.285 280325 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:44 localhost neutron_sriov_agent[256355]: 2026-02-23 09:54:44.637 2 INFO neutron.agent.securitygroups_rpc [None req-c06d8773-2479-4f88-85a7-f04d29c76a1d 0b7edff084ac4cda88d2d8f5182da779 b5e1135ba2724a69b072bbda0ea8476c - - default default] Security group member updated ['5e2da0ff-f592-42de-9188-06e3b0bca61b']#033[00m Feb 23 04:54:44 localhost dnsmasq[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/addn_hosts - 1 addresses Feb 23 04:54:44 localhost dnsmasq-dhcp[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/host Feb 23 04:54:44 localhost podman[308881]: 2026-02-23 09:54:44.853209337 +0000 UTC m=+0.048291418 container kill 3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5e383fe-b918-4723-9dbc-32201feec87d, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:54:44 localhost dnsmasq-dhcp[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/opts Feb 23 04:54:44 localhost systemd[1]: tmp-crun.W3mepf.mount: Deactivated successfully. Feb 23 04:54:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 04:54:44 localhost podman[308895]: 2026-02-23 09:54:44.973104962 +0000 UTC m=+0.093751427 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:54:45 localhost podman[308895]: 2026-02-23 09:54:45.015853489 +0000 UTC m=+0.136499924 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:54:45 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:54:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v130: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 6.9 MiB/s rd, 4.7 MiB/s wr, 264 op/s Feb 23 04:54:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e104 e104: 6 total, 6 up, 6 in Feb 23 04:54:45 localhost ovn_controller[155966]: 2026-02-23T09:54:45Z|00066|ovn_bfd|INFO|Disabled BFD on interface ovn-5b0126-0 Feb 23 04:54:45 localhost ovn_controller[155966]: 2026-02-23T09:54:45Z|00067|ovn_bfd|INFO|Disabled BFD on interface ovn-585d62-0 Feb 23 04:54:45 localhost ovn_controller[155966]: 2026-02-23T09:54:45Z|00068|ovn_bfd|INFO|Disabled BFD on interface ovn-b9c72d-0 Feb 23 04:54:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:54:45 localhost nova_compute[280321]: 2026-02-23 09:54:45.368 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:45 localhost nova_compute[280321]: 2026-02-23 09:54:45.390 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:45 localhost nova_compute[280321]: 2026-02-23 09:54:45.395 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:45 localhost dnsmasq[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/addn_hosts - 0 addresses Feb 23 04:54:45 localhost 
dnsmasq-dhcp[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/host Feb 23 04:54:45 localhost dnsmasq-dhcp[306231]: read /var/lib/neutron/dhcp/a5e383fe-b918-4723-9dbc-32201feec87d/opts Feb 23 04:54:45 localhost podman[308940]: 2026-02-23 09:54:45.489497298 +0000 UTC m=+0.065324949 container kill 3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5e383fe-b918-4723-9dbc-32201feec87d, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 23 04:54:45 localhost nova_compute[280321]: 2026-02-23 09:54:45.658 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:45 localhost kernel: device tapd092b295-de left promiscuous mode Feb 23 04:54:45 localhost ovn_controller[155966]: 2026-02-23T09:54:45Z|00069|binding|INFO|Releasing lport d092b295-de90-4b00-8eb2-21e2ea4d9d0b from this chassis (sb_readonly=0) Feb 23 04:54:45 localhost ovn_controller[155966]: 2026-02-23T09:54:45Z|00070|binding|INFO|Setting lport d092b295-de90-4b00-8eb2-21e2ea4d9d0b down in Southbound Feb 23 04:54:45 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:45.668 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-a5e383fe-b918-4723-9dbc-32201feec87d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5e383fe-b918-4723-9dbc-32201feec87d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5e1135ba2724a69b072bbda0ea8476c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a4ae72f-8c09-4559-aec5-36314af9e25d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d092b295-de90-4b00-8eb2-21e2ea4d9d0b) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:54:45 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:45.670 161842 INFO neutron.agent.ovn.metadata.agent [-] Port d092b295-de90-4b00-8eb2-21e2ea4d9d0b in datapath a5e383fe-b918-4723-9dbc-32201feec87d unbound from our chassis#033[00m Feb 23 04:54:45 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:45.674 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a5e383fe-b918-4723-9dbc-32201feec87d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:54:45 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:45.675 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[bf344c75-41d8-408b-af34-c63eaefe7447]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:54:45 localhost nova_compute[280321]: 2026-02-23 09:54:45.682 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:46 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e105 e105: 6 total, 6 up, 6 in Feb 23 04:54:46 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e106 e106: 6 total, 6 up, 6 in Feb 23 04:54:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v134: 177 pgs: 177 active+clean; 145 MiB data, 738 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 9.8 KiB/s wr, 193 op/s Feb 23 04:54:47 localhost nova_compute[280321]: 2026-02-23 09:54:47.461 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:47 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:47.550 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:54:47 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:47.550 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 04:54:47 localhost nova_compute[280321]: 2026-02-23 09:54:47.585 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:48.312 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock 
"_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:54:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:48.313 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:54:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:48.313 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:54:48 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:48.412 263679 INFO neutron.agent.linux.ip_lib [None req-61ca8ba6-cebb-4309-bb92-e6dfd3804b55 - - - - - -] Device tap831078ba-c7 cannot be used as it has no MAC address#033[00m Feb 23 04:54:48 localhost nova_compute[280321]: 2026-02-23 09:54:48.431 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:48 localhost kernel: device tap831078ba-c7 entered promiscuous mode Feb 23 04:54:48 localhost NetworkManager[5987]: <info>  [1771840488.4377] manager: (tap831078ba-c7): new Generic device (/org/freedesktop/NetworkManager/Devices/21) Feb 23 04:54:48 localhost nova_compute[280321]: 2026-02-23 09:54:48.439 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:54:48 localhost ovn_controller[155966]: 2026-02-23T09:54:48Z|00071|binding|INFO|Claiming lport 831078ba-c7a3-4780-ab01-529a82608622 for this chassis. 
Feb 23 04:54:48 localhost ovn_controller[155966]: 2026-02-23T09:54:48Z|00072|binding|INFO|831078ba-c7a3-4780-ab01-529a82608622: Claiming unknown
Feb 23 04:54:48 localhost systemd-udevd[308972]: Network interface NamePolicy= disabled on kernel command line.
Feb 23 04:54:48 localhost journal[229268]: ethtool ioctl error on tap831078ba-c7: No such device
Feb 23 04:54:48 localhost journal[229268]: ethtool ioctl error on tap831078ba-c7: No such device
Feb 23 04:54:48 localhost journal[229268]: ethtool ioctl error on tap831078ba-c7: No such device
Feb 23 04:54:48 localhost journal[229268]: ethtool ioctl error on tap831078ba-c7: No such device
Feb 23 04:54:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:48.472 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::1/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-2719b48e-ce39-480d-8a9c-b8bcb00e267e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2719b48e-ce39-480d-8a9c-b8bcb00e267e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fbe870428324feda18014285ef9eb40', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2ceb890-a631-47ca-88fd-2190cf07028a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=831078ba-c7a3-4780-ab01-529a82608622) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 23 04:54:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:48.473 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 831078ba-c7a3-4780-ab01-529a82608622 in datapath 2719b48e-ce39-480d-8a9c-b8bcb00e267e bound to our chassis
Feb 23 04:54:48 localhost ovn_controller[155966]: 2026-02-23T09:54:48Z|00073|binding|INFO|Setting lport 831078ba-c7a3-4780-ab01-529a82608622 ovn-installed in OVS
Feb 23 04:54:48 localhost ovn_controller[155966]: 2026-02-23T09:54:48Z|00074|binding|INFO|Setting lport 831078ba-c7a3-4780-ab01-529a82608622 up in Southbound
Feb 23 04:54:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:48.474 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2719b48e-ce39-480d-8a9c-b8bcb00e267e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 23 04:54:48 localhost nova_compute[280321]: 2026-02-23 09:54:48.475 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 23 04:54:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:48.476 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[c291c048-2704-4c3d-b9ef-00d4e7d0a9a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 23 04:54:48 localhost journal[229268]: ethtool ioctl error on tap831078ba-c7: No such device
Feb 23 04:54:48 localhost journal[229268]: ethtool ioctl error on tap831078ba-c7: No such device
Feb 23 04:54:48 localhost journal[229268]: ethtool ioctl error on tap831078ba-c7: No such device
Feb 23 04:54:48 localhost journal[229268]: ethtool ioctl error on tap831078ba-c7: No such device
Feb 23 04:54:48 localhost nova_compute[280321]: 2026-02-23 09:54:48.494 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 23 04:54:48 localhost nova_compute[280321]: 2026-02-23 09:54:48.514 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 23 04:54:48 localhost sshd[309016]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 04:54:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v135: 177 pgs: 177 active+clean; 145 MiB data, 738 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 5.3 KiB/s wr, 55 op/s
Feb 23 04:54:49 localhost podman[309045]:
Feb 23 04:54:49 localhost podman[309045]: 2026-02-23 09:54:49.269685342 +0000 UTC m=+0.081079429 container create 9dbd6174ba7454df174f40dea8514a6989fe5c917437c0dd3ce447db942e3178 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2719b48e-ce39-480d-8a9c-b8bcb00e267e, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 23 04:54:49 localhost nova_compute[280321]: 2026-02-23 09:54:49.286 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 23 04:54:49 localhost systemd[1]: Started libpod-conmon-9dbd6174ba7454df174f40dea8514a6989fe5c917437c0dd3ce447db942e3178.scope.
Feb 23 04:54:49 localhost systemd[1]: tmp-crun.0lWPHF.mount: Deactivated successfully.
Feb 23 04:54:49 localhost podman[309045]: 2026-02-23 09:54:49.2339524 +0000 UTC m=+0.045346527 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 23 04:54:49 localhost systemd[1]: Started libcrun container.
Feb 23 04:54:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d1cf6664a916c62cfe3070451d2c3b6bdf82df745a1fe98fc8fbf516048245/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 23 04:54:49 localhost podman[309045]: 2026-02-23 09:54:49.356535357 +0000 UTC m=+0.167929464 container init 9dbd6174ba7454df174f40dea8514a6989fe5c917437c0dd3ce447db942e3178 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2719b48e-ce39-480d-8a9c-b8bcb00e267e, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, tcib_managed=true)
Feb 23 04:54:49 localhost podman[309045]: 2026-02-23 09:54:49.365735668 +0000 UTC m=+0.177129785 container start 9dbd6174ba7454df174f40dea8514a6989fe5c917437c0dd3ce447db942e3178 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2719b48e-ce39-480d-8a9c-b8bcb00e267e, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0)
Feb 23 04:54:49 localhost dnsmasq[309061]: started, version 2.85 cachesize 150
Feb 23 04:54:49 localhost dnsmasq[309061]: DNS service limited to local subnets
Feb 23 04:54:49 localhost dnsmasq[309061]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 23 04:54:49 localhost dnsmasq[309061]: warning: no upstream servers configured
Feb 23 04:54:49 localhost dnsmasq-dhcp[309061]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Feb 23 04:54:49 localhost dnsmasq[309061]: read /var/lib/neutron/dhcp/2719b48e-ce39-480d-8a9c-b8bcb00e267e/addn_hosts - 0 addresses
Feb 23 04:54:49 localhost dnsmasq-dhcp[309061]: read /var/lib/neutron/dhcp/2719b48e-ce39-480d-8a9c-b8bcb00e267e/host
Feb 23 04:54:49 localhost dnsmasq-dhcp[309061]: read /var/lib/neutron/dhcp/2719b48e-ce39-480d-8a9c-b8bcb00e267e/opts
Feb 23 04:54:49 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:49.550 263679 INFO neutron.agent.dhcp.agent [None req-fa1a82c2-af55-4d68-afc3-91df95131b44 - - - - - -] DHCP configuration for ports {'00bde243-f68b-406c-b12d-4f83dc826691'} is completed
Feb 23 04:54:49 localhost podman[309078]: 2026-02-23 09:54:49.738652339 +0000 UTC m=+0.058258842 container kill 9dbd6174ba7454df174f40dea8514a6989fe5c917437c0dd3ce447db942e3178 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2719b48e-ce39-480d-8a9c-b8bcb00e267e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 04:54:49 localhost dnsmasq[309061]: exiting on receipt of SIGTERM
Feb 23 04:54:49 localhost systemd[1]: libpod-9dbd6174ba7454df174f40dea8514a6989fe5c917437c0dd3ce447db942e3178.scope: Deactivated successfully.
Feb 23 04:54:49 localhost podman[309092]: 2026-02-23 09:54:49.810889956 +0000 UTC m=+0.057371764 container died 9dbd6174ba7454df174f40dea8514a6989fe5c917437c0dd3ce447db942e3178 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2719b48e-ce39-480d-8a9c-b8bcb00e267e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 23 04:54:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:49.826 161842 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 31cf5b97-e348-4a02-acff-ca46dd6c7b44 with type ""
Feb 23 04:54:49 localhost ovn_controller[155966]: 2026-02-23T09:54:49Z|00075|binding|INFO|Removing iface tap831078ba-c7 ovn-installed in OVS
Feb 23 04:54:49 localhost ovn_controller[155966]: 2026-02-23T09:54:49Z|00076|binding|INFO|Removing lport 831078ba-c7a3-4780-ab01-529a82608622 ovn-installed in OVS
Feb 23 04:54:49 localhost nova_compute[280321]: 2026-02-23 09:54:49.828 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 23 04:54:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:49.829 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-2719b48e-ce39-480d-8a9c-b8bcb00e267e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2719b48e-ce39-480d-8a9c-b8bcb00e267e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fbe870428324feda18014285ef9eb40', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2ceb890-a631-47ca-88fd-2190cf07028a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=831078ba-c7a3-4780-ab01-529a82608622) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 23 04:54:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:49.832 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 831078ba-c7a3-4780-ab01-529a82608622 in datapath 2719b48e-ce39-480d-8a9c-b8bcb00e267e unbound from our chassis
Feb 23 04:54:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:49.834 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2719b48e-ce39-480d-8a9c-b8bcb00e267e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 23 04:54:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:49.835 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a927f3-f14b-4559-be48-888d42edcf91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 23 04:54:49 localhost nova_compute[280321]: 2026-02-23 09:54:49.837 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 23 04:54:49 localhost podman[309092]: 2026-02-23 09:54:49.849217788 +0000 UTC m=+0.095699546 container cleanup 9dbd6174ba7454df174f40dea8514a6989fe5c917437c0dd3ce447db942e3178 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2719b48e-ce39-480d-8a9c-b8bcb00e267e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260216)
Feb 23 04:54:49 localhost systemd[1]: libpod-conmon-9dbd6174ba7454df174f40dea8514a6989fe5c917437c0dd3ce447db942e3178.scope: Deactivated successfully.
Feb 23 04:54:49 localhost podman[309094]: 2026-02-23 09:54:49.891770539 +0000 UTC m=+0.132880453 container remove 9dbd6174ba7454df174f40dea8514a6989fe5c917437c0dd3ce447db942e3178 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2719b48e-ce39-480d-8a9c-b8bcb00e267e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 23 04:54:49 localhost nova_compute[280321]: 2026-02-23 09:54:49.902 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 23 04:54:49 localhost kernel: device tap831078ba-c7 left promiscuous mode
Feb 23 04:54:49 localhost nova_compute[280321]: 2026-02-23 09:54:49.913 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 23 04:54:49 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:49.925 263679 INFO neutron.agent.dhcp.agent [None req-fe5eba4a-8711-4b27-8d0c-23a708e2e68a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 23 04:54:50 localhost dnsmasq[306231]: exiting on receipt of SIGTERM
Feb 23 04:54:50 localhost podman[309138]: 2026-02-23 09:54:50.026961271 +0000 UTC m=+0.055191747 container kill 3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5e383fe-b918-4723-9dbc-32201feec87d, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 04:54:50 localhost systemd[1]: libpod-3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a.scope: Deactivated successfully.
Feb 23 04:54:50 localhost podman[309152]: 2026-02-23 09:54:50.094534738 +0000 UTC m=+0.056527000 container died 3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5e383fe-b918-4723-9dbc-32201feec87d, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216)
Feb 23 04:54:50 localhost podman[309152]: 2026-02-23 09:54:50.121939044 +0000 UTC m=+0.083931276 container cleanup 3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5e383fe-b918-4723-9dbc-32201feec87d, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 23 04:54:50 localhost systemd[1]: libpod-conmon-3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a.scope: Deactivated successfully.
Feb 23 04:54:50 localhost podman[309157]: 2026-02-23 09:54:50.173353567 +0000 UTC m=+0.125002603 container remove 3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5e383fe-b918-4723-9dbc-32201feec87d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216)
Feb 23 04:54:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:50.202 263679 INFO neutron.agent.dhcp.agent [None req-f555e165-846e-4db8-a989-990842d18de1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 23 04:54:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:50.203 263679 INFO neutron.agent.dhcp.agent [None req-f555e165-846e-4db8-a989-990842d18de1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 23 04:54:50 localhost nova_compute[280321]: 2026-02-23 09:54:50.313 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 23 04:54:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 23 04:54:50 localhost systemd[1]: var-lib-containers-storage-overlay-e0d1cf6664a916c62cfe3070451d2c3b6bdf82df745a1fe98fc8fbf516048245-merged.mount: Deactivated successfully.
Feb 23 04:54:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9dbd6174ba7454df174f40dea8514a6989fe5c917437c0dd3ce447db942e3178-userdata-shm.mount: Deactivated successfully.
Feb 23 04:54:50 localhost systemd[1]: run-netns-qdhcp\x2d2719b48e\x2dce39\x2d480d\x2d8a9c\x2db8bcb00e267e.mount: Deactivated successfully.
Feb 23 04:54:50 localhost systemd[1]: var-lib-containers-storage-overlay-535624e1c3195fb2fd0ed1f6149adbaacb68a65ad18280cc256e2fa9acdfb11f-merged.mount: Deactivated successfully.
Feb 23 04:54:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e0f60f786602ca8c317da99844fd49c902c91f4ffc2dd8945506f253797b61a-userdata-shm.mount: Deactivated successfully.
Feb 23 04:54:50 localhost systemd[1]: run-netns-qdhcp\x2da5e383fe\x2db918\x2d4723\x2d9dbc\x2d32201feec87d.mount: Deactivated successfully.
Feb 23 04:54:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:50.399 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 23 04:54:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v136: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 5.5 KiB/s wr, 73 op/s
Feb 23 04:54:51 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e107 e107: 6 total, 6 up, 6 in
Feb 23 04:54:52 localhost nova_compute[280321]: 2026-02-23 09:54:52.462 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 23 04:54:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Feb 23 04:54:53 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:53.422 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 23 04:54:54 localhost nova_compute[280321]: 2026-02-23 09:54:54.288 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 23 04:54:54 localhost ovn_metadata_agent[161837]: 2026-02-23 09:54:54.553 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 23 04:54:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 124 B/s wr, 12 op/s
Feb 23 04:54:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 23 04:54:55 localhost nova_compute[280321]: 2026-02-23 09:54:55.566 280325 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 23 04:54:55 localhost nova_compute[280321]: 2026-02-23 09:54:55.567 280325 INFO nova.compute.manager [-] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] VM Stopped (Lifecycle Event)
Feb 23 04:54:55 localhost nova_compute[280321]: 2026-02-23 09:54:55.592 280325 DEBUG nova.compute.manager [None req-b9a16205-447a-4dab-a97b-de72ec91fab8 - - - - - -] [instance: 5a91ac0a-3acf-4d43-b746-cab698d45279] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.090 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:54:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 04:54:56 localhost neutron_sriov_agent[256355]: 2026-02-23 09:54:56.249 2 INFO neutron.agent.securitygroups_rpc [None req-e4e33ab4-a498-48c3-b62b-d466306b0b2c 8c67fb6133284335807155391776f7a4 35e3e6665f014caf91b19ef9e685a75a - - default default] Security group member updated ['18ad37ac-3bf6-435c-949b-384a2e1dc20f']
Feb 23 04:54:56 localhost neutron_sriov_agent[256355]: 2026-02-23 09:54:56.732 2 INFO neutron.agent.securitygroups_rpc [None req-5f241d63-4aea-41aa-b22a-dca2e6055a49 8c67fb6133284335807155391776f7a4 35e3e6665f014caf91b19ef9e685a75a - - default default] Security group member updated ['18ad37ac-3bf6-435c-949b-384a2e1dc20f']
Feb 23 04:54:56 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:54:56.760 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 23 04:54:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 8.7 KiB/s rd, 102 B/s wr, 10 op/s
Feb 23 04:54:57 localhost nova_compute[280321]: 2026-02-23 09:54:57.464 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 23 04:54:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.
Feb 23 04:54:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.
Feb 23 04:54:58 localhost podman[309183]: 2026-02-23 09:54:58.04299678 +0000 UTC m=+0.086749833 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors )
Feb 23 04:54:58 localhost podman[309183]: 2026-02-23 09:54:58.077390511 +0000 UTC m=+0.121143614 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Feb 23 04:54:58 localhost systemd[1]: tmp-crun.0RAsF8.mount: Deactivated successfully.
Feb 23 04:54:58 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully.
Feb 23 04:54:58 localhost podman[309184]: 2026-02-23 09:54:58.096575218 +0000 UTC m=+0.136281238 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9/ubi-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Feb 23 04:54:58 localhost podman[309184]: 2026-02-23 09:54:58.106536323 +0000 UTC m=+0.146242373 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, config_id=openstack_network_exporter, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, vcs-type=git, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9) Feb 23 04:54:58 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 04:54:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 8.7 KiB/s rd, 102 B/s wr, 10 op/s Feb 23 04:54:59 localhost nova_compute[280321]: 2026-02-23 09:54:59.290 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:55:00 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:00.507 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:55:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v142: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail Feb 23 04:55:01 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:01.595 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:55:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:55:01 localhost openstack_network_exporter[243519]: ERROR 09:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:55:01 localhost openstack_network_exporter[243519]: Feb 23 04:55:01 localhost openstack_network_exporter[243519]: ERROR 09:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:55:01 localhost openstack_network_exporter[243519]: Feb 23 04:55:02 localhost systemd[1]: tmp-crun.OZozWy.mount: Deactivated successfully. 
Feb 23 04:55:02 localhost podman[309225]: 2026-02-23 09:55:02.031275706 +0000 UTC m=+0.107000643 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:55:02 localhost podman[309225]: 2026-02-23 09:55:02.063788049 +0000 UTC m=+0.139512996 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20260216) Feb 23 04:55:02 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:02.076 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:55:02 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:55:02 localhost nova_compute[280321]: 2026-02-23 09:55:02.466 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail Feb 23 04:55:03 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:03.978 2 INFO neutron.agent.securitygroups_rpc [None req-d42fa55b-3158-4a2c-8987-32ad0eaad7e9 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Security group rule updated ['d03de417-eb2e-47e8-ad59-eae56add5dd4']#033[00m Feb 23 04:55:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:04.008 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:55:04 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:04.345 2 INFO neutron.agent.securitygroups_rpc [None req-e1f7edff-cd43-4ce1-b8ac-263a3462cef1 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Security group rule updated ['d03de417-eb2e-47e8-ad59-eae56add5dd4']#033[00m Feb 23 04:55:04 localhost nova_compute[280321]: 2026-02-23 09:55:04.624 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_09:55:05 Feb 23 04:55:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 04:55:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap Feb 23 04:55:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['vms', 'manila_data', 'backups', 'volumes', 'manila_metadata', 'images', '.mgr'] Feb 23 04:55:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes Feb 23 04:55:05 localhost ceph-mgr[285904]: 
log_channel(cluster) log [DBG] : pgmap v144: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail Feb 23 04:55:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:55:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:55:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:55:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:55:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 04:55:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:55:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 23 04:55:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:55:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32) Feb 23 04:55:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:55:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:55:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:55:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 23 04:55:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:55:05 localhost ceph-mgr[285904]: [pg_autoscaler 
INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:55:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:55:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:55:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:55:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Feb 23 04:55:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:55:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:55:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 04:55:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:55:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 04:55:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:55:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:55:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:55:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:55:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:55:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:55:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:55:05 localhost 
neutron_dhcp_agent[263675]: 2026-02-23 09:55:05.336 263679 INFO neutron.agent.linux.ip_lib [None req-aac0aef0-80ee-4776-a254-970cefaed8f9 - - - - - -] Device tap8c0a697f-22 cannot be used as it has no MAC address#033[00m Feb 23 04:55:05 localhost nova_compute[280321]: 2026-02-23 09:55:05.353 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:05 localhost kernel: device tap8c0a697f-22 entered promiscuous mode Feb 23 04:55:05 localhost NetworkManager[5987]: [1771840505.3589] manager: (tap8c0a697f-22): new Generic device (/org/freedesktop/NetworkManager/Devices/22) Feb 23 04:55:05 localhost ovn_controller[155966]: 2026-02-23T09:55:05Z|00077|binding|INFO|Claiming lport 8c0a697f-2286-435e-8ce7-d3f6218fc156 for this chassis. Feb 23 04:55:05 localhost ovn_controller[155966]: 2026-02-23T09:55:05Z|00078|binding|INFO|8c0a697f-2286-435e-8ce7-d3f6218fc156: Claiming unknown Feb 23 04:55:05 localhost nova_compute[280321]: 2026-02-23 09:55:05.361 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:05 localhost systemd-udevd[309261]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 04:55:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:55:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:05.372 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ac6a6009ea84eb99f60bd242e459002', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a82ecfbd-c671-4216-ac11-086490c80ba6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8c0a697f-2286-435e-8ce7-d3f6218fc156) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:05.374 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 8c0a697f-2286-435e-8ce7-d3f6218fc156 in datapath 488344bb-b2b1-4b3f-933b-1a9bfdff1d5c bound to our chassis#033[00m Feb 23 04:55:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:05.375 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is 
no metadata port for network 488344bb-b2b1-4b3f-933b-1a9bfdff1d5c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:55:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:05.376 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[e3829ba4-7868-42c1-b976-9a6d53a7942e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:05 localhost journal[229268]: ethtool ioctl error on tap8c0a697f-22: No such device Feb 23 04:55:05 localhost journal[229268]: ethtool ioctl error on tap8c0a697f-22: No such device Feb 23 04:55:05 localhost ovn_controller[155966]: 2026-02-23T09:55:05Z|00079|binding|INFO|Setting lport 8c0a697f-2286-435e-8ce7-d3f6218fc156 ovn-installed in OVS Feb 23 04:55:05 localhost ovn_controller[155966]: 2026-02-23T09:55:05Z|00080|binding|INFO|Setting lport 8c0a697f-2286-435e-8ce7-d3f6218fc156 up in Southbound Feb 23 04:55:05 localhost nova_compute[280321]: 2026-02-23 09:55:05.395 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:05 localhost nova_compute[280321]: 2026-02-23 09:55:05.397 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:05 localhost journal[229268]: ethtool ioctl error on tap8c0a697f-22: No such device Feb 23 04:55:05 localhost journal[229268]: ethtool ioctl error on tap8c0a697f-22: No such device Feb 23 04:55:05 localhost journal[229268]: ethtool ioctl error on tap8c0a697f-22: No such device Feb 23 04:55:05 localhost journal[229268]: ethtool ioctl error on tap8c0a697f-22: No such device Feb 23 04:55:05 localhost journal[229268]: ethtool ioctl error on tap8c0a697f-22: No such device Feb 23 04:55:05 localhost nova_compute[280321]: 2026-02-23 
09:55:05.417 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:05 localhost journal[229268]: ethtool ioctl error on tap8c0a697f-22: No such device Feb 23 04:55:05 localhost nova_compute[280321]: 2026-02-23 09:55:05.442 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:06 localhost podman[309332]: Feb 23 04:55:06 localhost podman[309332]: 2026-02-23 09:55:06.193831707 +0000 UTC m=+0.094091896 container create d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 04:55:06 localhost systemd[1]: Started libpod-conmon-d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e.scope. Feb 23 04:55:06 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:06.237 2 INFO neutron.agent.securitygroups_rpc [None req-ba0f42b0-8ba1-470a-9030-cc088e382d98 dcce68a4e6d440099c8b52030a278ab7 1349075215be49eda0b375e59aa77e22 - - default default] Security group member updated ['1a30abeb-10f2-4401-bae3-62a7c905b8e3']#033[00m Feb 23 04:55:06 localhost systemd[1]: tmp-crun.3cFtxH.mount: Deactivated successfully. Feb 23 04:55:06 localhost podman[309332]: 2026-02-23 09:55:06.148328106 +0000 UTC m=+0.048588295 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:55:06 localhost systemd[1]: Started libcrun container. 
Feb 23 04:55:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b291ffb6bf0f9d41fbe4e0dd045821d5154d8cf2dbc3a8ea13d60b93da0259d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:55:06 localhost podman[309332]: 2026-02-23 09:55:06.275157813 +0000 UTC m=+0.175418372 container init d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 04:55:06 localhost podman[309332]: 2026-02-23 09:55:06.286587493 +0000 UTC m=+0.186847692 container start d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 23 04:55:06 localhost systemd[1]: tmp-crun.1TX6dI.mount: Deactivated successfully. 
Feb 23 04:55:06 localhost dnsmasq[309350]: started, version 2.85 cachesize 150 Feb 23 04:55:06 localhost dnsmasq[309350]: DNS service limited to local subnets Feb 23 04:55:06 localhost dnsmasq[309350]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:55:06 localhost dnsmasq[309350]: warning: no upstream servers configured Feb 23 04:55:06 localhost dnsmasq-dhcp[309350]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:55:06 localhost dnsmasq[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/addn_hosts - 0 addresses Feb 23 04:55:06 localhost dnsmasq-dhcp[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/host Feb 23 04:55:06 localhost dnsmasq-dhcp[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/opts Feb 23 04:55:06 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:06.384 2 INFO neutron.agent.securitygroups_rpc [None req-ba0f42b0-8ba1-470a-9030-cc088e382d98 dcce68a4e6d440099c8b52030a278ab7 1349075215be49eda0b375e59aa77e22 - - default default] Security group member updated ['1a30abeb-10f2-4401-bae3-62a7c905b8e3']#033[00m Feb 23 04:55:06 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:06.430 263679 INFO neutron.agent.dhcp.agent [None req-1923a560-bdc2-4eb5-8b93-879bd4e61bf6 - - - - - -] DHCP configuration for ports {'b411e7e0-662a-45b2-a6f4-7c955d5b0de7'} is completed#033[00m Feb 23 04:55:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail Feb 23 04:55:07 localhost nova_compute[280321]: 2026-02-23 09:55:07.467 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:07 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:07.516 2 INFO 
neutron.agent.securitygroups_rpc [None req-d0d1b9f7-4125-4c2c-a5f0-a4480a3b4692 dcce68a4e6d440099c8b52030a278ab7 1349075215be49eda0b375e59aa77e22 - - default default] Security group member updated ['1a30abeb-10f2-4401-bae3-62a7c905b8e3']#033[00m Feb 23 04:55:07 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:07.877 2 INFO neutron.agent.securitygroups_rpc [None req-295aa794-6290-4e56-bed7-db18bb8fb456 dcce68a4e6d440099c8b52030a278ab7 1349075215be49eda0b375e59aa77e22 - - default default] Security group member updated ['1a30abeb-10f2-4401-bae3-62a7c905b8e3']#033[00m Feb 23 04:55:08 localhost nova_compute[280321]: 2026-02-23 09:55:08.325 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Acquiring lock "85a9c2c0-3a8d-44ce-954f-e106841e2068" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:55:08 localhost nova_compute[280321]: 2026-02-23 09:55:08.325 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lock "85a9c2c0-3a8d-44ce-954f-e106841e2068" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:55:08 localhost nova_compute[280321]: 2026-02-23 09:55:08.344 280325 DEBUG nova.compute.manager [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Starting instance... 
_do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Feb 23 04:55:08 localhost nova_compute[280321]: 2026-02-23 09:55:08.443 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:55:08 localhost nova_compute[280321]: 2026-02-23 09:55:08.444 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:55:08 localhost nova_compute[280321]: 2026-02-23 09:55:08.449 280325 DEBUG nova.virt.hardware [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Feb 23 04:55:08 localhost nova_compute[280321]: 2026-02-23 09:55:08.450 280325 INFO nova.compute.claims [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Claim successful on node np0005626465.localdomain#033[00m Feb 23 04:55:08 localhost nova_compute[280321]: 2026-02-23 09:55:08.588 280325 DEBUG oslo_concurrency.processutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:55:09 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:55:09 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3228397301' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.016 280325 DEBUG oslo_concurrency.processutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.023 280325 DEBUG nova.compute.provider_tree [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.040 280325 DEBUG nova.scheduler.client.report [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.068 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default 
default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.069 280325 DEBUG nova.compute.manager [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Feb 23 04:55:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v146: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.120 280325 DEBUG nova.compute.manager [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.121 280325 DEBUG nova.network.neutron [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.135 280325 INFO nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Ignoring supplied device name: /dev/vda. 
Libvirt can't honour user-supplied dev names#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.156 280325 DEBUG nova.compute.manager [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.262 280325 DEBUG nova.compute.manager [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.263 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.264 280325 INFO nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Creating image(s)#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.300 280325 DEBUG nova.storage.rbd_utils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] rbd image 85a9c2c0-3a8d-44ce-954f-e106841e2068_disk does not exist __init__ 
/usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.338 280325 DEBUG nova.storage.rbd_utils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] rbd image 85a9c2c0-3a8d-44ce-954f-e106841e2068_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.376 280325 DEBUG nova.storage.rbd_utils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] rbd image 85a9c2c0-3a8d-44ce-954f-e106841e2068_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.380 280325 DEBUG oslo_concurrency.processutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/be7ecb9fde249dcbd37d38278f2f533f45a26c75 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.457 280325 DEBUG oslo_concurrency.processutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/be7ecb9fde249dcbd37d38278f2f533f45a26c75 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:55:09 
localhost nova_compute[280321]: 2026-02-23 09:55:09.458 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Acquiring lock "be7ecb9fde249dcbd37d38278f2f533f45a26c75" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.459 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lock "be7ecb9fde249dcbd37d38278f2f533f45a26c75" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.459 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lock "be7ecb9fde249dcbd37d38278f2f533f45a26c75" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.497 280325 DEBUG nova.storage.rbd_utils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] rbd image 85a9c2c0-3a8d-44ce-954f-e106841e2068_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.505 280325 DEBUG oslo_concurrency.processutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default
default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/be7ecb9fde249dcbd37d38278f2f533f45a26c75 85a9c2c0-3a8d-44ce-954f-e106841e2068_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.524 280325 DEBUG nova.policy [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2b38675f57640819bf191ad8152e7cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7f67087411544c55a9225236eb297b90', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m Feb 23 04:55:09 localhost nova_compute[280321]: 2026-02-23 09:55:09.670 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:55:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 04:55:09 localhost podman[309468]: 2026-02-23 09:55:09.994049335 +0000 UTC m=+0.066721711 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:55:10 localhost podman[309468]: 2026-02-23 09:55:10.030647913 +0000 UTC m=+0.103320269 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS) Feb 23 04:55:10 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:55:10 localhost podman[309467]: 2026-02-23 09:55:10.049519731 +0000 UTC m=+0.123181297 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Feb 23 04:55:10 localhost nova_compute[280321]: 2026-02-23 09:55:10.071 280325 DEBUG oslo_concurrency.processutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/be7ecb9fde249dcbd37d38278f2f533f45a26c75 85a9c2c0-3a8d-44ce-954f-e106841e2068_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:55:10 localhost podman[309467]: 2026-02-23 09:55:10.085856601 +0000 UTC m=+0.159518167 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 23 04:55:10 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:55:10 localhost nova_compute[280321]: 2026-02-23 09:55:10.152 280325 DEBUG nova.storage.rbd_utils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] resizing rbd image 85a9c2c0-3a8d-44ce-954f-e106841e2068_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Feb 23 04:55:10 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:10.290 2 INFO neutron.agent.securitygroups_rpc [req-54148787-22c3-403e-9c83-5d532e459a95 req-dcd0fbb9-2ed1-469d-8be3-13ca7cbeb9c8 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Security group member updated ['d03de417-eb2e-47e8-ad59-eae56add5dd4']#033[00m Feb 23 04:55:10 localhost nova_compute[280321]: 2026-02-23 09:55:10.300 280325 DEBUG nova.objects.instance [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lazy-loading 'migration_context' on Instance uuid 85a9c2c0-3a8d-44ce-954f-e106841e2068 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 
04:55:10 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:10.323 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:55:09Z, description=, device_id=e5ed268f-4779-4471-8a91-330bc33a7a69, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e5afffd4-98ab-418c-b3d3-38daa2e2ed3f, ip_allocation=immediate, mac_address=fa:16:3e:c1:57:92, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:55:02Z, description=, dns_domain=, id=488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-664573168-network, port_security_enabled=True, project_id=2ac6a6009ea84eb99f60bd242e459002, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48703, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=874, status=ACTIVE, subnets=['99453764-9a6a-431a-83e7-2540619cee45'], tags=[], tenant_id=2ac6a6009ea84eb99f60bd242e459002, updated_at=2026-02-23T09:55:04Z, vlan_transparent=None, network_id=488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, port_security_enabled=False, project_id=2ac6a6009ea84eb99f60bd242e459002, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=917, status=DOWN, tags=[], tenant_id=2ac6a6009ea84eb99f60bd242e459002, updated_at=2026-02-23T09:55:10Z on network 488344bb-b2b1-4b3f-933b-1a9bfdff1d5c#033[00m Feb 23 04:55:10 localhost nova_compute[280321]: 2026-02-23 09:55:10.330 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 
7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Feb 23 04:55:10 localhost nova_compute[280321]: 2026-02-23 09:55:10.330 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Ensure instance console log exists: /var/lib/nova/instances/85a9c2c0-3a8d-44ce-954f-e106841e2068/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Feb 23 04:55:10 localhost nova_compute[280321]: 2026-02-23 09:55:10.330 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:55:10 localhost nova_compute[280321]: 2026-02-23 09:55:10.331 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:55:10 localhost nova_compute[280321]: 2026-02-23 09:55:10.331 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:55:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:55:10 localhost dnsmasq[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/addn_hosts - 1 addresses Feb 23 04:55:10 localhost dnsmasq-dhcp[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/host Feb 23 04:55:10 localhost dnsmasq-dhcp[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/opts Feb 23 04:55:10 localhost podman[309592]: 2026-02-23 09:55:10.509561373 +0000 UTC m=+0.046715369 container kill d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:55:10 localhost systemd[1]: tmp-crun.8Y9Eqj.mount: Deactivated successfully. 
Feb 23 04:55:10 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:10.726 263679 INFO neutron.agent.dhcp.agent [None req-982f4908-afad-494f-a6fe-88c605c205e5 - - - - - -] DHCP configuration for ports {'e5afffd4-98ab-418c-b3d3-38daa2e2ed3f'} is completed#033[00m Feb 23 04:55:10 localhost nova_compute[280321]: 2026-02-23 09:55:10.774 280325 DEBUG nova.network.neutron [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Successfully created port: d9173d59-2a74-47ac-9a53-29ece647303c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m Feb 23 04:55:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v147: 177 pgs: 177 active+clean; 163 MiB data, 780 MiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 1010 KiB/s wr, 2 op/s Feb 23 04:55:12 localhost nova_compute[280321]: 2026-02-23 09:55:12.469 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:12 localhost podman[241086]: time="2026-02-23T09:55:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:55:12 localhost podman[241086]: @ - - [23/Feb/2026:09:55:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1" Feb 23 04:55:12 localhost podman[241086]: @ - - [23/Feb/2026:09:55:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18284 "" "Go-http-client/1.1" Feb 23 04:55:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 177 active+clean; 192 MiB data, 806 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s Feb 23 04:55:13 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:13.605 263679 INFO 
neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:55:09Z, description=, device_id=e5ed268f-4779-4471-8a91-330bc33a7a69, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e5afffd4-98ab-418c-b3d3-38daa2e2ed3f, ip_allocation=immediate, mac_address=fa:16:3e:c1:57:92, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:55:02Z, description=, dns_domain=, id=488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-664573168-network, port_security_enabled=True, project_id=2ac6a6009ea84eb99f60bd242e459002, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48703, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=874, status=ACTIVE, subnets=['99453764-9a6a-431a-83e7-2540619cee45'], tags=[], tenant_id=2ac6a6009ea84eb99f60bd242e459002, updated_at=2026-02-23T09:55:04Z, vlan_transparent=None, network_id=488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, port_security_enabled=False, project_id=2ac6a6009ea84eb99f60bd242e459002, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=917, status=DOWN, tags=[], tenant_id=2ac6a6009ea84eb99f60bd242e459002, updated_at=2026-02-23T09:55:10Z on network 488344bb-b2b1-4b3f-933b-1a9bfdff1d5c#033[00m Feb 23 04:55:13 localhost nova_compute[280321]: 2026-02-23 09:55:13.684 280325 DEBUG nova.network.neutron [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Successfully 
updated port: d9173d59-2a74-47ac-9a53-29ece647303c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m Feb 23 04:55:13 localhost nova_compute[280321]: 2026-02-23 09:55:13.704 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Acquiring lock "refresh_cache-85a9c2c0-3a8d-44ce-954f-e106841e2068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:55:13 localhost nova_compute[280321]: 2026-02-23 09:55:13.705 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Acquired lock "refresh_cache-85a9c2c0-3a8d-44ce-954f-e106841e2068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:55:13 localhost nova_compute[280321]: 2026-02-23 09:55:13.706 280325 DEBUG nova.network.neutron [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 23 04:55:13 localhost nova_compute[280321]: 2026-02-23 09:55:13.748 280325 DEBUG nova.compute.manager [req-77145b69-8372-49b0-b68e-7da068dbb183 req-60bde632-61cc-4753-ad71-4b1e17ef8926 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Received event network-changed-d9173d59-2a74-47ac-9a53-29ece647303c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:55:13 localhost nova_compute[280321]: 2026-02-23 09:55:13.748 280325 DEBUG nova.compute.manager [req-77145b69-8372-49b0-b68e-7da068dbb183 
req-60bde632-61cc-4753-ad71-4b1e17ef8926 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Refreshing instance network info cache due to event network-changed-d9173d59-2a74-47ac-9a53-29ece647303c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Feb 23 04:55:13 localhost nova_compute[280321]: 2026-02-23 09:55:13.749 280325 DEBUG oslo_concurrency.lockutils [req-77145b69-8372-49b0-b68e-7da068dbb183 req-60bde632-61cc-4753-ad71-4b1e17ef8926 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquiring lock "refresh_cache-85a9c2c0-3a8d-44ce-954f-e106841e2068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:55:13 localhost nova_compute[280321]: 2026-02-23 09:55:13.793 280325 DEBUG nova.network.neutron [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Feb 23 04:55:13 localhost dnsmasq[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/addn_hosts - 1 addresses Feb 23 04:55:13 localhost podman[309631]: 2026-02-23 09:55:13.823099913 +0000 UTC m=+0.071276961 container kill d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:55:13 localhost dnsmasq-dhcp[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/host Feb 23 04:55:13 localhost dnsmasq-dhcp[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/opts Feb 23 04:55:13 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:13.829 263679 INFO neutron.agent.linux.ip_lib [None req-0e132ab3-b86c-4cf9-a05b-49c9b9d0bb93 - - - - - -] Device tap3c059380-95 cannot be used as it has no MAC address#033[00m Feb 23 04:55:13 localhost nova_compute[280321]: 2026-02-23 09:55:13.901 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:13 localhost kernel: device tap3c059380-95 entered promiscuous mode Feb 23 04:55:13 localhost NetworkManager[5987]: <info>  [1771840513.9096] manager: (tap3c059380-95): new Generic device (/org/freedesktop/NetworkManager/Devices/23) Feb 23 04:55:13 localhost nova_compute[280321]: 2026-02-23 09:55:13.909 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:13 localhost ovn_controller[155966]: 2026-02-23T09:55:13Z|00081|binding|INFO|Claiming lport 3c059380-9509-4e38-9c35-5d03e3a0176a for this chassis. Feb 23 04:55:13 localhost ovn_controller[155966]: 2026-02-23T09:55:13Z|00082|binding|INFO|3c059380-9509-4e38-9c35-5d03e3a0176a: Claiming unknown Feb 23 04:55:13 localhost systemd-udevd[309656]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:55:13 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:13.923 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-6374b34b-9b04-4cdf-80c7-26c5c5e0e257', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6374b34b-9b04-4cdf-80c7-26c5c5e0e257', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1349075215be49eda0b375e59aa77e22', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe16e6e5-b377-42f5-abd5-19876370c4f4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3c059380-9509-4e38-9c35-5d03e3a0176a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:13 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:13.925 161842 INFO 
neutron.agent.ovn.metadata.agent [-] Port 3c059380-9509-4e38-9c35-5d03e3a0176a in datapath 6374b34b-9b04-4cdf-80c7-26c5c5e0e257 bound to our chassis#033[00m Feb 23 04:55:13 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:13.928 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6374b34b-9b04-4cdf-80c7-26c5c5e0e257 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:55:13 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:13.929 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0f1baf-6244-40cb-80d9-eb29cbc28dc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:13 localhost journal[229268]: ethtool ioctl error on tap3c059380-95: No such device Feb 23 04:55:13 localhost journal[229268]: ethtool ioctl error on tap3c059380-95: No such device Feb 23 04:55:13 localhost journal[229268]: ethtool ioctl error on tap3c059380-95: No such device Feb 23 04:55:13 localhost nova_compute[280321]: 2026-02-23 09:55:13.943 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:13 localhost ovn_controller[155966]: 2026-02-23T09:55:13Z|00083|binding|INFO|Setting lport 3c059380-9509-4e38-9c35-5d03e3a0176a ovn-installed in OVS Feb 23 04:55:13 localhost ovn_controller[155966]: 2026-02-23T09:55:13Z|00084|binding|INFO|Setting lport 3c059380-9509-4e38-9c35-5d03e3a0176a up in Southbound Feb 23 04:55:13 localhost journal[229268]: ethtool ioctl error on tap3c059380-95: No such device Feb 23 04:55:13 localhost nova_compute[280321]: 2026-02-23 09:55:13.948 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:13 localhost nova_compute[280321]: 2026-02-23 
09:55:13.949 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:13 localhost journal[229268]: ethtool ioctl error on tap3c059380-95: No such device Feb 23 04:55:13 localhost journal[229268]: ethtool ioctl error on tap3c059380-95: No such device Feb 23 04:55:13 localhost journal[229268]: ethtool ioctl error on tap3c059380-95: No such device Feb 23 04:55:13 localhost journal[229268]: ethtool ioctl error on tap3c059380-95: No such device Feb 23 04:55:13 localhost nova_compute[280321]: 2026-02-23 09:55:13.983 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.011 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:14 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:14.085 263679 INFO neutron.agent.dhcp.agent [None req-6c32fe90-a207-4a8d-bc66-e86929a46bff - - - - - -] DHCP configuration for ports {'e5afffd4-98ab-418c-b3d3-38daa2e2ed3f'} is completed#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.586 280325 DEBUG nova.network.neutron [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Updating instance_info_cache with network_info: [{"id": "d9173d59-2a74-47ac-9a53-29ece647303c", "address": "fa:16:3e:4b:ba:fc", "network": {"id": "55691124-ab57-4829-87d9-12148e1fa008", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1468165471-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "7f67087411544c55a9225236eb297b90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9173d59-2a", "ovs_interfaceid": "d9173d59-2a74-47ac-9a53-29ece647303c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.609 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Releasing lock "refresh_cache-85a9c2c0-3a8d-44ce-954f-e106841e2068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.609 280325 DEBUG nova.compute.manager [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Instance network_info: |[{"id": "d9173d59-2a74-47ac-9a53-29ece647303c", "address": "fa:16:3e:4b:ba:fc", "network": {"id": "55691124-ab57-4829-87d9-12148e1fa008", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1468165471-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": 
"10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "7f67087411544c55a9225236eb297b90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9173d59-2a", "ovs_interfaceid": "d9173d59-2a74-47ac-9a53-29ece647303c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.610 280325 DEBUG oslo_concurrency.lockutils [req-77145b69-8372-49b0-b68e-7da068dbb183 req-60bde632-61cc-4753-ad71-4b1e17ef8926 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquired lock "refresh_cache-85a9c2c0-3a8d-44ce-954f-e106841e2068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.610 280325 DEBUG nova.network.neutron [req-77145b69-8372-49b0-b68e-7da068dbb183 req-60bde632-61cc-4753-ad71-4b1e17ef8926 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Refreshing network info cache for port d9173d59-2a74-47ac-9a53-29ece647303c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.613 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Start _get_guest_xml network_info=[{"id": "d9173d59-2a74-47ac-9a53-29ece647303c", "address": "fa:16:3e:4b:ba:fc", "network": {"id": 
"55691124-ab57-4829-87d9-12148e1fa008", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1468165471-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "7f67087411544c55a9225236eb297b90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9173d59-2a", "ovs_interfaceid": "d9173d59-2a74-47ac-9a53-29ece647303c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-23T09:52:33Z,direct_url=,disk_format='qcow2',id=d08f8876-d97b-493b-b16b-caf91668eecb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='37b8098efb0d4ecc90b451a2db0e966f',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2026-02-23T09:52:35Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 
'image_id': 'd08f8876-d97b-493b-b16b-caf91668eecb'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.618 280325 WARNING nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.620 280325 DEBUG nova.virt.libvirt.host [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Searching host: 'np0005626465.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.621 280325 DEBUG nova.virt.libvirt.host [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.622 280325 DEBUG nova.virt.libvirt.host [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Searching host: 'np0005626465.localdomain' for CPU controller through CGroups V2... 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.623 280325 DEBUG nova.virt.libvirt.host [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.623 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.624 280325 DEBUG nova.virt.hardware [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-23T09:52:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='dd9292ba-25cb-4da3-92e1-803e436b1b2c',id=6,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-23T09:52:33Z,direct_url=,disk_format='qcow2',id=d08f8876-d97b-493b-b16b-caf91668eecb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='37b8098efb0d4ecc90b451a2db0e966f',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2026-02-23T09:52:35Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.624 280325 DEBUG nova.virt.hardware [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.625 280325 DEBUG nova.virt.hardware [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.625 280325 DEBUG nova.virt.hardware [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.625 280325 DEBUG nova.virt.hardware [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.626 280325 DEBUG nova.virt.hardware [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.626 280325 DEBUG 
nova.virt.hardware [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.626 280325 DEBUG nova.virt.hardware [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.627 280325 DEBUG nova.virt.hardware [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.627 280325 DEBUG nova.virt.hardware [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.627 280325 DEBUG nova.virt.hardware [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.631 280325 
DEBUG oslo_concurrency.processutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:55:14 localhost nova_compute[280321]: 2026-02-23 09:55:14.671 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:14 localhost podman[309749]: Feb 23 04:55:14 localhost podman[309749]: 2026-02-23 09:55:14.877916087 +0000 UTC m=+0.092749937 container create cd2df2b07db4a406d9833050fe608f8892a8a29efc5d4a9417fd246517087c08 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6374b34b-9b04-4cdf-80c7-26c5c5e0e257, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 23 04:55:14 localhost systemd[1]: Started libpod-conmon-cd2df2b07db4a406d9833050fe608f8892a8a29efc5d4a9417fd246517087c08.scope. Feb 23 04:55:14 localhost podman[309749]: 2026-02-23 09:55:14.834104658 +0000 UTC m=+0.048938488 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:55:14 localhost systemd[1]: Started libcrun container. 
Feb 23 04:55:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777dc6616f1a0cded416a3e4606e32c60bcd61b4ceb43eaea09cb774b1eeecf8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:55:14 localhost podman[309749]: 2026-02-23 09:55:14.962198853 +0000 UTC m=+0.177032643 container init cd2df2b07db4a406d9833050fe608f8892a8a29efc5d4a9417fd246517087c08 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6374b34b-9b04-4cdf-80c7-26c5c5e0e257, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 23 04:55:14 localhost podman[309749]: 2026-02-23 09:55:14.97156429 +0000 UTC m=+0.186398080 container start cd2df2b07db4a406d9833050fe608f8892a8a29efc5d4a9417fd246517087c08 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6374b34b-9b04-4cdf-80c7-26c5c5e0e257, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true) Feb 23 04:55:14 localhost dnsmasq[309767]: started, version 2.85 cachesize 150 Feb 23 04:55:14 localhost dnsmasq[309767]: DNS service limited to local subnets Feb 23 04:55:14 localhost dnsmasq[309767]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:55:14 localhost dnsmasq[309767]: warning: no upstream servers 
configured Feb 23 04:55:14 localhost dnsmasq-dhcp[309767]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:55:14 localhost dnsmasq[309767]: read /var/lib/neutron/dhcp/6374b34b-9b04-4cdf-80c7-26c5c5e0e257/addn_hosts - 0 addresses Feb 23 04:55:14 localhost dnsmasq-dhcp[309767]: read /var/lib/neutron/dhcp/6374b34b-9b04-4cdf-80c7-26c5c5e0e257/host Feb 23 04:55:14 localhost dnsmasq-dhcp[309767]: read /var/lib/neutron/dhcp/6374b34b-9b04-4cdf-80c7-26c5c5e0e257/opts Feb 23 04:55:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 23 04:55:15 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3506502717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.077 280325 DEBUG oslo_concurrency.processutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:55:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v149: 177 pgs: 177 active+clean; 192 MiB data, 806 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.113 280325 DEBUG nova.storage.rbd_utils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] rbd image 85a9c2c0-3a8d-44ce-954f-e106841e2068_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.117 280325 DEBUG oslo_concurrency.processutils [None 
req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:55:15 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:15.146 263679 INFO neutron.agent.dhcp.agent [None req-445e21d1-f3e5-407f-9d40-c268180d9f91 - - - - - -] DHCP configuration for ports {'bcb5d77a-1ad2-4966-9165-8335753f41c8'} is completed#033[00m Feb 23 04:55:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:55:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 23 04:55:15 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/496496546' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.507 280325 DEBUG oslo_concurrency.processutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.510 280325 DEBUG nova.virt.libvirt.vif [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-23T09:55:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(6),hidden=False,host='np0005626465.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=10,image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',info_cache=InstanceInfoCache,instance_type_id=6,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF2rlDQqpE8gN/S5lFAyRLGYjpcIOcewybNn5UWV3V3SEazahuCHhiJUvS7fIbH3nnsHB7jxCAoFyueFoR4fMctfTxp9VYvlIgSdOtHAyy+XSsU2Yw/KKnx0uI2GDyViFg==',key_name='tempest-keypair-15325135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005626465.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005626465.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f67087411544c55a9225236eb297b90',ramdisk_id='',reservation_id='r-5x952hk6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-1720923751',owner_user_name='tempest-ServersV294TestFqdnHostnames-1720923751-project-member'},tags=TagList,task_state='spawning',terminated_at=None,tru
sted_certs=None,updated_at=2026-02-23T09:55:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2b38675f57640819bf191ad8152e7cb',uuid=85a9c2c0-3a8d-44ce-954f-e106841e2068,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9173d59-2a74-47ac-9a53-29ece647303c", "address": "fa:16:3e:4b:ba:fc", "network": {"id": "55691124-ab57-4829-87d9-12148e1fa008", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1468165471-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "7f67087411544c55a9225236eb297b90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9173d59-2a", "ovs_interfaceid": "d9173d59-2a74-47ac-9a53-29ece647303c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.511 280325 DEBUG nova.network.os_vif_util [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Converting VIF {"id": "d9173d59-2a74-47ac-9a53-29ece647303c", "address": "fa:16:3e:4b:ba:fc", "network": {"id": "55691124-ab57-4829-87d9-12148e1fa008", "bridge": "br-int", "label": 
"tempest-ServersV294TestFqdnHostnames-1468165471-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "7f67087411544c55a9225236eb297b90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9173d59-2a", "ovs_interfaceid": "d9173d59-2a74-47ac-9a53-29ece647303c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.512 280325 DEBUG nova.network.os_vif_util [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:ba:fc,bridge_name='br-int',has_traffic_filtering=True,id=d9173d59-2a74-47ac-9a53-29ece647303c,network=Network(55691124-ab57-4829-87d9-12148e1fa008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9173d59-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.515 280325 DEBUG nova.objects.instance [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lazy-loading 'pci_devices' on Instance uuid 85a9c2c0-3a8d-44ce-954f-e106841e2068 obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.536 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] End _get_guest_xml xml=
[libvirt guest domain XML elided: the angle-bracketed markup was stripped when this log was extracted, leaving only per-record prefixes and bare text nodes. Surviving text nodes, in order: 85a9c2c0-3a8d-44ce-954f-e106841e2068, instance-0000000a, 131072, 1, guest-instance-1, 2026-02-23 09:55:14, flavor values 128/1/0/0/1, tempest-ServersV294TestFqdnHostnames-1720923751-project-member, tempest-ServersV294TestFqdnHostnames-1720923751, RDO, OpenStack Compute, 27.5.2-0.20260220085704.5cfeecb.el9, 85a9c2c0-3a8d-44ce-954f-e106841e2068, 85a9c2c0-3a8d-44ce-954f-e106841e2068, Virtual Machine, hvm, /dev/urandom]
Feb 23 04:55:15 localhost nova_compute[280321]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.537 280325 DEBUG nova.compute.manager [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Preparing to wait for external event network-vif-plugged-d9173d59-2a74-47ac-9a53-29ece647303c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.537 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Acquiring lock "85a9c2c0-3a8d-44ce-954f-e106841e2068-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.538 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lock "85a9c2c0-3a8d-44ce-954f-e106841e2068-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.538 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lock "85a9c2c0-3a8d-44ce-954f-e106841e2068-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.539 280325 DEBUG nova.virt.libvirt.vif [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] vif_type=ovs
instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-23T09:55:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(6),hidden=False,host='np0005626465.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=10,image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',info_cache=InstanceInfoCache,instance_type_id=6,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF2rlDQqpE8gN/S5lFAyRLGYjpcIOcewybNn5UWV3V3SEazahuCHhiJUvS7fIbH3nnsHB7jxCAoFyueFoR4fMctfTxp9VYvlIgSdOtHAyy+XSsU2Yw/KKnx0uI2GDyViFg==',key_name='tempest-keypair-15325135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005626465.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005626465.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f67087411544c55a9225236eb297b90',ramdisk_id='',reservation_id='r-5x952hk6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-1720923751',owner_user_name='tempest-ServersV294TestFqdnHostnames-1720923751-project-member'},tags=TagList,task_state='spawning',terminate
d_at=None,trusted_certs=None,updated_at=2026-02-23T09:55:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2b38675f57640819bf191ad8152e7cb',uuid=85a9c2c0-3a8d-44ce-954f-e106841e2068,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9173d59-2a74-47ac-9a53-29ece647303c", "address": "fa:16:3e:4b:ba:fc", "network": {"id": "55691124-ab57-4829-87d9-12148e1fa008", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1468165471-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "7f67087411544c55a9225236eb297b90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9173d59-2a", "ovs_interfaceid": "d9173d59-2a74-47ac-9a53-29ece647303c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.540 280325 DEBUG nova.network.os_vif_util [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Converting VIF {"id": "d9173d59-2a74-47ac-9a53-29ece647303c", "address": "fa:16:3e:4b:ba:fc", "network": {"id": "55691124-ab57-4829-87d9-12148e1fa008", "bridge": "br-int", "label": 
"tempest-ServersV294TestFqdnHostnames-1468165471-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "7f67087411544c55a9225236eb297b90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9173d59-2a", "ovs_interfaceid": "d9173d59-2a74-47ac-9a53-29ece647303c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.541 280325 DEBUG nova.network.os_vif_util [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:ba:fc,bridge_name='br-int',has_traffic_filtering=True,id=d9173d59-2a74-47ac-9a53-29ece647303c,network=Network(55691124-ab57-4829-87d9-12148e1fa008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9173d59-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.541 280325 DEBUG os_vif [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Plugging vif 
VIFOpenVSwitch(active=False,address=fa:16:3e:4b:ba:fc,bridge_name='br-int',has_traffic_filtering=True,id=d9173d59-2a74-47ac-9a53-29ece647303c,network=Network(55691124-ab57-4829-87d9-12148e1fa008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9173d59-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.542 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.543 280325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.543 280325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.548 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.548 280325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9173d59-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.549 280325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd9173d59-2a, col_values=(('external_ids', {'iface-id': 
'd9173d59-2a74-47ac-9a53-29ece647303c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:ba:fc', 'vm-uuid': '85a9c2c0-3a8d-44ce-954f-e106841e2068'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.551 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.554 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.557 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.558 280325 INFO os_vif [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:ba:fc,bridge_name='br-int',has_traffic_filtering=True,id=d9173d59-2a74-47ac-9a53-29ece647303c,network=Network(55691124-ab57-4829-87d9-12148e1fa008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9173d59-2a')#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.562 280325 DEBUG nova.network.neutron [req-77145b69-8372-49b0-b68e-7da068dbb183 req-60bde632-61cc-4753-ad71-4b1e17ef8926 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Updated VIF entry in instance network info cache for port d9173d59-2a74-47ac-9a53-29ece647303c. 
_build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.563 280325 DEBUG nova.network.neutron [req-77145b69-8372-49b0-b68e-7da068dbb183 req-60bde632-61cc-4753-ad71-4b1e17ef8926 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Updating instance_info_cache with network_info: [{"id": "d9173d59-2a74-47ac-9a53-29ece647303c", "address": "fa:16:3e:4b:ba:fc", "network": {"id": "55691124-ab57-4829-87d9-12148e1fa008", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1468165471-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "7f67087411544c55a9225236eb297b90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9173d59-2a", "ovs_interfaceid": "d9173d59-2a74-47ac-9a53-29ece647303c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.583 280325 DEBUG oslo_concurrency.lockutils [req-77145b69-8372-49b0-b68e-7da068dbb183 req-60bde632-61cc-4753-ad71-4b1e17ef8926 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Releasing lock 
"refresh_cache-85a9c2c0-3a8d-44ce-954f-e106841e2068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:55:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.603 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.604 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.604 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] No VIF found with MAC fa:16:3e:4b:ba:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.605 280325 INFO nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Using config drive#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.645 280325 DEBUG nova.storage.rbd_utils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 
7f67087411544c55a9225236eb297b90 - - default default] rbd image 85a9c2c0-3a8d-44ce-954f-e106841e2068_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:55:15 localhost podman[309813]: 2026-02-23 09:55:15.689905518 +0000 UTC m=+0.089319931 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:55:15 localhost podman[309813]: 2026-02-23 09:55:15.729825628 +0000 UTC m=+0.129240001 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:55:15 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.893 280325 INFO nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Creating config drive at /var/lib/nova/instances/85a9c2c0-3a8d-44ce-954f-e106841e2068/disk.config#033[00m Feb 23 04:55:15 localhost nova_compute[280321]: 2026-02-23 09:55:15.900 280325 DEBUG oslo_concurrency.processutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/85a9c2c0-3a8d-44ce-954f-e106841e2068/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwfkc4apx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.027 280325 DEBUG oslo_concurrency.processutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/85a9c2c0-3a8d-44ce-954f-e106841e2068/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher 
OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwfkc4apx" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:55:16 localhost ovn_controller[155966]: 2026-02-23T09:55:16Z|00085|ovn_bfd|INFO|Enabled BFD on interface ovn-5b0126-0 Feb 23 04:55:16 localhost ovn_controller[155966]: 2026-02-23T09:55:16Z|00086|ovn_bfd|INFO|Enabled BFD on interface ovn-585d62-0 Feb 23 04:55:16 localhost ovn_controller[155966]: 2026-02-23T09:55:16Z|00087|ovn_bfd|INFO|Enabled BFD on interface ovn-b9c72d-0 Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.166 280325 DEBUG nova.storage.rbd_utils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] rbd image 85a9c2c0-3a8d-44ce-954f-e106841e2068_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.171 280325 DEBUG oslo_concurrency.processutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/85a9c2c0-3a8d-44ce-954f-e106841e2068/disk.config 85a9c2c0-3a8d-44ce-954f-e106841e2068_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.186 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.245 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:16 localhost 
nova_compute[280321]: 2026-02-23 09:55:16.393 280325 DEBUG oslo_concurrency.processutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/85a9c2c0-3a8d-44ce-954f-e106841e2068/disk.config 85a9c2c0-3a8d-44ce-954f-e106841e2068_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.394 280325 INFO nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Deleting local config drive /var/lib/nova/instances/85a9c2c0-3a8d-44ce-954f-e106841e2068/disk.config because it was imported into RBD.#033[00m Feb 23 04:55:16 localhost kernel: device tapd9173d59-2a entered promiscuous mode Feb 23 04:55:16 localhost systemd-udevd[309659]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:55:16 localhost NetworkManager[5987]: <info>  [1771840516.4478] manager: (tapd9173d59-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/24) Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.451 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:16 localhost ovn_controller[155966]: 2026-02-23T09:55:16Z|00088|binding|INFO|Claiming lport d9173d59-2a74-47ac-9a53-29ece647303c for this chassis.
Feb 23 04:55:16 localhost ovn_controller[155966]: 2026-02-23T09:55:16Z|00089|binding|INFO|d9173d59-2a74-47ac-9a53-29ece647303c: Claiming fa:16:3e:4b:ba:fc 10.100.0.5 Feb 23 04:55:16 localhost NetworkManager[5987]: <info>  [1771840516.4629] device (tapd9173d59-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Feb 23 04:55:16 localhost NetworkManager[5987]: <info>  [1771840516.4644] device (tapd9173d59-2a): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.468 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:16 localhost systemd-machined[205673]: New machine qemu-3-instance-0000000a. Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.482 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:ba:fc 10.100.0.5'], port_security=['fa:16:3e:4b:ba:fc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '85a9c2c0-3a8d-44ce-954f-e106841e2068', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55691124-ab57-4829-87d9-12148e1fa008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f67087411544c55a9225236eb297b90', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd03de417-eb2e-47e8-ad59-eae56add5dd4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], 
additional_encap=[], encap=[], mirror_rules=[], datapath=3c755659-d501-4122-bfeb-a7f481d4a11a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=d9173d59-2a74-47ac-9a53-29ece647303c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.484 161842 INFO neutron.agent.ovn.metadata.agent [-] Port d9173d59-2a74-47ac-9a53-29ece647303c in datapath 55691124-ab57-4829-87d9-12148e1fa008 bound to our chassis#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.487 161842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55691124-ab57-4829-87d9-12148e1fa008#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.495 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[2ebdd4cd-e3b3-4116-93d0-262b6d245b0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.496 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap55691124-a1 in ovnmeta-55691124-ab57-4829-87d9-12148e1fa008 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.498 306186 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap55691124-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.498 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[6863a79a-dbd4-48de-8ceb-f2492420364b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.499 306186 DEBUG 
oslo.privsep.daemon [-] privsep: reply[51e2799e-acb2-4394-891f-c41cff583316]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:16 localhost ovn_controller[155966]: 2026-02-23T09:55:16Z|00090|binding|INFO|Setting lport d9173d59-2a74-47ac-9a53-29ece647303c ovn-installed in OVS Feb 23 04:55:16 localhost ovn_controller[155966]: 2026-02-23T09:55:16Z|00091|binding|INFO|Setting lport d9173d59-2a74-47ac-9a53-29ece647303c up in Southbound Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.506 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:16 localhost systemd[1]: Started Virtual Machine qemu-3-instance-0000000a. Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.512 161946 DEBUG oslo.privsep.daemon [-] privsep: reply[ad7790db-b35d-47f7-91af-ff9d5e1276cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.534 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[af902e89-e214-42a8-9618-1846e27e6248]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.559 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[f353756e-ec25-4451-b22d-ea499468b041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:16 localhost NetworkManager[5987]: [1771840516.5655] manager: (tap55691124-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/25) Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.564 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[bf3155bc-47fc-41ad-8370-933d36dafa49]: (4, None) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.600 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[14924fbb-2597-4008-8bff-09c30839b225]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.605 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[075bf256-b44b-4b6b-8d39-976a8d36832e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:16 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap55691124-a1: link becomes ready Feb 23 04:55:16 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap55691124-a0: link becomes ready Feb 23 04:55:16 localhost NetworkManager[5987]: [1771840516.6289] device (tap55691124-a0): carrier: link connected Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.632 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6facb9-ebe9-4f2f-8b7e-7977d592c176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:16 localhost sshd[309950]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.651 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[4c97312c-a9b6-46ca-9a7d-15a4c5a4416c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55691124-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], 
['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e8:fb:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 
'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1194777, 'reachable_time': 41664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309955, 'error': None, 'target': 'ovnmeta-55691124-ab57-4829-87d9-12148e1fa008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) 
_call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.667 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[04e2f327-9218-4011-8492-14f3c48ae330]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:fb57'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1194777, 'tstamp': 1194777}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309973, 'error': None, 'target': 'ovnmeta-55691124-ab57-4829-87d9-12148e1fa008', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.685 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f57c31-48d5-42f5-9a6c-4256d36b379f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55691124-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e8:fb:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 
'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1194777, 'reachable_time': 41664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 
1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309978, 'error': None, 'target': 'ovnmeta-55691124-ab57-4829-87d9-12148e1fa008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.713 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[6c878538-56cf-48e6-9222-6b56a3298dbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.771 306186 DEBUG oslo.privsep.daemon [-] privsep: 
reply[e76be57c-d5fe-41f9-96a4-9635e5c39b8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.773 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55691124-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.773 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.773 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55691124-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:16 localhost kernel: device tap55691124-a0 entered promiscuous mode Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.779 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55691124-a0, col_values=(('external_ids', {'iface-id': 'e231ecd8-0ed4-4a64-9851-e1b9a6d545a2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.776 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:16 localhost ovn_controller[155966]: 2026-02-23T09:55:16Z|00092|binding|INFO|Releasing lport e231ecd8-0ed4-4a64-9851-e1b9a6d545a2 from this chassis (sb_readonly=0) Feb 23 04:55:16 localhost nova_compute[280321]: 
2026-02-23 09:55:16.789 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.790 161842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/55691124-ab57-4829-87d9-12148e1fa008.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/55691124-ab57-4829-87d9-12148e1fa008.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.790 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.791 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[459a82cc-d4bf-49e2-9d70-ceb99c34512a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.792 161842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: global Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: log /dev/log local0 debug Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: log-tag haproxy-metadata-proxy-55691124-ab57-4829-87d9-12148e1fa008 Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: user root Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: group root Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: maxconn 1024 Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: pidfile /var/lib/neutron/external/pids/55691124-ab57-4829-87d9-12148e1fa008.pid.haproxy Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: daemon Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: defaults Feb 23 
04:55:16 localhost ovn_metadata_agent[161837]: log global Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: mode http Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: option httplog Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: option dontlognull Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: option http-server-close Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: option forwardfor Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: retries 3 Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: timeout http-request 30s Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: timeout connect 30s Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: timeout client 32s Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: timeout server 32s Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: timeout http-keep-alive 30s Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: listen listener Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: bind 169.254.169.254:80 Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: server metadata /var/lib/neutron/metadata_proxy Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: http-request add-header X-OVN-Network-ID 55691124-ab57-4829-87d9-12148e1fa008 Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Feb 23 04:55:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:16.793 161842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-55691124-ab57-4829-87d9-12148e1fa008', 'env', 'PROCESS_TAG=haproxy-55691124-ab57-4829-87d9-12148e1fa008', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/55691124-ab57-4829-87d9-12148e1fa008.conf'] create_process 
/usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.804 280325 DEBUG nova.compute.manager [req-e7233fdb-a32f-4ac8-8707-4e8d0fa55494 req-82919ce4-d342-4494-ba26-6a03af65942c 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Received event network-vif-plugged-d9173d59-2a74-47ac-9a53-29ece647303c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.804 280325 DEBUG oslo_concurrency.lockutils [req-e7233fdb-a32f-4ac8-8707-4e8d0fa55494 req-82919ce4-d342-4494-ba26-6a03af65942c 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquiring lock "85a9c2c0-3a8d-44ce-954f-e106841e2068-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.804 280325 DEBUG oslo_concurrency.lockutils [req-e7233fdb-a32f-4ac8-8707-4e8d0fa55494 req-82919ce4-d342-4494-ba26-6a03af65942c 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "85a9c2c0-3a8d-44ce-954f-e106841e2068-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.805 280325 DEBUG oslo_concurrency.lockutils [req-e7233fdb-a32f-4ac8-8707-4e8d0fa55494 req-82919ce4-d342-4494-ba26-6a03af65942c 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "85a9c2c0-3a8d-44ce-954f-e106841e2068-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 
0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.805 280325 DEBUG nova.compute.manager [req-e7233fdb-a32f-4ac8-8707-4e8d0fa55494 req-82919ce4-d342-4494-ba26-6a03af65942c 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Processing event network-vif-plugged-d9173d59-2a74-47ac-9a53-29ece647303c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.888 280325 DEBUG nova.virt.driver [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.893 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] VM Started (Lifecycle Event)#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.895 280325 DEBUG nova.compute.manager [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.898 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 
2026-02-23 09:55:16.902 280325 INFO nova.virt.libvirt.driver [-] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Instance spawned successfully.#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.903 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.920 280325 DEBUG nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.923 280325 DEBUG nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.940 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 23 04:55:16 localhost 
nova_compute[280321]: 2026-02-23 09:55:16.940 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.941 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.942 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.943 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.943 280325 DEBUG nova.virt.libvirt.driver [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] 
[instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 23 04:55:16 localhost nova_compute[280321]: 2026-02-23 09:55:16.977 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:16 localhost ovn_controller[155966]: 2026-02-23T09:55:16Z|00093|binding|INFO|Releasing lport e231ecd8-0ed4-4a64-9851-e1b9a6d545a2 from this chassis (sb_readonly=0) Feb 23 04:55:17 localhost nova_compute[280321]: 2026-02-23 09:55:17.000 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Feb 23 04:55:17 localhost nova_compute[280321]: 2026-02-23 09:55:17.001 280325 DEBUG nova.virt.driver [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 23 04:55:17 localhost nova_compute[280321]: 2026-02-23 09:55:17.002 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] VM Paused (Lifecycle Event)#033[00m Feb 23 04:55:17 localhost nova_compute[280321]: 2026-02-23 09:55:17.030 280325 DEBUG nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:55:17 localhost nova_compute[280321]: 2026-02-23 09:55:17.035 280325 DEBUG nova.virt.driver [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 23 04:55:17 
localhost nova_compute[280321]: 2026-02-23 09:55:17.035 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] VM Resumed (Lifecycle Event)#033[00m Feb 23 04:55:17 localhost nova_compute[280321]: 2026-02-23 09:55:17.067 280325 INFO nova.compute.manager [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Took 7.81 seconds to spawn the instance on the hypervisor.#033[00m Feb 23 04:55:17 localhost nova_compute[280321]: 2026-02-23 09:55:17.072 280325 DEBUG nova.compute.manager [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:55:17 localhost nova_compute[280321]: 2026-02-23 09:55:17.075 280325 DEBUG nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:55:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 177 active+clean; 192 MiB data, 806 MiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s Feb 23 04:55:17 localhost ovn_controller[155966]: 2026-02-23T09:55:17Z|00094|binding|INFO|Releasing lport e231ecd8-0ed4-4a64-9851-e1b9a6d545a2 from this chassis (sb_readonly=0) Feb 23 04:55:17 localhost nova_compute[280321]: 2026-02-23 09:55:17.113 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:17 localhost nova_compute[280321]: 2026-02-23 09:55:17.122 280325 DEBUG nova.compute.manager [None 
req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 23 04:55:17 localhost nova_compute[280321]: 2026-02-23 09:55:17.163 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Feb 23 04:55:17 localhost nova_compute[280321]: 2026-02-23 09:55:17.201 280325 INFO nova.compute.manager [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Took 8.78 seconds to build instance.#033[00m Feb 23 04:55:17 localhost nova_compute[280321]: 2026-02-23 09:55:17.212 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:17 localhost ovn_controller[155966]: 2026-02-23T09:55:17Z|00095|binding|INFO|Releasing lport e231ecd8-0ed4-4a64-9851-e1b9a6d545a2 from this chassis (sb_readonly=0) Feb 23 04:55:17 localhost nova_compute[280321]: 2026-02-23 09:55:17.217 280325 DEBUG oslo_concurrency.lockutils [None req-54148787-22c3-403e-9c83-5d532e459a95 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lock "85a9c2c0-3a8d-44ce-954f-e106841e2068" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 8.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:55:17 localhost podman[310065]: Feb 23 04:55:17 localhost podman[310065]: 2026-02-23 09:55:17.243058617 +0000 
UTC m=+0.102693741 container create 75c61f3d30fc73e26e96340d3a6dc4f5ce809671e412f02b6b9f91882d164172 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55691124-ab57-4829-87d9-12148e1fa008, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:55:17 localhost podman[310065]: 2026-02-23 09:55:17.189004564 +0000 UTC m=+0.048639728 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 23 04:55:17 localhost systemd[1]: Started libpod-conmon-75c61f3d30fc73e26e96340d3a6dc4f5ce809671e412f02b6b9f91882d164172.scope. Feb 23 04:55:17 localhost systemd[1]: Started libcrun container. Feb 23 04:55:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c27e01a5fc85525d1df6e928254fe7dde10ae3df2ce82cf3611335bfd6b1a2ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:55:17 localhost podman[310065]: 2026-02-23 09:55:17.336351838 +0000 UTC m=+0.195986962 container init 75c61f3d30fc73e26e96340d3a6dc4f5ce809671e412f02b6b9f91882d164172 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55691124-ab57-4829-87d9-12148e1fa008, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 23 04:55:17 localhost podman[310065]: 2026-02-23 09:55:17.351071888 +0000 UTC m=+0.210707012 
container start 75c61f3d30fc73e26e96340d3a6dc4f5ce809671e412f02b6b9f91882d164172 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55691124-ab57-4829-87d9-12148e1fa008, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2) Feb 23 04:55:17 localhost neutron-haproxy-ovnmeta-55691124-ab57-4829-87d9-12148e1fa008[310088]: [NOTICE] (310094) : New worker (310098) forked Feb 23 04:55:17 localhost neutron-haproxy-ovnmeta-55691124-ab57-4829-87d9-12148e1fa008[310088]: [NOTICE] (310094) : Loading success. Feb 23 04:55:17 localhost nova_compute[280321]: 2026-02-23 09:55:17.473 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:17 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:55:17 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:55:17 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 04:55:17 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:55:17 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 04:55:17 localhost ceph-mgr[285904]: 
[progress INFO root] update: starting ev 6fee9475-f573-456a-9752-0063b2aa20c5 (Updating node-proxy deployment (+3 -> 3)) Feb 23 04:55:17 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev 6fee9475-f573-456a-9752-0063b2aa20c5 (Updating node-proxy deployment (+3 -> 3)) Feb 23 04:55:17 localhost ceph-mgr[285904]: [progress INFO root] Completed event 6fee9475-f573-456a-9752-0063b2aa20c5 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 23 04:55:17 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 04:55:17 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 04:55:17 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:55:17 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:55:17 localhost podman[310150]: 2026-02-23 09:55:17.932845042 +0000 UTC m=+0.063458501 container kill cd2df2b07db4a406d9833050fe608f8892a8a29efc5d4a9417fd246517087c08 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6374b34b-9b04-4cdf-80c7-26c5c5e0e257, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:55:17 localhost dnsmasq[309767]: exiting on receipt of SIGTERM Feb 23 04:55:17 localhost systemd[1]: libpod-cd2df2b07db4a406d9833050fe608f8892a8a29efc5d4a9417fd246517087c08.scope: 
Deactivated successfully. Feb 23 04:55:18 localhost podman[310161]: 2026-02-23 09:55:18.005855404 +0000 UTC m=+0.060069417 container died cd2df2b07db4a406d9833050fe608f8892a8a29efc5d4a9417fd246517087c08 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6374b34b-9b04-4cdf-80c7-26c5c5e0e257, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:55:18 localhost podman[310161]: 2026-02-23 09:55:18.048171247 +0000 UTC m=+0.102385220 container cleanup cd2df2b07db4a406d9833050fe608f8892a8a29efc5d4a9417fd246517087c08 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6374b34b-9b04-4cdf-80c7-26c5c5e0e257, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:55:18 localhost systemd[1]: libpod-conmon-cd2df2b07db4a406d9833050fe608f8892a8a29efc5d4a9417fd246517087c08.scope: Deactivated successfully. 
Feb 23 04:55:18 localhost podman[310163]: 2026-02-23 09:55:18.085130287 +0000 UTC m=+0.132040697 container remove cd2df2b07db4a406d9833050fe608f8892a8a29efc5d4a9417fd246517087c08 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6374b34b-9b04-4cdf-80c7-26c5c5e0e257, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0) Feb 23 04:55:18 localhost ovn_controller[155966]: 2026-02-23T09:55:18Z|00096|binding|INFO|Releasing lport 3c059380-9509-4e38-9c35-5d03e3a0176a from this chassis (sb_readonly=0) Feb 23 04:55:18 localhost ovn_controller[155966]: 2026-02-23T09:55:18Z|00097|binding|INFO|Setting lport 3c059380-9509-4e38-9c35-5d03e3a0176a down in Southbound Feb 23 04:55:18 localhost kernel: device tap3c059380-95 left promiscuous mode Feb 23 04:55:18 localhost nova_compute[280321]: 2026-02-23 09:55:18.100 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:18 localhost nova_compute[280321]: 2026-02-23 09:55:18.129 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:18 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:18.140 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-6374b34b-9b04-4cdf-80c7-26c5c5e0e257', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6374b34b-9b04-4cdf-80c7-26c5c5e0e257', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1349075215be49eda0b375e59aa77e22', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe16e6e5-b377-42f5-abd5-19876370c4f4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3c059380-9509-4e38-9c35-5d03e3a0176a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:18 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:18.143 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 3c059380-9509-4e38-9c35-5d03e3a0176a in datapath 6374b34b-9b04-4cdf-80c7-26c5c5e0e257 unbound from our chassis#033[00m Feb 23 04:55:18 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:18.146 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6374b34b-9b04-4cdf-80c7-26c5c5e0e257, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:55:18 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:18.147 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[bedcab07-76cb-431d-8fe8-8cc151ca8053]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:18 localhost systemd[1]: tmp-crun.Q0cnKd.mount: Deactivated successfully. 
Feb 23 04:55:18 localhost systemd[1]: var-lib-containers-storage-overlay-777dc6616f1a0cded416a3e4606e32c60bcd61b4ceb43eaea09cb774b1eeecf8-merged.mount: Deactivated successfully. Feb 23 04:55:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd2df2b07db4a406d9833050fe608f8892a8a29efc5d4a9417fd246517087c08-userdata-shm.mount: Deactivated successfully. Feb 23 04:55:18 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:18.839 263679 INFO neutron.agent.dhcp.agent [None req-643f4967-b035-47f5-9143-aeb6ffa09636 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:55:18 localhost systemd[1]: run-netns-qdhcp\x2d6374b34b\x2d9b04\x2d4cdf\x2d80c7\x2d26c5c5e0e257.mount: Deactivated successfully. Feb 23 04:55:18 localhost nova_compute[280321]: 2026-02-23 09:55:18.905 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:55:18 localhost nova_compute[280321]: 2026-02-23 09:55:18.906 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:55:18 localhost nova_compute[280321]: 2026-02-23 09:55:18.938 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:55:18 localhost nova_compute[280321]: 2026-02-23 09:55:18.939 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:55:18 localhost nova_compute[280321]: 2026-02-23 09:55:18.939 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:55:18 localhost nova_compute[280321]: 2026-02-23 09:55:18.939 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:55:18 localhost nova_compute[280321]: 2026-02-23 09:55:18.940 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.001 280325 DEBUG nova.compute.manager [req-889dd48c-0256-4506-bb12-2e220e90c883 req-2866ae84-61fe-4748-a42d-32888b2304d6 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Received event network-vif-plugged-d9173d59-2a74-47ac-9a53-29ece647303c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.002 280325 DEBUG oslo_concurrency.lockutils [req-889dd48c-0256-4506-bb12-2e220e90c883 req-2866ae84-61fe-4748-a42d-32888b2304d6 
422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquiring lock "85a9c2c0-3a8d-44ce-954f-e106841e2068-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.003 280325 DEBUG oslo_concurrency.lockutils [req-889dd48c-0256-4506-bb12-2e220e90c883 req-2866ae84-61fe-4748-a42d-32888b2304d6 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "85a9c2c0-3a8d-44ce-954f-e106841e2068-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.003 280325 DEBUG oslo_concurrency.lockutils [req-889dd48c-0256-4506-bb12-2e220e90c883 req-2866ae84-61fe-4748-a42d-32888b2304d6 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "85a9c2c0-3a8d-44ce-954f-e106841e2068-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.003 280325 DEBUG nova.compute.manager [req-889dd48c-0256-4506-bb12-2e220e90c883 req-2866ae84-61fe-4748-a42d-32888b2304d6 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] No waiting events found dispatching network-vif-plugged-d9173d59-2a74-47ac-9a53-29ece647303c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.004 280325 WARNING nova.compute.manager [req-889dd48c-0256-4506-bb12-2e220e90c883 
req-2866ae84-61fe-4748-a42d-32888b2304d6 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Received unexpected event network-vif-plugged-d9173d59-2a74-47ac-9a53-29ece647303c for instance with vm_state active and task_state None.#033[00m Feb 23 04:55:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v151: 177 pgs: 177 active+clean; 192 MiB data, 806 MiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s Feb 23 04:55:19 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:55:19 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/970130502' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.387 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.501 280325 DEBUG nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.502 280325 DEBUG nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.680 280325 WARNING 
nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.682 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=11540MB free_disk=41.77406692504883GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", 
"numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.682 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.682 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.798 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Instance 85a9c2c0-3a8d-44ce-954f-e106841e2068 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.799 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.800 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.868 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.889 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.929 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:19 localhost ovn_controller[155966]: 2026-02-23T09:55:19Z|00098|binding|INFO|Releasing lport e231ecd8-0ed4-4a64-9851-e1b9a6d545a2 from this chassis (sb_readonly=0) Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.943 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:19 localhost nova_compute[280321]: 2026-02-23 09:55:19.954 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:55:20 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4058803820' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:55:20 localhost nova_compute[280321]: 2026-02-23 09:55:20.322 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:55:20 localhost nova_compute[280321]: 2026-02-23 09:55:20.327 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:55:20 localhost nova_compute[280321]: 2026-02-23 09:55:20.348 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:55:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:55:20 localhost nova_compute[280321]: 2026-02-23 09:55:20.381 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:55:20 localhost nova_compute[280321]: 2026-02-23 09:55:20.381 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:55:20 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 04:55:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 04:55:20 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:20.515 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:55:20 localhost nova_compute[280321]: 2026-02-23 09:55:20.590 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:20 localhost nova_compute[280321]: 2026-02-23 09:55:20.593 280325 DEBUG nova.compute.manager [req-151b1934-5942-479e-9267-7021e9757966 req-2de51cab-954e-479d-95fb-a78e0e0ca833 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Received 
event network-changed-d9173d59-2a74-47ac-9a53-29ece647303c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:55:20 localhost nova_compute[280321]: 2026-02-23 09:55:20.594 280325 DEBUG nova.compute.manager [req-151b1934-5942-479e-9267-7021e9757966 req-2de51cab-954e-479d-95fb-a78e0e0ca833 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Refreshing instance network info cache due to event network-changed-d9173d59-2a74-47ac-9a53-29ece647303c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Feb 23 04:55:20 localhost nova_compute[280321]: 2026-02-23 09:55:20.594 280325 DEBUG oslo_concurrency.lockutils [req-151b1934-5942-479e-9267-7021e9757966 req-2de51cab-954e-479d-95fb-a78e0e0ca833 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquiring lock "refresh_cache-85a9c2c0-3a8d-44ce-954f-e106841e2068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:55:20 localhost nova_compute[280321]: 2026-02-23 09:55:20.595 280325 DEBUG oslo_concurrency.lockutils [req-151b1934-5942-479e-9267-7021e9757966 req-2de51cab-954e-479d-95fb-a78e0e0ca833 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquired lock "refresh_cache-85a9c2c0-3a8d-44ce-954f-e106841e2068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:55:20 localhost nova_compute[280321]: 2026-02-23 09:55:20.595 280325 DEBUG nova.network.neutron [req-151b1934-5942-479e-9267-7021e9757966 req-2de51cab-954e-479d-95fb-a78e0e0ca833 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Refreshing network info cache for port d9173d59-2a74-47ac-9a53-29ece647303c _get_instance_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Feb 23 04:55:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 177 active+clean; 192 MiB data, 806 MiB used, 41 GiB / 42 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 70 op/s Feb 23 04:55:21 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:55:21 localhost snmpd[68131]: empty variable list in _query Feb 23 04:55:21 localhost snmpd[68131]: empty variable list in _query Feb 23 04:55:21 localhost snmpd[68131]: empty variable list in _query Feb 23 04:55:21 localhost snmpd[68131]: empty variable list in _query Feb 23 04:55:21 localhost snmpd[68131]: empty variable list in _query Feb 23 04:55:21 localhost nova_compute[280321]: 2026-02-23 09:55:21.374 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:55:21 localhost nova_compute[280321]: 2026-02-23 09:55:21.375 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:55:21 localhost nova_compute[280321]: 2026-02-23 09:55:21.376 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:55:21 localhost nova_compute[280321]: 2026-02-23 09:55:21.489 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "refresh_cache-85a9c2c0-3a8d-44ce-954f-e106841e2068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:55:21 localhost 
nova_compute[280321]: 2026-02-23 09:55:21.582 280325 DEBUG nova.network.neutron [req-151b1934-5942-479e-9267-7021e9757966 req-2de51cab-954e-479d-95fb-a78e0e0ca833 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Updated VIF entry in instance network info cache for port d9173d59-2a74-47ac-9a53-29ece647303c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Feb 23 04:55:21 localhost nova_compute[280321]: 2026-02-23 09:55:21.582 280325 DEBUG nova.network.neutron [req-151b1934-5942-479e-9267-7021e9757966 req-2de51cab-954e-479d-95fb-a78e0e0ca833 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Updating instance_info_cache with network_info: [{"id": "d9173d59-2a74-47ac-9a53-29ece647303c", "address": "fa:16:3e:4b:ba:fc", "network": {"id": "55691124-ab57-4829-87d9-12148e1fa008", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1468165471-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "7f67087411544c55a9225236eb297b90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9173d59-2a", "ovs_interfaceid": "d9173d59-2a74-47ac-9a53-29ece647303c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 23 04:55:21 localhost nova_compute[280321]: 2026-02-23 09:55:21.627 280325 DEBUG oslo_concurrency.lockutils [req-151b1934-5942-479e-9267-7021e9757966 req-2de51cab-954e-479d-95fb-a78e0e0ca833 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Releasing lock "refresh_cache-85a9c2c0-3a8d-44ce-954f-e106841e2068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:55:21 localhost nova_compute[280321]: 2026-02-23 09:55:21.628 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquired lock "refresh_cache-85a9c2c0-3a8d-44ce-954f-e106841e2068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:55:21 localhost nova_compute[280321]: 2026-02-23 09:55:21.628 280325 DEBUG nova.network.neutron [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Feb 23 04:55:21 localhost nova_compute[280321]: 2026-02-23 09:55:21.629 280325 DEBUG nova.objects.instance [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 85a9c2c0-3a8d-44ce-954f-e106841e2068 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 04:55:22 localhost nova_compute[280321]: 2026-02-23 09:55:22.155 280325 DEBUG nova.network.neutron [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Updating instance_info_cache with network_info: [{"id": "d9173d59-2a74-47ac-9a53-29ece647303c", "address": "fa:16:3e:4b:ba:fc", "network": {"id": "55691124-ab57-4829-87d9-12148e1fa008", "bridge": 
"br-int", "label": "tempest-ServersV294TestFqdnHostnames-1468165471-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "7f67087411544c55a9225236eb297b90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9173d59-2a", "ovs_interfaceid": "d9173d59-2a74-47ac-9a53-29ece647303c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 23 04:55:22 localhost nova_compute[280321]: 2026-02-23 09:55:22.183 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Releasing lock "refresh_cache-85a9c2c0-3a8d-44ce-954f-e106841e2068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:55:22 localhost nova_compute[280321]: 2026-02-23 09:55:22.183 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Feb 23 04:55:22 localhost nova_compute[280321]: 2026-02-23 09:55:22.184 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task 
ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:55:22 localhost nova_compute[280321]: 2026-02-23 09:55:22.476 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:22 localhost nova_compute[280321]: 2026-02-23 09:55:22.698 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:55:22 localhost nova_compute[280321]: 2026-02-23 09:55:22.887 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:55:22 localhost nova_compute[280321]: 2026-02-23 09:55:22.917 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:55:22 localhost nova_compute[280321]: 2026-02-23 09:55:22.917 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:55:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 177 active+clean; 192 MiB data, 806 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 819 KiB/s wr, 97 op/s Feb 23 04:55:23 localhost nova_compute[280321]: 2026-02-23 09:55:23.702 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 
25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:23 localhost nova_compute[280321]: 2026-02-23 09:55:23.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:55:23 localhost nova_compute[280321]: 2026-02-23 09:55:23.891 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:55:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v154: 177 pgs: 177 active+clean; 192 MiB data, 806 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s Feb 23 04:55:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:55:25 localhost nova_compute[280321]: 2026-02-23 09:55:25.630 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:25 localhost nova_compute[280321]: 2026-02-23 09:55:25.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:55:26 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:26.315 263679 INFO neutron.agent.linux.ip_lib [None req-45bbf82d-68fa-499f-a2b8-6c980d418fc7 - - - - - -] Device tap4b58d478-6c cannot be used as it has no MAC address#033[00m Feb 23 04:55:26 localhost nova_compute[280321]: 2026-02-23 09:55:26.333 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:26 localhost kernel: device tap4b58d478-6c entered promiscuous mode Feb 23 04:55:26 localhost NetworkManager[5987]: [1771840526.3392] manager: (tap4b58d478-6c): new Generic device (/org/freedesktop/NetworkManager/Devices/26) Feb 23 04:55:26 localhost nova_compute[280321]: 2026-02-23 09:55:26.341 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:26 localhost ovn_controller[155966]: 2026-02-23T09:55:26Z|00099|binding|INFO|Claiming lport 4b58d478-6cb3-4ffe-b17c-87fc95ae13e5 for this chassis. Feb 23 04:55:26 localhost ovn_controller[155966]: 2026-02-23T09:55:26Z|00100|binding|INFO|4b58d478-6cb3-4ffe-b17c-87fc95ae13e5: Claiming unknown Feb 23 04:55:26 localhost systemd-udevd[310247]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:55:26 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:26.355 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-52f0a190-d529-4e0c-bebe-cbd94ee3d830', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52f0a190-d529-4e0c-bebe-cbd94ee3d830', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '91db788359a945be921785f05bf8c883', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': 
'', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83979536-9c99-48bc-800a-bfe92bf16112, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4b58d478-6cb3-4ffe-b17c-87fc95ae13e5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:26 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:26.356 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 4b58d478-6cb3-4ffe-b17c-87fc95ae13e5 in datapath 52f0a190-d529-4e0c-bebe-cbd94ee3d830 bound to our chassis#033[00m Feb 23 04:55:26 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:26.358 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 52f0a190-d529-4e0c-bebe-cbd94ee3d830 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:55:26 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:26.359 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[439a8bba-a451-4de2-8163-22d984430669]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:26 localhost journal[229268]: ethtool ioctl error on tap4b58d478-6c: No such device Feb 23 04:55:26 localhost journal[229268]: ethtool ioctl error on tap4b58d478-6c: No such device Feb 23 04:55:26 localhost journal[229268]: ethtool ioctl error on tap4b58d478-6c: No such device Feb 23 04:55:26 localhost journal[229268]: ethtool ioctl error on tap4b58d478-6c: No such device Feb 23 04:55:26 localhost nova_compute[280321]: 2026-02-23 09:55:26.383 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:26 localhost ovn_controller[155966]: 2026-02-23T09:55:26Z|00101|binding|INFO|Setting lport 
4b58d478-6cb3-4ffe-b17c-87fc95ae13e5 ovn-installed in OVS Feb 23 04:55:26 localhost ovn_controller[155966]: 2026-02-23T09:55:26Z|00102|binding|INFO|Setting lport 4b58d478-6cb3-4ffe-b17c-87fc95ae13e5 up in Southbound Feb 23 04:55:26 localhost nova_compute[280321]: 2026-02-23 09:55:26.387 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:26 localhost journal[229268]: ethtool ioctl error on tap4b58d478-6c: No such device Feb 23 04:55:26 localhost journal[229268]: ethtool ioctl error on tap4b58d478-6c: No such device Feb 23 04:55:26 localhost journal[229268]: ethtool ioctl error on tap4b58d478-6c: No such device Feb 23 04:55:26 localhost journal[229268]: ethtool ioctl error on tap4b58d478-6c: No such device Feb 23 04:55:26 localhost nova_compute[280321]: 2026-02-23 09:55:26.438 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 201 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1009 KiB/s wr, 89 op/s Feb 23 04:55:27 localhost nova_compute[280321]: 2026-02-23 09:55:27.300 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:27 localhost podman[310319]: Feb 23 04:55:27 localhost podman[310319]: 2026-02-23 09:55:27.405476406 +0000 UTC m=+0.101512194 container create 2c8464799a0e124403dc0cd449c6da2af58a8794f0e9bccc3e4a29ad0117d1cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-52f0a190-d529-4e0c-bebe-cbd94ee3d830, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, io.buildah.version=1.43.0) Feb 23 04:55:27 localhost systemd[1]: Started libpod-conmon-2c8464799a0e124403dc0cd449c6da2af58a8794f0e9bccc3e4a29ad0117d1cf.scope. Feb 23 04:55:27 localhost podman[310319]: 2026-02-23 09:55:27.358986376 +0000 UTC m=+0.055022224 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:55:27 localhost systemd[1]: tmp-crun.DwUd8w.mount: Deactivated successfully. Feb 23 04:55:27 localhost nova_compute[280321]: 2026-02-23 09:55:27.480 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:27 localhost systemd[1]: Started libcrun container. Feb 23 04:55:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e298a442e1eb8ea3faa57fee9fa884fbd6c3a8cdc87830c883513c9529df3c4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:55:27 localhost podman[310319]: 2026-02-23 09:55:27.503824192 +0000 UTC m=+0.199860010 container init 2c8464799a0e124403dc0cd449c6da2af58a8794f0e9bccc3e4a29ad0117d1cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-52f0a190-d529-4e0c-bebe-cbd94ee3d830, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216) Feb 23 04:55:27 localhost podman[310319]: 2026-02-23 09:55:27.513102867 +0000 UTC m=+0.209138675 container start 2c8464799a0e124403dc0cd449c6da2af58a8794f0e9bccc3e4a29ad0117d1cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-52f0a190-d529-4e0c-bebe-cbd94ee3d830, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:55:27 localhost dnsmasq[310337]: started, version 2.85 cachesize 150 Feb 23 04:55:27 localhost dnsmasq[310337]: DNS service limited to local subnets Feb 23 04:55:27 localhost dnsmasq[310337]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:55:27 localhost dnsmasq[310337]: warning: no upstream servers configured Feb 23 04:55:27 localhost dnsmasq-dhcp[310337]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:55:27 localhost dnsmasq[310337]: read /var/lib/neutron/dhcp/52f0a190-d529-4e0c-bebe-cbd94ee3d830/addn_hosts - 0 addresses Feb 23 04:55:27 localhost dnsmasq-dhcp[310337]: read /var/lib/neutron/dhcp/52f0a190-d529-4e0c-bebe-cbd94ee3d830/host Feb 23 04:55:27 localhost dnsmasq-dhcp[310337]: read /var/lib/neutron/dhcp/52f0a190-d529-4e0c-bebe-cbd94ee3d830/opts Feb 23 04:55:27 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:27.742 263679 INFO neutron.agent.dhcp.agent [None req-d6a9c6e9-723f-42de-b7c3-d66d8250232a - - - - - -] DHCP configuration for ports {'a71e9436-d305-4865-96a6-b83f6ab6cd3e'} is completed#033[00m Feb 23 04:55:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:55:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:55:28 localhost systemd[1]: tmp-crun.SRg9qh.mount: Deactivated successfully. 
Feb 23 04:55:28 localhost podman[310339]: 2026-02-23 09:55:28.567748115 +0000 UTC m=+0.137791773 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Feb 23 04:55:28 localhost podman[310339]: 2026-02-23 09:55:28.580266598 +0000 UTC m=+0.150310206 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., version=9.7, distribution-scope=public) Feb 23 04:55:28 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 04:55:28 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:28.601 2 INFO neutron.agent.securitygroups_rpc [None req-9a0eab4a-b1f8-4315-8f9f-526f0b4e43e4 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Security group member updated ['917bfa8c-752a-4a55-9acc-5ce6144207b4']#033[00m Feb 23 04:55:28 localhost podman[310338]: 2026-02-23 09:55:28.531542728 +0000 UTC m=+0.102987979 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:55:28 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:28.659 263679 
INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:55:28Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=622abaa1-264f-4476-8fb5-acbc0c816b3f, ip_allocation=immediate, mac_address=fa:16:3e:77:96:92, name=tempest-parent-967314267, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:55:02Z, description=, dns_domain=, id=488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-664573168-network, port_security_enabled=True, project_id=2ac6a6009ea84eb99f60bd242e459002, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48703, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=874, status=ACTIVE, subnets=['99453764-9a6a-431a-83e7-2540619cee45'], tags=[], tenant_id=2ac6a6009ea84eb99f60bd242e459002, updated_at=2026-02-23T09:55:04Z, vlan_transparent=None, network_id=488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, port_security_enabled=True, project_id=2ac6a6009ea84eb99f60bd242e459002, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['917bfa8c-752a-4a55-9acc-5ce6144207b4'], standard_attr_id=1024, status=DOWN, tags=[], tenant_id=2ac6a6009ea84eb99f60bd242e459002, updated_at=2026-02-23T09:55:28Z on network 488344bb-b2b1-4b3f-933b-1a9bfdff1d5c#033[00m Feb 23 04:55:28 localhost podman[310338]: 2026-02-23 09:55:28.667081152 +0000 UTC m=+0.238526443 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, 
name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:55:28 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:55:28 localhost dnsmasq[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/addn_hosts - 2 addresses Feb 23 04:55:28 localhost podman[310391]: 2026-02-23 09:55:28.877070241 +0000 UTC m=+0.071656682 container kill d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 04:55:28 localhost dnsmasq-dhcp[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/host Feb 23 04:55:28 localhost dnsmasq-dhcp[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/opts Feb 23 04:55:28 localhost nova_compute[280321]: 2026-02-23 09:55:28.895 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v156: 177 pgs: 177 active+clean; 201 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 995 KiB/s wr, 80 op/s Feb 23 04:55:29 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:29.186 263679 INFO neutron.agent.dhcp.agent [None req-dd17d2f9-f7f6-4a51-9e8e-be0bd0088eb3 - - - - - -] DHCP configuration for ports {'622abaa1-264f-4476-8fb5-acbc0c816b3f'} is completed#033[00m Feb 23 04:55:29 localhost ovn_controller[155966]: 2026-02-23T09:55:29Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:ba:fc 10.100.0.5 Feb 23 04:55:29 localhost ovn_controller[155966]: 2026-02-23T09:55:29Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:ba:fc 10.100.0.5 Feb 23 
04:55:30 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:30.196 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:55:29Z, description=, device_id=63547d1d-4ff1-44bd-8fd6-f4944296a489, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cadd8f14-4a68-4924-b877-a0db801e3614, ip_allocation=immediate, mac_address=fa:16:3e:59:ca:cf, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:55:24Z, description=, dns_domain=, id=52f0a190-d529-4e0c-bebe-cbd94ee3d830, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesTestJSON-855320230-network, port_security_enabled=True, project_id=91db788359a945be921785f05bf8c883, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=29281, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=995, status=ACTIVE, subnets=['96db6357-bf3f-4dd0-beef-14b91aabb8b2'], tags=[], tenant_id=91db788359a945be921785f05bf8c883, updated_at=2026-02-23T09:55:25Z, vlan_transparent=None, network_id=52f0a190-d529-4e0c-bebe-cbd94ee3d830, port_security_enabled=False, project_id=91db788359a945be921785f05bf8c883, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1034, status=DOWN, tags=[], tenant_id=91db788359a945be921785f05bf8c883, updated_at=2026-02-23T09:55:29Z on network 52f0a190-d529-4e0c-bebe-cbd94ee3d830#033[00m Feb 23 04:55:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:55:30 localhost 
systemd[1]: tmp-crun.ZJB60C.mount: Deactivated successfully. Feb 23 04:55:30 localhost dnsmasq[310337]: read /var/lib/neutron/dhcp/52f0a190-d529-4e0c-bebe-cbd94ee3d830/addn_hosts - 1 addresses Feb 23 04:55:30 localhost dnsmasq-dhcp[310337]: read /var/lib/neutron/dhcp/52f0a190-d529-4e0c-bebe-cbd94ee3d830/host Feb 23 04:55:30 localhost dnsmasq-dhcp[310337]: read /var/lib/neutron/dhcp/52f0a190-d529-4e0c-bebe-cbd94ee3d830/opts Feb 23 04:55:30 localhost podman[310429]: 2026-02-23 09:55:30.416754636 +0000 UTC m=+0.054632880 container kill 2c8464799a0e124403dc0cd449c6da2af58a8794f0e9bccc3e4a29ad0117d1cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-52f0a190-d529-4e0c-bebe-cbd94ee3d830, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:55:30 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:30.634 263679 INFO neutron.agent.dhcp.agent [None req-e1fc0b76-3241-4005-8f64-639271c86afd - - - - - -] DHCP configuration for ports {'cadd8f14-4a68-4924-b877-a0db801e3614'} is completed#033[00m Feb 23 04:55:30 localhost nova_compute[280321]: 2026-02-23 09:55:30.664 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:30 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:30.776 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': 
'0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:30 localhost nova_compute[280321]: 2026-02-23 09:55:30.776 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:30 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:30.778 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 04:55:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v157: 177 pgs: 177 active+clean; 213 MiB data, 860 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 92 op/s Feb 23 04:55:31 localhost openstack_network_exporter[243519]: ERROR 09:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:55:31 localhost openstack_network_exporter[243519]: Feb 23 04:55:31 localhost openstack_network_exporter[243519]: ERROR 09:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:55:31 localhost openstack_network_exporter[243519]: Feb 23 04:55:32 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:32.011 263679 INFO neutron.agent.linux.ip_lib [None req-ac34f11a-b8b0-4a35-b60f-2d1d25fa207d - - - - - -] Device tap3f6889f9-47 cannot be used as it has no MAC address#033[00m Feb 23 04:55:32 localhost nova_compute[280321]: 2026-02-23 09:55:32.034 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:32 localhost kernel: device tap3f6889f9-47 entered promiscuous mode Feb 23 04:55:32 localhost nova_compute[280321]: 2026-02-23 
09:55:32.041 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:32 localhost NetworkManager[5987]: [1771840532.0419] manager: (tap3f6889f9-47): new Generic device (/org/freedesktop/NetworkManager/Devices/27) Feb 23 04:55:32 localhost ovn_controller[155966]: 2026-02-23T09:55:32Z|00103|binding|INFO|Claiming lport 3f6889f9-478c-43a9-a43d-d91e4cd588c8 for this chassis. Feb 23 04:55:32 localhost ovn_controller[155966]: 2026-02-23T09:55:32Z|00104|binding|INFO|3f6889f9-478c-43a9-a43d-d91e4cd588c8: Claiming unknown Feb 23 04:55:32 localhost systemd-udevd[310459]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:55:32 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:32.064 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-9eb5761d-94a8-4798-bed6-a9e5cf6518df', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9eb5761d-94a8-4798-bed6-a9e5cf6518df', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ac6a6009ea84eb99f60bd242e459002', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10f6c850-d5b0-4b68-95e9-d2dc898e2718, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=3f6889f9-478c-43a9-a43d-d91e4cd588c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:32 localhost journal[229268]: ethtool ioctl error on tap3f6889f9-47: No such device Feb 23 04:55:32 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:32.067 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 3f6889f9-478c-43a9-a43d-d91e4cd588c8 in datapath 9eb5761d-94a8-4798-bed6-a9e5cf6518df bound to our chassis#033[00m Feb 23 04:55:32 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:32.069 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9eb5761d-94a8-4798-bed6-a9e5cf6518df or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:55:32 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:32.070 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[30dede9a-715f-4857-b365-d233dfa0bc92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:32 localhost journal[229268]: ethtool ioctl error on tap3f6889f9-47: No such device Feb 23 04:55:32 localhost nova_compute[280321]: 2026-02-23 09:55:32.078 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:32 localhost journal[229268]: ethtool ioctl error on tap3f6889f9-47: No such device Feb 23 04:55:32 localhost ovn_controller[155966]: 2026-02-23T09:55:32Z|00105|binding|INFO|Setting lport 3f6889f9-478c-43a9-a43d-d91e4cd588c8 ovn-installed in OVS Feb 23 04:55:32 localhost ovn_controller[155966]: 2026-02-23T09:55:32Z|00106|binding|INFO|Setting lport 3f6889f9-478c-43a9-a43d-d91e4cd588c8 up in Southbound Feb 23 04:55:32 localhost nova_compute[280321]: 2026-02-23 09:55:32.083 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:32 localhost journal[229268]: ethtool ioctl error on tap3f6889f9-47: No such device Feb 23 04:55:32 localhost journal[229268]: ethtool ioctl error on tap3f6889f9-47: No such device Feb 23 04:55:32 localhost journal[229268]: ethtool ioctl error on tap3f6889f9-47: No such device Feb 23 04:55:32 localhost journal[229268]: ethtool ioctl error on tap3f6889f9-47: No such device Feb 23 04:55:32 localhost journal[229268]: ethtool ioctl error on tap3f6889f9-47: No such device Feb 23 04:55:32 localhost nova_compute[280321]: 2026-02-23 09:55:32.150 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:32 localhost nova_compute[280321]: 2026-02-23 09:55:32.483 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:55:33 localhost podman[310530]: Feb 23 04:55:33 localhost podman[310531]: 2026-02-23 09:55:33.015757984 +0000 UTC m=+0.086554617 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:55:33 localhost podman[310530]: 2026-02-23 09:55:33.026546594 +0000 UTC m=+0.099958157 container create 9fc636b91cb89a928c025b9567829df756a44f90b3f282a31498560e4de4b1b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9eb5761d-94a8-4798-bed6-a9e5cf6518df, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:55:33 localhost podman[310530]: 2026-02-23 09:55:32.980871808 +0000 UTC m=+0.054283371 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:55:33 localhost systemd[1]: Started libpod-conmon-9fc636b91cb89a928c025b9567829df756a44f90b3f282a31498560e4de4b1b7.scope. Feb 23 04:55:33 localhost podman[310531]: 2026-02-23 09:55:33.086883838 +0000 UTC m=+0.157680461 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:55:33 localhost systemd[1]: tmp-crun.Ya39oa.mount: Deactivated successfully. Feb 23 04:55:33 localhost systemd[1]: Started libcrun container. Feb 23 04:55:33 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:55:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d9384683f823fde6176f594f20ad42025b4abff056b969bab3b55b515854220/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:55:33 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e108 e108: 6 total, 6 up, 6 in Feb 23 04:55:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v159: 177 pgs: 177 active+clean; 225 MiB data, 882 MiB used, 41 GiB / 42 GiB avail; 392 KiB/s rd, 2.6 MiB/s wr, 76 op/s Feb 23 04:55:33 localhost podman[310530]: 2026-02-23 09:55:33.12487796 +0000 UTC m=+0.198289523 container init 9fc636b91cb89a928c025b9567829df756a44f90b3f282a31498560e4de4b1b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9eb5761d-94a8-4798-bed6-a9e5cf6518df, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, io.buildah.version=1.43.0) Feb 23 04:55:33 localhost podman[310530]: 2026-02-23 09:55:33.133575456 +0000 UTC m=+0.206987019 container start 9fc636b91cb89a928c025b9567829df756a44f90b3f282a31498560e4de4b1b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9eb5761d-94a8-4798-bed6-a9e5cf6518df, 
org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216) Feb 23 04:55:33 localhost dnsmasq[310573]: started, version 2.85 cachesize 150 Feb 23 04:55:33 localhost dnsmasq[310573]: DNS service limited to local subnets Feb 23 04:55:33 localhost dnsmasq[310573]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:55:33 localhost dnsmasq[310573]: warning: no upstream servers configured Feb 23 04:55:33 localhost dnsmasq-dhcp[310573]: DHCP, static leases only on 19.80.0.0, lease time 1d Feb 23 04:55:33 localhost dnsmasq[310573]: read /var/lib/neutron/dhcp/9eb5761d-94a8-4798-bed6-a9e5cf6518df/addn_hosts - 0 addresses Feb 23 04:55:33 localhost dnsmasq-dhcp[310573]: read /var/lib/neutron/dhcp/9eb5761d-94a8-4798-bed6-a9e5cf6518df/host Feb 23 04:55:33 localhost dnsmasq-dhcp[310573]: read /var/lib/neutron/dhcp/9eb5761d-94a8-4798-bed6-a9e5cf6518df/opts Feb 23 04:55:33 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:33.287 263679 INFO neutron.agent.dhcp.agent [None req-5822648f-4db0-427e-b5ce-dd06c891a767 - - - - - -] DHCP configuration for ports {'0a17f90d-6861-4307-9c4d-59966b232581'} is completed#033[00m Feb 23 04:55:33 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:33.405 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:55:29Z, description=, device_id=63547d1d-4ff1-44bd-8fd6-f4944296a489, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, 
extra_dhcp_opts=[], fixed_ips=[], id=cadd8f14-4a68-4924-b877-a0db801e3614, ip_allocation=immediate, mac_address=fa:16:3e:59:ca:cf, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:55:24Z, description=, dns_domain=, id=52f0a190-d529-4e0c-bebe-cbd94ee3d830, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesTestJSON-855320230-network, port_security_enabled=True, project_id=91db788359a945be921785f05bf8c883, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=29281, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=995, status=ACTIVE, subnets=['96db6357-bf3f-4dd0-beef-14b91aabb8b2'], tags=[], tenant_id=91db788359a945be921785f05bf8c883, updated_at=2026-02-23T09:55:25Z, vlan_transparent=None, network_id=52f0a190-d529-4e0c-bebe-cbd94ee3d830, port_security_enabled=False, project_id=91db788359a945be921785f05bf8c883, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1034, status=DOWN, tags=[], tenant_id=91db788359a945be921785f05bf8c883, updated_at=2026-02-23T09:55:29Z on network 52f0a190-d529-4e0c-bebe-cbd94ee3d830#033[00m Feb 23 04:55:33 localhost dnsmasq[310337]: read /var/lib/neutron/dhcp/52f0a190-d529-4e0c-bebe-cbd94ee3d830/addn_hosts - 1 addresses Feb 23 04:55:33 localhost dnsmasq-dhcp[310337]: read /var/lib/neutron/dhcp/52f0a190-d529-4e0c-bebe-cbd94ee3d830/host Feb 23 04:55:33 localhost dnsmasq-dhcp[310337]: read /var/lib/neutron/dhcp/52f0a190-d529-4e0c-bebe-cbd94ee3d830/opts Feb 23 04:55:33 localhost podman[310589]: 2026-02-23 09:55:33.639610165 +0000 UTC m=+0.058297803 container kill 2c8464799a0e124403dc0cd449c6da2af58a8794f0e9bccc3e4a29ad0117d1cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-52f0a190-d529-4e0c-bebe-cbd94ee3d830, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:55:33 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:33.780 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:34 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:34.005 263679 INFO neutron.agent.dhcp.agent [None req-ba831057-0c91-4962-96bb-63ee675aa8a6 - - - - - -] DHCP configuration for ports {'cadd8f14-4a68-4924-b877-a0db801e3614'} is completed#033[00m Feb 23 04:55:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:55:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:55:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v160: 177 pgs: 177 active+clean; 225 MiB data, 882 MiB used, 41 GiB / 42 GiB avail; 392 KiB/s rd, 2.6 MiB/s wr, 76 op/s Feb 23 04:55:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e109 e109: 6 total, 6 up, 6 in Feb 23 04:55:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:55:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:55:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 04:55:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:55:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:55:35 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:35.592 2 INFO neutron.agent.securitygroups_rpc [None req-0cb29720-58ee-4ab0-99f8-69e7c954667c 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Security group member updated ['917bfa8c-752a-4a55-9acc-5ce6144207b4']#033[00m Feb 23 04:55:35 localhost nova_compute[280321]: 2026-02-23 09:55:35.666 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:35 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:35.942 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:55:35Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=adb72f75-7429-4a32-becc-fdcd334c7c29, ip_allocation=immediate, mac_address=fa:16:3e:c9:7a:38, name=tempest-subport-2088636283, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:55:29Z, description=, dns_domain=, id=9eb5761d-94a8-4798-bed6-a9e5cf6518df, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-subport_net-356630163, port_security_enabled=True, project_id=2ac6a6009ea84eb99f60bd242e459002, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=63602, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1030, status=ACTIVE, 
subnets=['4a1e43ba-8e60-4948-9b13-76ae1b0a4af9'], tags=[], tenant_id=2ac6a6009ea84eb99f60bd242e459002, updated_at=2026-02-23T09:55:30Z, vlan_transparent=None, network_id=9eb5761d-94a8-4798-bed6-a9e5cf6518df, port_security_enabled=True, project_id=2ac6a6009ea84eb99f60bd242e459002, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['917bfa8c-752a-4a55-9acc-5ce6144207b4'], standard_attr_id=1068, status=DOWN, tags=[], tenant_id=2ac6a6009ea84eb99f60bd242e459002, updated_at=2026-02-23T09:55:35Z on network 9eb5761d-94a8-4798-bed6-a9e5cf6518df#033[00m Feb 23 04:55:36 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:36.026 263679 INFO neutron.agent.linux.ip_lib [None req-dd81be0b-3b1d-4122-8f91-a4d3ff71e5ff - - - - - -] Device tap95aa7aa1-bc cannot be used as it has no MAC address#033[00m Feb 23 04:55:36 localhost nova_compute[280321]: 2026-02-23 09:55:36.092 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:36 localhost kernel: device tap95aa7aa1-bc entered promiscuous mode Feb 23 04:55:36 localhost NetworkManager[5987]: [1771840536.1003] manager: (tap95aa7aa1-bc): new Generic device (/org/freedesktop/NetworkManager/Devices/28) Feb 23 04:55:36 localhost nova_compute[280321]: 2026-02-23 09:55:36.100 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:36 localhost ovn_controller[155966]: 2026-02-23T09:55:36Z|00107|binding|INFO|Claiming lport 95aa7aa1-bc59-4f0a-80a0-f4ca724aa1e0 for this chassis. Feb 23 04:55:36 localhost ovn_controller[155966]: 2026-02-23T09:55:36Z|00108|binding|INFO|95aa7aa1-bc59-4f0a-80a0-f4ca724aa1e0: Claiming unknown Feb 23 04:55:36 localhost systemd-udevd[310634]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:36.117 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-19d81a54-e14d-41c5-8e99-c5c9107d88df', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d81a54-e14d-41c5-8e99-c5c9107d88df', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c50f2ae0a2f4cdd8225f6794547909b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b34df3e-70fe-48d7-b5b8-ba7a966b623e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=95aa7aa1-bc59-4f0a-80a0-f4ca724aa1e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:36.122 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 95aa7aa1-bc59-4f0a-80a0-f4ca724aa1e0 in datapath 19d81a54-e14d-41c5-8e99-c5c9107d88df bound to our chassis#033[00m Feb 23 04:55:36 localhost journal[229268]: ethtool ioctl error on tap95aa7aa1-bc: No such device Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:36.126 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 19d81a54-e14d-41c5-8e99-c5c9107d88df or it has no MAC or 
IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:55:36 localhost journal[229268]: ethtool ioctl error on tap95aa7aa1-bc: No such device Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:36.128 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4e25e1-4aa0-4758-a1e3-10c1b48fd4aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:36 localhost journal[229268]: ethtool ioctl error on tap95aa7aa1-bc: No such device Feb 23 04:55:36 localhost journal[229268]: ethtool ioctl error on tap95aa7aa1-bc: No such device Feb 23 04:55:36 localhost journal[229268]: ethtool ioctl error on tap95aa7aa1-bc: No such device Feb 23 04:55:36 localhost ovn_controller[155966]: 2026-02-23T09:55:36Z|00109|binding|INFO|Setting lport 95aa7aa1-bc59-4f0a-80a0-f4ca724aa1e0 ovn-installed in OVS Feb 23 04:55:36 localhost ovn_controller[155966]: 2026-02-23T09:55:36Z|00110|binding|INFO|Setting lport 95aa7aa1-bc59-4f0a-80a0-f4ca724aa1e0 up in Southbound Feb 23 04:55:36 localhost journal[229268]: ethtool ioctl error on tap95aa7aa1-bc: No such device Feb 23 04:55:36 localhost nova_compute[280321]: 2026-02-23 09:55:36.146 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:36 localhost journal[229268]: ethtool ioctl error on tap95aa7aa1-bc: No such device Feb 23 04:55:36 localhost journal[229268]: ethtool ioctl error on tap95aa7aa1-bc: No such device Feb 23 04:55:36 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e110 e110: 6 total, 6 up, 6 in Feb 23 04:55:36 localhost nova_compute[280321]: 2026-02-23 09:55:36.204 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:36 localhost dnsmasq[310573]: read 
/var/lib/neutron/dhcp/9eb5761d-94a8-4798-bed6-a9e5cf6518df/addn_hosts - 1 addresses Feb 23 04:55:36 localhost dnsmasq-dhcp[310573]: read /var/lib/neutron/dhcp/9eb5761d-94a8-4798-bed6-a9e5cf6518df/host Feb 23 04:55:36 localhost dnsmasq-dhcp[310573]: read /var/lib/neutron/dhcp/9eb5761d-94a8-4798-bed6-a9e5cf6518df/opts Feb 23 04:55:36 localhost podman[310650]: 2026-02-23 09:55:36.22512881 +0000 UTC m=+0.069780275 container kill 9fc636b91cb89a928c025b9567829df756a44f90b3f282a31498560e4de4b1b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9eb5761d-94a8-4798-bed6-a9e5cf6518df, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:36.356 161941 DEBUG eventlet.wsgi.server [-] (161941) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:36.359 161941 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015 Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: Accept: */*#015 Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: Connection: close#015 Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: Content-Type: text/plain#015 Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: Host: 169.254.169.254#015 Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: User-Agent: curl/7.84.0#015 Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: X-Forwarded-For: 10.100.0.5#015 Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: X-Ovn-Network-Id: 55691124-ab57-4829-87d9-12148e1fa008 __call__ 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Feb 23 04:55:36 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:36.481 263679 INFO neutron.agent.dhcp.agent [None req-e9e6e0b9-c4f3-46fc-aa0d-07f8a4998605 - - - - - -] DHCP configuration for ports {'adb72f75-7429-4a32-becc-fdcd334c7c29'} is completed#033[00m Feb 23 04:55:36 localhost ovn_controller[155966]: 2026-02-23T09:55:36Z|00111|binding|INFO|Removing iface tap95aa7aa1-bc ovn-installed in OVS Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:36.721 161842 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 8a6572fb-1fe7-4499-a95a-452aa4c33c43 with type ""#033[00m Feb 23 04:55:36 localhost ovn_controller[155966]: 2026-02-23T09:55:36Z|00112|binding|INFO|Removing lport 95aa7aa1-bc59-4f0a-80a0-f4ca724aa1e0 ovn-installed in OVS Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:36.723 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-19d81a54-e14d-41c5-8e99-c5c9107d88df', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d81a54-e14d-41c5-8e99-c5c9107d88df', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c50f2ae0a2f4cdd8225f6794547909b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], 
encap=[], mirror_rules=[], datapath=1b34df3e-70fe-48d7-b5b8-ba7a966b623e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=95aa7aa1-bc59-4f0a-80a0-f4ca724aa1e0) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:36 localhost nova_compute[280321]: 2026-02-23 09:55:36.723 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:36.726 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 95aa7aa1-bc59-4f0a-80a0-f4ca724aa1e0 in datapath 19d81a54-e14d-41c5-8e99-c5c9107d88df unbound from our chassis#033[00m Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:36.728 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 19d81a54-e14d-41c5-8e99-c5c9107d88df or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:55:36 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:36.729 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[387d35a0-10a4-48b7-bfa3-5904bf6c4476]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:36 localhost nova_compute[280321]: 2026-02-23 09:55:36.733 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:36 localhost podman[310725]: Feb 23 04:55:36 localhost podman[310725]: 2026-02-23 09:55:36.982843931 +0000 UTC m=+0.096299314 container create 575c629dc3943c926a1585dbc55387c874c888ce628788c0bd6519e608de5ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19d81a54-e14d-41c5-8e99-c5c9107d88df, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 23 04:55:37 localhost systemd[1]: Started libpod-conmon-575c629dc3943c926a1585dbc55387c874c888ce628788c0bd6519e608de5ac0.scope. Feb 23 04:55:37 localhost podman[310725]: 2026-02-23 09:55:36.937714932 +0000 UTC m=+0.051170355 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:55:37 localhost systemd[1]: Started libcrun container. Feb 23 04:55:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b4d9f6941369350dcdb8dbf05ae9dd1699fb509a9202daeb8c62e1ec3e88b4e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:55:37 localhost podman[310725]: 2026-02-23 09:55:37.061684511 +0000 UTC m=+0.175139894 container init 575c629dc3943c926a1585dbc55387c874c888ce628788c0bd6519e608de5ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19d81a54-e14d-41c5-8e99-c5c9107d88df, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0) Feb 23 04:55:37 localhost podman[310725]: 2026-02-23 09:55:37.070521281 +0000 UTC m=+0.183976654 container start 575c629dc3943c926a1585dbc55387c874c888ce628788c0bd6519e608de5ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19d81a54-e14d-41c5-8e99-c5c9107d88df, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 23 04:55:37 localhost dnsmasq[310743]: started, version 2.85 cachesize 150 Feb 23 04:55:37 localhost dnsmasq[310743]: DNS service limited to local subnets Feb 23 04:55:37 localhost dnsmasq[310743]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:55:37 localhost dnsmasq[310743]: warning: no upstream servers configured Feb 23 04:55:37 localhost dnsmasq-dhcp[310743]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 23 04:55:37 localhost dnsmasq[310743]: read /var/lib/neutron/dhcp/19d81a54-e14d-41c5-8e99-c5c9107d88df/addn_hosts - 0 addresses Feb 23 04:55:37 localhost dnsmasq-dhcp[310743]: read /var/lib/neutron/dhcp/19d81a54-e14d-41c5-8e99-c5c9107d88df/host Feb 23 04:55:37 localhost dnsmasq-dhcp[310743]: read /var/lib/neutron/dhcp/19d81a54-e14d-41c5-8e99-c5c9107d88df/opts Feb 23 04:55:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v163: 177 pgs: 177 active+clean; 225 MiB data, 882 MiB used, 41 GiB / 42 GiB avail; 513 KiB/s rd, 803 KiB/s wr, 183 op/s Feb 23 04:55:37 localhost nova_compute[280321]: 2026-02-23 09:55:37.201 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:37 localhost kernel: device tap95aa7aa1-bc left promiscuous mode Feb 23 04:55:37 localhost nova_compute[280321]: 2026-02-23 09:55:37.215 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.233 263679 INFO 
neutron.agent.dhcp.agent [None req-8bd51272-4e9b-4d4c-9682-02805efda707 - - - - - -] DHCP configuration for ports {'f7e6ba6e-6d4b-46e5-8a21-34e53bcc8467'} is completed#033[00m Feb 23 04:55:37 localhost nova_compute[280321]: 2026-02-23 09:55:37.486 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:37 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:37.623 161941 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Feb 23 04:55:37 localhost haproxy-metadata-proxy-55691124-ab57-4829-87d9-12148e1fa008[310098]: 10.100.0.5:39318 [23/Feb/2026:09:55:36.355] listener listener/metadata 0/0/0/1269/1269 200 1653 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1" Feb 23 04:55:37 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:37.624 161941 INFO eventlet.wsgi.server [-] 10.100.0.5, "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200 len: 1669 time: 1.2660563#033[00m Feb 23 04:55:37 localhost dnsmasq[310743]: read /var/lib/neutron/dhcp/19d81a54-e14d-41c5-8e99-c5c9107d88df/addn_hosts - 0 addresses Feb 23 04:55:37 localhost dnsmasq-dhcp[310743]: read /var/lib/neutron/dhcp/19d81a54-e14d-41c5-8e99-c5c9107d88df/host Feb 23 04:55:37 localhost dnsmasq-dhcp[310743]: read /var/lib/neutron/dhcp/19d81a54-e14d-41c5-8e99-c5c9107d88df/opts Feb 23 04:55:37 localhost podman[310764]: 2026-02-23 09:55:37.778373429 +0000 UTC m=+0.057451377 container kill 575c629dc3943c926a1585dbc55387c874c888ce628788c0bd6519e608de5ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19d81a54-e14d-41c5-8e99-c5c9107d88df, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:55:37 localhost nova_compute[280321]: 2026-02-23 09:55:37.783 280325 DEBUG oslo_concurrency.lockutils [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Acquiring lock "85a9c2c0-3a8d-44ce-954f-e106841e2068" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:55:37 localhost nova_compute[280321]: 2026-02-23 09:55:37.783 280325 DEBUG oslo_concurrency.lockutils [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lock "85a9c2c0-3a8d-44ce-954f-e106841e2068" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:55:37 localhost nova_compute[280321]: 2026-02-23 09:55:37.783 280325 DEBUG oslo_concurrency.lockutils [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Acquiring lock "85a9c2c0-3a8d-44ce-954f-e106841e2068-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:55:37 localhost nova_compute[280321]: 2026-02-23 09:55:37.784 280325 DEBUG oslo_concurrency.lockutils [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lock "85a9c2c0-3a8d-44ce-954f-e106841e2068-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:55:37 localhost nova_compute[280321]: 2026-02-23 09:55:37.784 280325 DEBUG oslo_concurrency.lockutils [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lock "85a9c2c0-3a8d-44ce-954f-e106841e2068-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:55:37 localhost sshd[310776]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:55:37 localhost nova_compute[280321]: 2026-02-23 09:55:37.785 280325 INFO nova.compute.manager [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Terminating instance#033[00m Feb 23 04:55:37 localhost nova_compute[280321]: 2026-02-23 09:55:37.786 280325 DEBUG nova.compute.manager [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent [-] Unable to reload_allocations dhcp for 19d81a54-e14d-41c5-8e99-c5c9107d88df.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap95aa7aa1-bc not found in namespace qdhcp-19d81a54-e14d-41c5-8e99-c5c9107d88df. 
Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR 
neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Feb 23 04:55:37 
localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent return fut.result() Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent return self.__get_result() Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent raise self._exception Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 
ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap95aa7aa1-bc not found in namespace qdhcp-19d81a54-e14d-41c5-8e99-c5c9107d88df. Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.808 263679 ERROR neutron.agent.dhcp.agent #033[00m Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.816 263679 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Feb 23 04:55:37 localhost kernel: device tapd9173d59-2a left promiscuous mode Feb 23 04:55:37 localhost NetworkManager[5987]: [1771840537.8519] device (tapd9173d59-2a): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Feb 23 04:55:37 localhost ovn_controller[155966]: 2026-02-23T09:55:37Z|00113|binding|INFO|Releasing lport d9173d59-2a74-47ac-9a53-29ece647303c from this chassis (sb_readonly=0) Feb 23 04:55:37 localhost ovn_controller[155966]: 2026-02-23T09:55:37Z|00114|binding|INFO|Setting lport d9173d59-2a74-47ac-9a53-29ece647303c down in Southbound Feb 23 04:55:37 localhost ovn_controller[155966]: 2026-02-23T09:55:37Z|00115|binding|INFO|Removing iface tapd9173d59-2a ovn-installed in OVS Feb 23 04:55:37 localhost nova_compute[280321]: 2026-02-23 09:55:37.865 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:37 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:37.873 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:ba:fc 10.100.0.5'], port_security=['fa:16:3e:4b:ba:fc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], 
options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '85a9c2c0-3a8d-44ce-954f-e106841e2068', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55691124-ab57-4829-87d9-12148e1fa008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f67087411544c55a9225236eb297b90', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd03de417-eb2e-47e8-ad59-eae56add5dd4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain', 'neutron:port_fip': '192.168.122.194'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c755659-d501-4122-bfeb-a7f481d4a11a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=d9173d59-2a74-47ac-9a53-29ece647303c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:37 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:37.875 161842 INFO neutron.agent.ovn.metadata.agent [-] Port d9173d59-2a74-47ac-9a53-29ece647303c in datapath 55691124-ab57-4829-87d9-12148e1fa008 unbound from our chassis#033[00m Feb 23 04:55:37 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:37.880 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55691124-ab57-4829-87d9-12148e1fa008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:55:37 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:37.881 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[7da2ef5e-054a-4b49-8757-2b34330ce62b]: (4, True) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:37 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:37.882 161842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-55691124-ab57-4829-87d9-12148e1fa008 namespace which is not needed anymore#033[00m Feb 23 04:55:37 localhost nova_compute[280321]: 2026-02-23 09:55:37.882 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:37 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d0000000a.scope: Deactivated successfully. Feb 23 04:55:37 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d0000000a.scope: Consumed 11.444s CPU time. Feb 23 04:55:37 localhost systemd-machined[205673]: Machine qemu-3-instance-0000000a terminated. Feb 23 04:55:37 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e111 e111: 6 total, 6 up, 6 in Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.988 263679 INFO neutron.agent.dhcp.agent [None req-20fd52a2-7161-4890-a0c0-7c4ab9e9d17f - - - - - -] All active networks have been fetched through RPC.#033[00m Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.989 263679 INFO neutron.agent.dhcp.agent [-] Starting network 19d81a54-e14d-41c5-8e99-c5c9107d88df dhcp configuration#033[00m Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.990 263679 INFO neutron.agent.dhcp.agent [-] Finished network 19d81a54-e14d-41c5-8e99-c5c9107d88df dhcp configuration#033[00m Feb 23 04:55:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:37.991 263679 INFO neutron.agent.dhcp.agent [None req-20fd52a2-7161-4890-a0c0-7c4ab9e9d17f - - - - - -] Synchronizing state complete#033[00m Feb 23 04:55:38 localhost NetworkManager[5987]: [1771840538.0050] manager: (tapd9173d59-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/29) Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 
09:55:38.009 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.017 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.023 280325 INFO nova.virt.libvirt.driver [-] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Instance destroyed successfully.#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.024 280325 DEBUG nova.objects.instance [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lazy-loading 'resources' on Instance uuid 85a9c2c0-3a8d-44ce-954f-e106841e2068 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.041 280325 DEBUG nova.virt.libvirt.vif [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-23T09:55:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(6),hidden=False,host='np0005626465.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=10,image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',info_cache=InstanceInfoCache,instance_type_id=6,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF2rlDQqpE8gN/S5lFAyRLGYjpcIOcewybNn5UWV3V3SEazahuCHhiJUvS7fIbH3nnsHB7jxCAoFyueFoR4fMctfTxp9VYvlIgSdOtHAyy+XSsU2Yw/KKnx0uI2GDyViFg==',key_name='tempest-keypair-15325135',keypairs=,launch_index=0,launched_at=2026-02-23T09:55:17Z,launched_on='np0005626465.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005626465.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='7f67087411544c55a9225236eb297b90',ramdisk_id='',reservation_id='r-5x952hk6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-1720923751',owner_user_name='tempest-ServersV294TestFqdnHostnames-1720923751-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2026-02-23T09:55:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2b38675f57640819bf191ad8152e7cb',uuid=85a9c2c0-3a8d-44ce-954f-e106841e2068,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d9173d59-2a74-47ac-9a53-29ece647303c", "address": "fa:16:3e:4b:ba:fc", "network": {"id": "55691124-ab57-4829-87d9-12148e1fa008", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1468165471-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "7f67087411544c55a9225236eb297b90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9173d59-2a", "ovs_interfaceid": "d9173d59-2a74-47ac-9a53-29ece647303c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.041 280325 DEBUG nova.network.os_vif_util [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Converting VIF {"id": "d9173d59-2a74-47ac-9a53-29ece647303c", "address": "fa:16:3e:4b:ba:fc", "network": {"id": "55691124-ab57-4829-87d9-12148e1fa008", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1468165471-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "7f67087411544c55a9225236eb297b90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": 
true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9173d59-2a", "ovs_interfaceid": "d9173d59-2a74-47ac-9a53-29ece647303c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.043 280325 DEBUG nova.network.os_vif_util [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4b:ba:fc,bridge_name='br-int',has_traffic_filtering=True,id=d9173d59-2a74-47ac-9a53-29ece647303c,network=Network(55691124-ab57-4829-87d9-12148e1fa008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9173d59-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.043 280325 DEBUG os_vif [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:ba:fc,bridge_name='br-int',has_traffic_filtering=True,id=d9173d59-2a74-47ac-9a53-29ece647303c,network=Network(55691124-ab57-4829-87d9-12148e1fa008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9173d59-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.047 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.047 
280325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9173d59-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.049 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.051 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.054 280325 INFO os_vif [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:ba:fc,bridge_name='br-int',has_traffic_filtering=True,id=d9173d59-2a74-47ac-9a53-29ece647303c,network=Network(55691124-ab57-4829-87d9-12148e1fa008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9173d59-2a')#033[00m Feb 23 04:55:38 localhost neutron-haproxy-ovnmeta-55691124-ab57-4829-87d9-12148e1fa008[310088]: [NOTICE] (310094) : haproxy version is 2.8.14-c23fe91 Feb 23 04:55:38 localhost neutron-haproxy-ovnmeta-55691124-ab57-4829-87d9-12148e1fa008[310088]: [NOTICE] (310094) : path to executable is /usr/sbin/haproxy Feb 23 04:55:38 localhost neutron-haproxy-ovnmeta-55691124-ab57-4829-87d9-12148e1fa008[310088]: [ALERT] (310094) : Current worker (310098) exited with code 143 (Terminated) Feb 23 04:55:38 localhost neutron-haproxy-ovnmeta-55691124-ab57-4829-87d9-12148e1fa008[310088]: [WARNING] (310094) : All workers exited. Exiting... (0) Feb 23 04:55:38 localhost systemd[1]: tmp-crun.eLkmgc.mount: Deactivated successfully. 
Feb 23 04:55:38 localhost systemd[1]: libpod-75c61f3d30fc73e26e96340d3a6dc4f5ce809671e412f02b6b9f91882d164172.scope: Deactivated successfully. Feb 23 04:55:38 localhost podman[310799]: 2026-02-23 09:55:38.091010706 +0000 UTC m=+0.093890452 container died 75c61f3d30fc73e26e96340d3a6dc4f5ce809671e412f02b6b9f91882d164172 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55691124-ab57-4829-87d9-12148e1fa008, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 23 04:55:38 localhost ovn_controller[155966]: 2026-02-23T09:55:38Z|00116|binding|INFO|Releasing lport e231ecd8-0ed4-4a64-9851-e1b9a6d545a2 from this chassis (sb_readonly=0) Feb 23 04:55:38 localhost podman[310799]: 2026-02-23 09:55:38.128283575 +0000 UTC m=+0.131163291 container cleanup 75c61f3d30fc73e26e96340d3a6dc4f5ce809671e412f02b6b9f91882d164172 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55691124-ab57-4829-87d9-12148e1fa008, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2) Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.142 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:38 localhost podman[310836]: 2026-02-23 09:55:38.179933644 +0000 UTC m=+0.080991157 container cleanup 
75c61f3d30fc73e26e96340d3a6dc4f5ce809671e412f02b6b9f91882d164172 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55691124-ab57-4829-87d9-12148e1fa008, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:55:38 localhost systemd[1]: libpod-conmon-75c61f3d30fc73e26e96340d3a6dc4f5ce809671e412f02b6b9f91882d164172.scope: Deactivated successfully. Feb 23 04:55:38 localhost ovn_controller[155966]: 2026-02-23T09:55:38Z|00117|binding|INFO|Releasing lport e231ecd8-0ed4-4a64-9851-e1b9a6d545a2 from this chassis (sb_readonly=0) Feb 23 04:55:38 localhost podman[310852]: 2026-02-23 09:55:38.217722669 +0000 UTC m=+0.066548535 container remove 75c61f3d30fc73e26e96340d3a6dc4f5ce809671e412f02b6b9f91882d164172 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55691124-ab57-4829-87d9-12148e1fa008, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0) Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.220 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:38.221 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[588bfc43-d574-4d47-9e71-d3f2c7fafd0c]: (4, ('Mon Feb 23 09:55:37 AM UTC 2026 Stopping container 
neutron-haproxy-ovnmeta-55691124-ab57-4829-87d9-12148e1fa008 (75c61f3d30fc73e26e96340d3a6dc4f5ce809671e412f02b6b9f91882d164172)\n75c61f3d30fc73e26e96340d3a6dc4f5ce809671e412f02b6b9f91882d164172\nMon Feb 23 09:55:38 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-55691124-ab57-4829-87d9-12148e1fa008 (75c61f3d30fc73e26e96340d3a6dc4f5ce809671e412f02b6b9f91882d164172)\n75c61f3d30fc73e26e96340d3a6dc4f5ce809671e412f02b6b9f91882d164172\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:38.222 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[0009f2fc-9e92-4096-b5ab-5836b8144448]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:38.223 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55691124-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.224 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:38 localhost kernel: device tap55691124-a0 left promiscuous mode Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.226 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:38.228 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f36567-0fed-43cc-b8bc-dbd5d5f8856f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.231 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:38.246 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[c0eb464a-6034-47a5-83e3-4e5dd4befc15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:38.247 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[e8987c8c-2ae7-4e59-ba4a-f910925fa6a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:38.257 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe2fee7-b061-4344-a18e-c492f0a33bff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 
'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1194769, 'reachable_time': 32421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 
'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310888, 'error': None, 'target': 'ovnmeta-55691124-ab57-4829-87d9-12148e1fa008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:38.258 161946 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-55691124-ab57-4829-87d9-12148e1fa008 deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Feb 23 04:55:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:38.259 161946 DEBUG oslo.privsep.daemon [-] privsep: reply[b88d145e-07e4-4c31-9178-30b6d7c622e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:38 localhost dnsmasq[310743]: exiting on receipt of SIGTERM Feb 23 04:55:38 localhost podman[310890]: 2026-02-23 09:55:38.30966501 +0000 UTC m=+0.036158647 container kill 575c629dc3943c926a1585dbc55387c874c888ce628788c0bd6519e608de5ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19d81a54-e14d-41c5-8e99-c5c9107d88df, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:55:38 localhost systemd[1]: libpod-575c629dc3943c926a1585dbc55387c874c888ce628788c0bd6519e608de5ac0.scope: Deactivated successfully. 
Feb 23 04:55:38 localhost podman[310905]: 2026-02-23 09:55:38.349667652 +0000 UTC m=+0.032481524 container died 575c629dc3943c926a1585dbc55387c874c888ce628788c0bd6519e608de5ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19d81a54-e14d-41c5-8e99-c5c9107d88df, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 04:55:38 localhost podman[310905]: 2026-02-23 09:55:38.376634277 +0000 UTC m=+0.059448139 container cleanup 575c629dc3943c926a1585dbc55387c874c888ce628788c0bd6519e608de5ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19d81a54-e14d-41c5-8e99-c5c9107d88df, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:55:38 localhost systemd[1]: libpod-conmon-575c629dc3943c926a1585dbc55387c874c888ce628788c0bd6519e608de5ac0.scope: Deactivated successfully. 
Feb 23 04:55:38 localhost podman[310912]: 2026-02-23 09:55:38.395585836 +0000 UTC m=+0.061011306 container remove 575c629dc3943c926a1585dbc55387c874c888ce628788c0bd6519e608de5ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19d81a54-e14d-41c5-8e99-c5c9107d88df, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216) Feb 23 04:55:38 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:38.727 2 INFO neutron.agent.securitygroups_rpc [req-2c9e84fe-f5f5-4169-b610-b000c50ec955 req-5f426135-7d9d-4897-a8f1-4e578256ef9c b2cb9c14c50346658af8c86574d3a360 91db788359a945be921785f05bf8c883 - - default default] Security group rule updated ['81d638c1-b5b2-4310-a6ad-c12f8ffa8182']#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.731 280325 INFO nova.virt.libvirt.driver [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Deleting instance files /var/lib/nova/instances/85a9c2c0-3a8d-44ce-954f-e106841e2068_del#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.732 280325 INFO nova.virt.libvirt.driver [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Deletion of /var/lib/nova/instances/85a9c2c0-3a8d-44ce-954f-e106841e2068_del complete#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.818 280325 INFO nova.compute.manager [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 
7f67087411544c55a9225236eb297b90 - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Took 1.03 seconds to destroy the instance on the hypervisor.#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.818 280325 DEBUG oslo.service.loopingcall [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.819 280325 DEBUG nova.compute.manager [-] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Feb 23 04:55:38 localhost nova_compute[280321]: 2026-02-23 09:55:38.819 280325 DEBUG nova.network.neutron [-] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Feb 23 04:55:38 localhost systemd[1]: var-lib-containers-storage-overlay-2b4d9f6941369350dcdb8dbf05ae9dd1699fb509a9202daeb8c62e1ec3e88b4e-merged.mount: Deactivated successfully. Feb 23 04:55:38 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-575c629dc3943c926a1585dbc55387c874c888ce628788c0bd6519e608de5ac0-userdata-shm.mount: Deactivated successfully. Feb 23 04:55:38 localhost systemd[1]: run-netns-qdhcp\x2d19d81a54\x2de14d\x2d41c5\x2d8e99\x2dc5c9107d88df.mount: Deactivated successfully. Feb 23 04:55:38 localhost systemd[1]: var-lib-containers-storage-overlay-c27e01a5fc85525d1df6e928254fe7dde10ae3df2ce82cf3611335bfd6b1a2ca-merged.mount: Deactivated successfully. 
Feb 23 04:55:38 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-75c61f3d30fc73e26e96340d3a6dc4f5ce809671e412f02b6b9f91882d164172-userdata-shm.mount: Deactivated successfully. Feb 23 04:55:38 localhost systemd[1]: run-netns-ovnmeta\x2d55691124\x2dab57\x2d4829\x2d87d9\x2d12148e1fa008.mount: Deactivated successfully. Feb 23 04:55:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v165: 177 pgs: 177 active+clean; 225 MiB data, 882 MiB used, 41 GiB / 42 GiB avail; 82 KiB/s rd, 30 KiB/s wr, 111 op/s Feb 23 04:55:39 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:39.718 2 INFO neutron.agent.securitygroups_rpc [req-2f94c104-6551-4f26-9e12-afea6d919b19 req-a08f01d8-ef48-4fc3-bf4b-c9f57e9a499f b2cb9c14c50346658af8c86574d3a360 91db788359a945be921785f05bf8c883 - - default default] Security group rule updated ['81fcdb25-34e2-4e01-b6c6-c95398c61f96']#033[00m Feb 23 04:55:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e112 e112: 6 total, 6 up, 6 in Feb 23 04:55:40 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:40.228 2 INFO neutron.agent.securitygroups_rpc [req-740eea5f-ed9c-433b-ad37-bd212433a1f7 req-c27bf517-3a13-4ed5-83bb-bf5a421e5259 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Security group member updated ['d03de417-eb2e-47e8-ad59-eae56add5dd4']#033[00m Feb 23 04:55:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:55:40 localhost nova_compute[280321]: 2026-02-23 09:55:40.655 280325 DEBUG nova.network.neutron [-] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 23 04:55:40 localhost nova_compute[280321]: 2026-02-23 09:55:40.681 280325 INFO nova.compute.manager [-] [instance: 
85a9c2c0-3a8d-44ce-954f-e106841e2068] Took 1.86 seconds to deallocate network for instance.#033[00m Feb 23 04:55:40 localhost nova_compute[280321]: 2026-02-23 09:55:40.736 280325 DEBUG oslo_concurrency.lockutils [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:55:40 localhost nova_compute[280321]: 2026-02-23 09:55:40.737 280325 DEBUG oslo_concurrency.lockutils [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:55:40 localhost nova_compute[280321]: 2026-02-23 09:55:40.791 280325 DEBUG nova.compute.manager [req-5d01ce3a-a1f6-43b8-be1d-9e3e9ce285ed req-d557a8a7-91c4-40c5-895b-9efe2521b2a2 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Received event network-vif-deleted-d9173d59-2a74-47ac-9a53-29ece647303c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:55:40 localhost nova_compute[280321]: 2026-02-23 09:55:40.793 280325 DEBUG oslo_concurrency.processutils [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:55:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:55:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:55:41 localhost podman[310940]: 2026-02-23 09:55:41.012022627 +0000 UTC m=+0.085767562 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 04:55:41 localhost podman[310940]: 2026-02-23 09:55:41.025823209 +0000 UTC m=+0.099568134 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:55:41 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:55:41 localhost podman[310938]: 2026-02-23 09:55:41.080385938 +0000 UTC m=+0.154397861 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:55:41 localhost podman[310938]: 2026-02-23 09:55:41.088848636 +0000 UTC m=+0.162860549 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:55:41 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:55:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v167: 177 pgs: 177 active+clean; 193 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 133 KiB/s rd, 35 KiB/s wr, 180 op/s Feb 23 04:55:41 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e113 e113: 6 total, 6 up, 6 in Feb 23 04:55:41 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:55:41 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2157903728' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:55:41 localhost nova_compute[280321]: 2026-02-23 09:55:41.277 280325 DEBUG oslo_concurrency.processutils [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:55:41 localhost nova_compute[280321]: 2026-02-23 09:55:41.285 280325 DEBUG nova.compute.provider_tree [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:55:41 localhost nova_compute[280321]: 2026-02-23 09:55:41.303 280325 DEBUG nova.scheduler.client.report [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:55:41 localhost nova_compute[280321]: 2026-02-23 09:55:41.343 280325 DEBUG oslo_concurrency.lockutils [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default 
default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:55:41 localhost nova_compute[280321]: 2026-02-23 09:55:41.383 280325 INFO nova.scheduler.client.report [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Deleted allocations for instance 85a9c2c0-3a8d-44ce-954f-e106841e2068#033[00m Feb 23 04:55:41 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:41.454 2 INFO neutron.agent.securitygroups_rpc [req-e8f689e1-cc88-4c2b-896f-8a7df5cfb707 req-464689d7-a67a-4f0a-8a5e-818033ca8861 b2cb9c14c50346658af8c86574d3a360 91db788359a945be921785f05bf8c883 - - default default] Security group rule updated ['dafd3ce0-31be-4a51-acc9-61744d386010']#033[00m Feb 23 04:55:41 localhost nova_compute[280321]: 2026-02-23 09:55:41.466 280325 DEBUG oslo_concurrency.lockutils [None req-740eea5f-ed9c-433b-ad37-bd212433a1f7 c2b38675f57640819bf191ad8152e7cb 7f67087411544c55a9225236eb297b90 - - default default] Lock "85a9c2c0-3a8d-44ce-954f-e106841e2068" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 3.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:55:41 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e114 e114: 6 total, 6 up, 6 in Feb 23 04:55:42 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:42.482 2 INFO neutron.agent.securitygroups_rpc [req-1b7ecdc5-863d-4ab9-b539-c81bbfea1261 req-49751f6e-ab71-482b-8a57-c1aca7f7d635 b2cb9c14c50346658af8c86574d3a360 91db788359a945be921785f05bf8c883 - - default default] Security group rule updated ['315ff60a-a295-4b8a-bcc8-fd8b624c828e']#033[00m Feb 23 04:55:42 localhost nova_compute[280321]: 2026-02-23 09:55:42.511 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:42 localhost podman[241086]: time="2026-02-23T09:55:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:55:42 localhost podman[241086]: @ - - [23/Feb/2026:09:55:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159540 "" "Go-http-client/1.1" Feb 23 04:55:42 localhost podman[241086]: @ - - [23/Feb/2026:09:55:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19235 "" "Go-http-client/1.1" Feb 23 04:55:42 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:42.802 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005626466.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:55:28Z, description=, device_id=66c5eac8-f6f4-40ae-b09f-54e200c103b8, device_owner=compute:nova, dns_assignment=[], dns_domain=, dns_name=tempest-livemigrationtest-server-1016549855, extra_dhcp_opts=[], fixed_ips=[], id=622abaa1-264f-4476-8fb5-acbc0c816b3f, ip_allocation=immediate, mac_address=fa:16:3e:77:96:92, name=tempest-parent-967314267, network_id=488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, port_security_enabled=True, project_id=2ac6a6009ea84eb99f60bd242e459002, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['917bfa8c-752a-4a55-9acc-5ce6144207b4'], standard_attr_id=1024, status=DOWN, tags=[], tenant_id=2ac6a6009ea84eb99f60bd242e459002, trunk_details=sub_ports=[], trunk_id=ed92871f-12d0-4d3c-b546-02c80c8c3d6e, updated_at=2026-02-23T09:55:42Z on network 488344bb-b2b1-4b3f-933b-1a9bfdff1d5c#033[00m Feb 23 04:55:42 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e115 e115: 6 total, 6 up, 6 in Feb 23 04:55:43 localhost 
dnsmasq[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/addn_hosts - 2 addresses Feb 23 04:55:43 localhost dnsmasq-dhcp[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/host Feb 23 04:55:43 localhost dnsmasq-dhcp[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/opts Feb 23 04:55:43 localhost podman[311011]: 2026-02-23 09:55:43.038385331 +0000 UTC m=+0.060804550 container kill d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 04:55:43 localhost nova_compute[280321]: 2026-02-23 09:55:43.050 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v171: 177 pgs: 177 active+clean; 161 MiB data, 753 MiB used, 41 GiB / 42 GiB avail; 191 KiB/s rd, 2.2 MiB/s wr, 278 op/s Feb 23 04:55:43 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:43.360 263679 INFO neutron.agent.dhcp.agent [None req-1d66c89c-ad43-488b-a6cd-f1ca5e2f24ed - - - - - -] DHCP configuration for ports {'622abaa1-264f-4476-8fb5-acbc0c816b3f'} is completed#033[00m Feb 23 04:55:44 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e116 e116: 6 total, 6 up, 6 in Feb 23 04:55:44 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:44.357 2 INFO neutron.agent.securitygroups_rpc [req-66b36902-2e90-44c7-97eb-062607295697 req-4da6358e-447d-4c62-9578-e6f0834d0e3a 
b2cb9c14c50346658af8c86574d3a360 91db788359a945be921785f05bf8c883 - - default default] Security group rule updated ['6e64f6d6-976d-4cdf-bc43-87f175a49821']#033[00m Feb 23 04:55:44 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:44.711 2 INFO neutron.agent.securitygroups_rpc [req-06a39d77-6c9e-4d1d-991d-0c622d1b8570 req-e582e36b-a1ce-42ef-aae5-7f3dd95ea0e7 b2cb9c14c50346658af8c86574d3a360 91db788359a945be921785f05bf8c883 - - default default] Security group rule updated ['6e64f6d6-976d-4cdf-bc43-87f175a49821']#033[00m Feb 23 04:55:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e117 e117: 6 total, 6 up, 6 in Feb 23 04:55:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v174: 177 pgs: 177 active+clean; 161 MiB data, 753 MiB used, 41 GiB / 42 GiB avail; 119 KiB/s rd, 2.2 MiB/s wr, 182 op/s Feb 23 04:55:45 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:45.353 2 INFO neutron.agent.securitygroups_rpc [req-b47c5ca0-09bc-4b25-97b3-180f5e0f18ac req-fd279825-b764-4bf6-b8d3-f7613a7508ce b2cb9c14c50346658af8c86574d3a360 91db788359a945be921785f05bf8c883 - - default default] Security group rule updated ['6e64f6d6-976d-4cdf-bc43-87f175a49821']#033[00m Feb 23 04:55:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:55:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 04:55:46 localhost podman[311033]: 2026-02-23 09:55:46.005987686 +0000 UTC m=+0.080781130 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:55:46 localhost podman[311033]: 2026-02-23 09:55:46.020864571 +0000 UTC m=+0.095658005 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 23 04:55:46 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:55:46 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e118 e118: 6 total, 6 up, 6 in Feb 23 04:55:46 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e119 e119: 6 total, 6 up, 6 in Feb 23 04:55:47 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:47.028 2 INFO neutron.agent.securitygroups_rpc [None req-f20a6c5c-ae1e-41e5-8a0b-e142fc8dd656 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:55:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v177: 177 pgs: 177 active+clean; 192 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 128 KiB/s rd, 3.2 MiB/s wr, 188 op/s Feb 23 04:55:47 localhost nova_compute[280321]: 2026-02-23 09:55:47.545 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:47 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:47.646 263679 INFO neutron.agent.linux.ip_lib [None req-0e6f00e0-24f6-40c6-8e20-de23accd7a1c - - - - - -] Device tap157a1334-1d cannot be used as it has no MAC address#033[00m Feb 23 04:55:47 localhost nova_compute[280321]: 2026-02-23 09:55:47.672 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:47 localhost kernel: device tap157a1334-1d entered promiscuous mode Feb 23 04:55:47 localhost NetworkManager[5987]: [1771840547.6817] manager: (tap157a1334-1d): new Generic device (/org/freedesktop/NetworkManager/Devices/30) Feb 23 04:55:47 localhost 
ovn_controller[155966]: 2026-02-23T09:55:47Z|00118|binding|INFO|Claiming lport 157a1334-1dbe-4d4e-8527-c2eb20baf7f3 for this chassis. Feb 23 04:55:47 localhost ovn_controller[155966]: 2026-02-23T09:55:47Z|00119|binding|INFO|157a1334-1dbe-4d4e-8527-c2eb20baf7f3: Claiming unknown Feb 23 04:55:47 localhost systemd-udevd[311067]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:55:47 localhost nova_compute[280321]: 2026-02-23 09:55:47.684 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:47 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:47.695 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-e426fb3b-bff4-459e-af68-5cc1456aba74', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e426fb3b-bff4-459e-af68-5cc1456aba74', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c50f2ae0a2f4cdd8225f6794547909b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4958bc92-f5b2-48de-b547-bd27c2426718, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=157a1334-1dbe-4d4e-8527-c2eb20baf7f3) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:47 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:47.697 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 157a1334-1dbe-4d4e-8527-c2eb20baf7f3 in datapath e426fb3b-bff4-459e-af68-5cc1456aba74 bound to our chassis#033[00m Feb 23 04:55:47 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:47.700 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e426fb3b-bff4-459e-af68-5cc1456aba74 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:55:47 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:47.701 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[a1535a75-4ce8-4e80-b599-1bda19e584a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:47 localhost journal[229268]: ethtool ioctl error on tap157a1334-1d: No such device Feb 23 04:55:47 localhost ovn_controller[155966]: 2026-02-23T09:55:47Z|00120|binding|INFO|Setting lport 157a1334-1dbe-4d4e-8527-c2eb20baf7f3 ovn-installed in OVS Feb 23 04:55:47 localhost ovn_controller[155966]: 2026-02-23T09:55:47Z|00121|binding|INFO|Setting lport 157a1334-1dbe-4d4e-8527-c2eb20baf7f3 up in Southbound Feb 23 04:55:47 localhost nova_compute[280321]: 2026-02-23 09:55:47.727 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:47 localhost journal[229268]: ethtool ioctl error on tap157a1334-1d: No such device Feb 23 04:55:47 localhost journal[229268]: ethtool ioctl error on tap157a1334-1d: No such device Feb 23 04:55:47 localhost journal[229268]: ethtool ioctl error on tap157a1334-1d: No such device Feb 23 04:55:47 localhost journal[229268]: ethtool ioctl error on tap157a1334-1d: No such device Feb 23 
04:55:47 localhost journal[229268]: ethtool ioctl error on tap157a1334-1d: No such device Feb 23 04:55:47 localhost journal[229268]: ethtool ioctl error on tap157a1334-1d: No such device Feb 23 04:55:47 localhost journal[229268]: ethtool ioctl error on tap157a1334-1d: No such device Feb 23 04:55:47 localhost nova_compute[280321]: 2026-02-23 09:55:47.768 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:47 localhost nova_compute[280321]: 2026-02-23 09:55:47.801 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:48 localhost nova_compute[280321]: 2026-02-23 09:55:48.052 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:48 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e120 e120: 6 total, 6 up, 6 in Feb 23 04:55:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:48.313 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:55:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:48.313 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:55:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:48.314 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:55:48 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:48.504 2 INFO neutron.agent.securitygroups_rpc [None req-1ad0cbd3-986c-404f-b323-25b4bb76d296 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:55:48 localhost podman[311138]: Feb 23 04:55:48 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:48.674 2 INFO neutron.agent.securitygroups_rpc [None req-1ad0cbd3-986c-404f-b323-25b4bb76d296 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:55:48 localhost podman[311138]: 2026-02-23 09:55:48.681798382 +0000 UTC m=+0.147784869 container create cf1a25cf3b83d4d4d500aa59f930f64185bc99d530e166b19d149671f202f5c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e426fb3b-bff4-459e-af68-5cc1456aba74, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, io.buildah.version=1.43.0) Feb 23 04:55:48 localhost podman[311138]: 2026-02-23 09:55:48.583512688 +0000 UTC m=+0.049499205 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:55:48 localhost systemd[1]: Started libpod-conmon-cf1a25cf3b83d4d4d500aa59f930f64185bc99d530e166b19d149671f202f5c3.scope. Feb 23 04:55:48 localhost systemd[1]: tmp-crun.Ri36C1.mount: Deactivated successfully. 
Feb 23 04:55:48 localhost dnsmasq[310337]: read /var/lib/neutron/dhcp/52f0a190-d529-4e0c-bebe-cbd94ee3d830/addn_hosts - 0 addresses Feb 23 04:55:48 localhost dnsmasq-dhcp[310337]: read /var/lib/neutron/dhcp/52f0a190-d529-4e0c-bebe-cbd94ee3d830/host Feb 23 04:55:48 localhost dnsmasq-dhcp[310337]: read /var/lib/neutron/dhcp/52f0a190-d529-4e0c-bebe-cbd94ee3d830/opts Feb 23 04:55:48 localhost podman[311166]: 2026-02-23 09:55:48.738391002 +0000 UTC m=+0.073514448 container kill 2c8464799a0e124403dc0cd449c6da2af58a8794f0e9bccc3e4a29ad0117d1cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-52f0a190-d529-4e0c-bebe-cbd94ee3d830, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0) Feb 23 04:55:48 localhost systemd[1]: Started libcrun container. 
Feb 23 04:55:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b071f659c9d8b31cd8149751ededc5cddcc39d7e659c4a8f5ba707941b0d36/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:55:48 localhost podman[311138]: 2026-02-23 09:55:48.767942616 +0000 UTC m=+0.233929063 container init cf1a25cf3b83d4d4d500aa59f930f64185bc99d530e166b19d149671f202f5c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e426fb3b-bff4-459e-af68-5cc1456aba74, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2) Feb 23 04:55:48 localhost podman[311138]: 2026-02-23 09:55:48.77625451 +0000 UTC m=+0.242240957 container start cf1a25cf3b83d4d4d500aa59f930f64185bc99d530e166b19d149671f202f5c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e426fb3b-bff4-459e-af68-5cc1456aba74, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 23 04:55:48 localhost dnsmasq[311185]: started, version 2.85 cachesize 150 Feb 23 04:55:48 localhost dnsmasq[311185]: DNS service limited to local subnets Feb 23 04:55:48 localhost dnsmasq[311185]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:55:48 localhost dnsmasq[311185]: warning: no upstream servers 
configured Feb 23 04:55:48 localhost dnsmasq-dhcp[311185]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 23 04:55:48 localhost dnsmasq[311185]: read /var/lib/neutron/dhcp/e426fb3b-bff4-459e-af68-5cc1456aba74/addn_hosts - 0 addresses Feb 23 04:55:48 localhost dnsmasq-dhcp[311185]: read /var/lib/neutron/dhcp/e426fb3b-bff4-459e-af68-5cc1456aba74/host Feb 23 04:55:48 localhost dnsmasq-dhcp[311185]: read /var/lib/neutron/dhcp/e426fb3b-bff4-459e-af68-5cc1456aba74/opts Feb 23 04:55:48 localhost nova_compute[280321]: 2026-02-23 09:55:48.991 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:48 localhost kernel: device tap4b58d478-6c left promiscuous mode Feb 23 04:55:48 localhost ovn_controller[155966]: 2026-02-23T09:55:48Z|00122|binding|INFO|Releasing lport 4b58d478-6cb3-4ffe-b17c-87fc95ae13e5 from this chassis (sb_readonly=0) Feb 23 04:55:48 localhost ovn_controller[155966]: 2026-02-23T09:55:48Z|00123|binding|INFO|Setting lport 4b58d478-6cb3-4ffe-b17c-87fc95ae13e5 down in Southbound Feb 23 04:55:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:49.000 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-52f0a190-d529-4e0c-bebe-cbd94ee3d830', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52f0a190-d529-4e0c-bebe-cbd94ee3d830', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 
'91db788359a945be921785f05bf8c883', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83979536-9c99-48bc-800a-bfe92bf16112, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4b58d478-6cb3-4ffe-b17c-87fc95ae13e5) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:49.002 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 4b58d478-6cb3-4ffe-b17c-87fc95ae13e5 in datapath 52f0a190-d529-4e0c-bebe-cbd94ee3d830 unbound from our chassis#033[00m Feb 23 04:55:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:49.006 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52f0a190-d529-4e0c-bebe-cbd94ee3d830, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:55:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:49.007 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[68445459-4d3a-4239-bb60-1fe8bbe968d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:49 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:49.014 263679 INFO neutron.agent.dhcp.agent [None req-dca58292-50a6-45a8-be99-8c78cba4592f - - - - - -] DHCP configuration for ports {'53b4481a-7b5c-4787-99a3-806906d7f872'} is completed#033[00m Feb 23 04:55:49 localhost nova_compute[280321]: 2026-02-23 09:55:49.018 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:49 localhost 
ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v179: 177 pgs: 177 active+clean; 192 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 125 KiB/s rd, 3.1 MiB/s wr, 184 op/s Feb 23 04:55:49 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e121 e121: 6 total, 6 up, 6 in Feb 23 04:55:49 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:49.286 2 INFO neutron.agent.securitygroups_rpc [None req-2c61be2a-9b0c-4b49-b66c-1a782048b54f 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:55:49 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:49.929 263679 INFO neutron.agent.linux.ip_lib [None req-e14bcfa9-b09c-4523-9490-65ab0a4c6780 - - - - - -] Device tap58b57b0f-01 cannot be used as it has no MAC address#033[00m Feb 23 04:55:49 localhost nova_compute[280321]: 2026-02-23 09:55:49.951 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:49 localhost kernel: device tap58b57b0f-01 entered promiscuous mode Feb 23 04:55:49 localhost NetworkManager[5987]: [1771840549.9590] manager: (tap58b57b0f-01): new Generic device (/org/freedesktop/NetworkManager/Devices/31) Feb 23 04:55:49 localhost ovn_controller[155966]: 2026-02-23T09:55:49Z|00124|binding|INFO|Claiming lport 58b57b0f-01e4-4ee4-bd2b-e2825522a446 for this chassis. 
Feb 23 04:55:49 localhost ovn_controller[155966]: 2026-02-23T09:55:49Z|00125|binding|INFO|58b57b0f-01e4-4ee4-bd2b-e2825522a446: Claiming unknown Feb 23 04:55:49 localhost nova_compute[280321]: 2026-02-23 09:55:49.961 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:49.971 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-e1b10770-7724-41e6-ac5d-3e85f8e79115', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e1b10770-7724-41e6-ac5d-3e85f8e79115', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c50f2ae0a2f4cdd8225f6794547909b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=068ac30f-bf89-4857-ba59-79e68ef5b72f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=58b57b0f-01e4-4ee4-bd2b-e2825522a446) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:49.973 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 58b57b0f-01e4-4ee4-bd2b-e2825522a446 in datapath e1b10770-7724-41e6-ac5d-3e85f8e79115 bound 
to our chassis#033[00m Feb 23 04:55:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:49.975 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e1b10770-7724-41e6-ac5d-3e85f8e79115 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:55:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:49.976 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[9d28e0f6-eeb7-46e3-8b21-6e3d023b072e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:49 localhost ovn_controller[155966]: 2026-02-23T09:55:49Z|00126|binding|INFO|Setting lport 58b57b0f-01e4-4ee4-bd2b-e2825522a446 ovn-installed in OVS Feb 23 04:55:49 localhost ovn_controller[155966]: 2026-02-23T09:55:49Z|00127|binding|INFO|Setting lport 58b57b0f-01e4-4ee4-bd2b-e2825522a446 up in Southbound Feb 23 04:55:49 localhost nova_compute[280321]: 2026-02-23 09:55:49.998 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:50 localhost nova_compute[280321]: 2026-02-23 09:55:50.075 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e122 e122: 6 total, 6 up, 6 in Feb 23 04:55:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:55:50 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:50.431 2 INFO neutron.agent.securitygroups_rpc [None req-0bde3ec4-1f43-4d89-82d0-564202a4897c 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated 
['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:55:50 localhost nova_compute[280321]: 2026-02-23 09:55:50.601 280325 DEBUG nova.virt.libvirt.driver [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Creating tmpfile /var/lib/nova/instances/tmpyfndvbe3 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m Feb 23 04:55:50 localhost nova_compute[280321]: 2026-02-23 09:55:50.625 280325 DEBUG nova.compute.manager [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] destination check data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpyfndvbe3',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=,is_shared_block_storage=,is_shared_instance_path=,is_volume_backed=,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m Feb 23 04:55:50 localhost ovn_controller[155966]: 2026-02-23T09:55:50Z|00128|binding|INFO|Removing iface tap58b57b0f-01 ovn-installed in OVS Feb 23 04:55:50 localhost ovn_controller[155966]: 2026-02-23T09:55:50Z|00129|binding|INFO|Removing lport 58b57b0f-01e4-4ee4-bd2b-e2825522a446 ovn-installed in OVS Feb 23 04:55:50 localhost nova_compute[280321]: 2026-02-23 09:55:50.642 280325 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:50 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:50.642 161842 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 6724452a-7a72-4120-af00-c821481a8c41 with type ""#033[00m Feb 23 04:55:50 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:50.644 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-e1b10770-7724-41e6-ac5d-3e85f8e79115', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e1b10770-7724-41e6-ac5d-3e85f8e79115', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c50f2ae0a2f4cdd8225f6794547909b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=068ac30f-bf89-4857-ba59-79e68ef5b72f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=58b57b0f-01e4-4ee4-bd2b-e2825522a446) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:50 localhost nova_compute[280321]: 2026-02-23 09:55:50.646 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:50 localhost 
ovn_metadata_agent[161837]: 2026-02-23 09:55:50.648 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 58b57b0f-01e4-4ee4-bd2b-e2825522a446 in datapath e1b10770-7724-41e6-ac5d-3e85f8e79115 unbound from our chassis#033[00m Feb 23 04:55:50 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:50.650 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e1b10770-7724-41e6-ac5d-3e85f8e79115 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:55:50 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:50.651 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[559ec3ee-6974-451f-b8a9-fb5511906458]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:50 localhost podman[311256]: Feb 23 04:55:50 localhost podman[311256]: 2026-02-23 09:55:50.873754008 +0000 UTC m=+0.085263188 container create a9a28face0f2660d5c7f3489a89def1ee3609eca506960d7e6aafb7cef112508 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e1b10770-7724-41e6-ac5d-3e85f8e79115, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 04:55:50 localhost systemd[1]: Started libpod-conmon-a9a28face0f2660d5c7f3489a89def1ee3609eca506960d7e6aafb7cef112508.scope. Feb 23 04:55:50 localhost podman[311256]: 2026-02-23 09:55:50.833359143 +0000 UTC m=+0.044868353 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:55:50 localhost systemd[1]: tmp-crun.uiaomb.mount: Deactivated successfully. 
Feb 23 04:55:50 localhost nova_compute[280321]: 2026-02-23 09:55:50.949 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:50 localhost systemd[1]: Started libcrun container. Feb 23 04:55:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d15fa6c45487eaf9dfae15a6d4b08f12c20a97cdfbcdce5d6d21f89d71016014/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:55:50 localhost podman[311256]: 2026-02-23 09:55:50.98180687 +0000 UTC m=+0.193316050 container init a9a28face0f2660d5c7f3489a89def1ee3609eca506960d7e6aafb7cef112508 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e1b10770-7724-41e6-ac5d-3e85f8e79115, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:55:50 localhost podman[311256]: 2026-02-23 09:55:50.990822296 +0000 UTC m=+0.202331466 container start a9a28face0f2660d5c7f3489a89def1ee3609eca506960d7e6aafb7cef112508 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e1b10770-7724-41e6-ac5d-3e85f8e79115, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216) Feb 23 04:55:50 localhost dnsmasq[311275]: started, version 2.85 cachesize 150 Feb 23 04:55:50 localhost dnsmasq[311275]: DNS service limited to local subnets Feb 23 
04:55:50 localhost dnsmasq[311275]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:55:50 localhost dnsmasq[311275]: warning: no upstream servers configured Feb 23 04:55:50 localhost dnsmasq-dhcp[311275]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 23 04:55:51 localhost dnsmasq[311275]: read /var/lib/neutron/dhcp/e1b10770-7724-41e6-ac5d-3e85f8e79115/addn_hosts - 0 addresses Feb 23 04:55:51 localhost dnsmasq-dhcp[311275]: read /var/lib/neutron/dhcp/e1b10770-7724-41e6-ac5d-3e85f8e79115/host Feb 23 04:55:51 localhost dnsmasq-dhcp[311275]: read /var/lib/neutron/dhcp/e1b10770-7724-41e6-ac5d-3e85f8e79115/opts Feb 23 04:55:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v182: 177 pgs: 177 active+clean; 192 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 38 KiB/s wr, 172 op/s Feb 23 04:55:51 localhost nova_compute[280321]: 2026-02-23 09:55:51.147 280325 DEBUG nova.compute.manager [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpyfndvbe3',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='66c5eac8-f6f4-40ae-b09f-54e200c103b8',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m Feb 
23 04:55:51 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:51.154 263679 INFO neutron.agent.dhcp.agent [None req-9625a095-e731-4569-bb03-c5ce46bb4a5c - - - - - -] DHCP configuration for ports {'9284bd63-7c4c-42e3-aaca-1cab599dab67'} is completed#033[00m Feb 23 04:55:51 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e123 e123: 6 total, 6 up, 6 in Feb 23 04:55:51 localhost nova_compute[280321]: 2026-02-23 09:55:51.180 280325 DEBUG oslo_concurrency.lockutils [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] Acquiring lock "refresh_cache-66c5eac8-f6f4-40ae-b09f-54e200c103b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:55:51 localhost nova_compute[280321]: 2026-02-23 09:55:51.181 280325 DEBUG oslo_concurrency.lockutils [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] Acquired lock "refresh_cache-66c5eac8-f6f4-40ae-b09f-54e200c103b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:55:51 localhost nova_compute[280321]: 2026-02-23 09:55:51.183 280325 DEBUG nova.network.neutron [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 23 04:55:51 localhost dnsmasq[311275]: exiting on receipt of SIGTERM Feb 23 04:55:51 localhost podman[311292]: 2026-02-23 09:55:51.244589143 +0000 UTC m=+0.083694779 container kill a9a28face0f2660d5c7f3489a89def1ee3609eca506960d7e6aafb7cef112508 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e1b10770-7724-41e6-ac5d-3e85f8e79115, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 23 04:55:51 localhost systemd[1]: libpod-a9a28face0f2660d5c7f3489a89def1ee3609eca506960d7e6aafb7cef112508.scope: Deactivated successfully. Feb 23 04:55:51 localhost podman[311307]: 2026-02-23 09:55:51.309563639 +0000 UTC m=+0.045628615 container died a9a28face0f2660d5c7f3489a89def1ee3609eca506960d7e6aafb7cef112508 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e1b10770-7724-41e6-ac5d-3e85f8e79115, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:55:51 localhost podman[311307]: 2026-02-23 09:55:51.357172375 +0000 UTC m=+0.093237311 container remove a9a28face0f2660d5c7f3489a89def1ee3609eca506960d7e6aafb7cef112508 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e1b10770-7724-41e6-ac5d-3e85f8e79115, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0) Feb 23 04:55:51 localhost systemd[1]: libpod-conmon-a9a28face0f2660d5c7f3489a89def1ee3609eca506960d7e6aafb7cef112508.scope: Deactivated successfully. 
Feb 23 04:55:51 localhost nova_compute[280321]: 2026-02-23 09:55:51.415 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:51 localhost kernel: device tap58b57b0f-01 left promiscuous mode Feb 23 04:55:51 localhost nova_compute[280321]: 2026-02-23 09:55:51.427 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:51 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:51.489 263679 INFO neutron.agent.dhcp.agent [None req-2f5dd6b6-98fc-45ce-ae67-95915bbe500f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:55:51 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:51.490 263679 INFO neutron.agent.dhcp.agent [None req-2f5dd6b6-98fc-45ce-ae67-95915bbe500f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:55:51 localhost systemd[1]: var-lib-containers-storage-overlay-d15fa6c45487eaf9dfae15a6d4b08f12c20a97cdfbcdce5d6d21f89d71016014-merged.mount: Deactivated successfully. Feb 23 04:55:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9a28face0f2660d5c7f3489a89def1ee3609eca506960d7e6aafb7cef112508-userdata-shm.mount: Deactivated successfully. Feb 23 04:55:51 localhost systemd[1]: run-netns-qdhcp\x2de1b10770\x2d7724\x2d41e6\x2dac5d\x2d3e85f8e79115.mount: Deactivated successfully. 
Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.248 280325 DEBUG nova.network.neutron [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Updating instance_info_cache with network_info: [{"id": "622abaa1-264f-4476-8fb5-acbc0c816b3f", "address": "fa:16:3e:77:96:92", "network": {"id": "488344bb-b2b1-4b3f-933b-1a9bfdff1d5c", "bridge": "br-int", "label": "tempest-LiveMigrationTest-664573168-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "2ac6a6009ea84eb99f60bd242e459002", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap622abaa1-26", "ovs_interfaceid": "622abaa1-264f-4476-8fb5-acbc0c816b3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.284 280325 DEBUG oslo_concurrency.lockutils [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] Releasing lock "refresh_cache-66c5eac8-f6f4-40ae-b09f-54e200c103b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.286 280325 DEBUG 
nova.virt.libvirt.driver [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpyfndvbe3',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='66c5eac8-f6f4-40ae-b09f-54e200c103b8',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.287 280325 DEBUG nova.virt.libvirt.driver [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Creating instance directory: /var/lib/nova/instances/66c5eac8-f6f4-40ae-b09f-54e200c103b8 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.287 280325 DEBUG nova.virt.libvirt.driver [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Ensure instance console log exists: /var/lib/nova/instances/66c5eac8-f6f4-40ae-b09f-54e200c103b8/console.log _ensure_console_log_for_instance 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.289 280325 DEBUG nova.virt.libvirt.driver [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.290 280325 DEBUG nova.virt.libvirt.vif [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-23T09:55:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1016549855',display_name='tempest-LiveMigrationTest-server-1016549855',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(6),hidden=False,host='np0005626466.localdomain',hostname='tempest-livemigrationtest-server-1016549855',id=11,image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',info_cache=InstanceInfoCache,instance_type_id=6,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2026-02-23T09:55:47Z,launched_on='np0005626466.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005626466.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2ac6a6009ea84eb99f60bd242e459002',ramdisk_id='',reservation_id='r-piujj5se',resourc
es=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1138379142',owner_user_name='tempest-LiveMigrationTest-1138379142-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2026-02-23T09:55:47Z,user_data=None,user_id='6d15d44765db469a9e04a32fb56dcff2',uuid=66c5eac8-f6f4-40ae-b09f-54e200c103b8,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "622abaa1-264f-4476-8fb5-acbc0c816b3f", "address": "fa:16:3e:77:96:92", "network": {"id": "488344bb-b2b1-4b3f-933b-1a9bfdff1d5c", "bridge": "br-int", "label": "tempest-LiveMigrationTest-664573168-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "2ac6a6009ea84eb99f60bd242e459002", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap622abaa1-26", "ovs_interfaceid": "622abaa1-264f-4476-8fb5-acbc0c816b3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.291 280325 DEBUG nova.network.os_vif_util [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] Converting VIF {"id": "622abaa1-264f-4476-8fb5-acbc0c816b3f", "address": "fa:16:3e:77:96:92", "network": {"id": "488344bb-b2b1-4b3f-933b-1a9bfdff1d5c", "bridge": "br-int", "label": "tempest-LiveMigrationTest-664573168-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "2ac6a6009ea84eb99f60bd242e459002", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap622abaa1-26", "ovs_interfaceid": "622abaa1-264f-4476-8fb5-acbc0c816b3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.292 280325 DEBUG nova.network.os_vif_util [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] Converted object 
VIFOpenVSwitch(active=False,address=fa:16:3e:77:96:92,bridge_name='br-int',has_traffic_filtering=True,id=622abaa1-264f-4476-8fb5-acbc0c816b3f,network=Network(488344bb-b2b1-4b3f-933b-1a9bfdff1d5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap622abaa1-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.293 280325 DEBUG os_vif [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:96:92,bridge_name='br-int',has_traffic_filtering=True,id=622abaa1-264f-4476-8fb5-acbc0c816b3f,network=Network(488344bb-b2b1-4b3f-933b-1a9bfdff1d5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap622abaa1-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.294 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.294 280325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.295 280325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.299 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.300 280325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap622abaa1-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.300 280325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap622abaa1-26, col_values=(('external_ids', {'iface-id': '622abaa1-264f-4476-8fb5-acbc0c816b3f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:96:92', 'vm-uuid': '66c5eac8-f6f4-40ae-b09f-54e200c103b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.302 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.304 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.308 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.309 280325 INFO os_vif [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] Successfully plugged vif 
VIFOpenVSwitch(active=False,address=fa:16:3e:77:96:92,bridge_name='br-int',has_traffic_filtering=True,id=622abaa1-264f-4476-8fb5-acbc0c816b3f,network=Network(488344bb-b2b1-4b3f-933b-1a9bfdff1d5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap622abaa1-26')#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.310 280325 DEBUG nova.virt.libvirt.driver [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.310 280325 DEBUG nova.compute.manager [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpyfndvbe3',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='66c5eac8-f6f4-40ae-b09f-54e200c103b8',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m Feb 23 04:55:52 localhost nova_compute[280321]: 2026-02-23 09:55:52.582 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m 
Feb 23 04:55:53 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e124 e124: 6 total, 6 up, 6 in Feb 23 04:55:53 localhost nova_compute[280321]: 2026-02-23 09:55:53.019 280325 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 23 04:55:53 localhost nova_compute[280321]: 2026-02-23 09:55:53.020 280325 INFO nova.compute.manager [-] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] VM Stopped (Lifecycle Event)#033[00m Feb 23 04:55:53 localhost nova_compute[280321]: 2026-02-23 09:55:53.059 280325 DEBUG nova.compute.manager [None req-12d805e4-5c1a-4789-b81b-13a1725bfaf3 - - - - - -] [instance: 85a9c2c0-3a8d-44ce-954f-e106841e2068] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:55:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v185: 177 pgs: 177 active+clean; 192 MiB data, 816 MiB used, 41 GiB / 42 GiB avail; 5.9 MiB/s rd, 51 KiB/s wr, 465 op/s Feb 23 04:55:53 localhost systemd[1]: tmp-crun.LNl9jr.mount: Deactivated successfully. Feb 23 04:55:53 localhost podman[311353]: 2026-02-23 09:55:53.470995472 +0000 UTC m=+0.066972919 container kill 2c8464799a0e124403dc0cd449c6da2af58a8794f0e9bccc3e4a29ad0117d1cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-52f0a190-d529-4e0c-bebe-cbd94ee3d830, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216) Feb 23 04:55:53 localhost dnsmasq[310337]: exiting on receipt of SIGTERM Feb 23 04:55:53 localhost systemd[1]: libpod-2c8464799a0e124403dc0cd449c6da2af58a8794f0e9bccc3e4a29ad0117d1cf.scope: Deactivated successfully. 
Feb 23 04:55:53 localhost podman[311365]: 2026-02-23 09:55:53.540527247 +0000 UTC m=+0.057382565 container died 2c8464799a0e124403dc0cd449c6da2af58a8794f0e9bccc3e4a29ad0117d1cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-52f0a190-d529-4e0c-bebe-cbd94ee3d830, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 23 04:55:53 localhost podman[311365]: 2026-02-23 09:55:53.570010768 +0000 UTC m=+0.086866036 container cleanup 2c8464799a0e124403dc0cd449c6da2af58a8794f0e9bccc3e4a29ad0117d1cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-52f0a190-d529-4e0c-bebe-cbd94ee3d830, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216) Feb 23 04:55:53 localhost systemd[1]: libpod-conmon-2c8464799a0e124403dc0cd449c6da2af58a8794f0e9bccc3e4a29ad0117d1cf.scope: Deactivated successfully. 
Feb 23 04:55:53 localhost podman[311367]: 2026-02-23 09:55:53.611711123 +0000 UTC m=+0.119621628 container remove 2c8464799a0e124403dc0cd449c6da2af58a8794f0e9bccc3e4a29ad0117d1cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-52f0a190-d529-4e0c-bebe-cbd94ee3d830, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 23 04:55:53 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:53.641 263679 INFO neutron.agent.dhcp.agent [None req-9c312802-fc9c-4269-8642-06a88aac92b6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:55:53 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:53.934 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:55:54 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e125 e125: 6 total, 6 up, 6 in Feb 23 04:55:54 localhost systemd[1]: var-lib-containers-storage-overlay-8e298a442e1eb8ea3faa57fee9fa884fbd6c3a8cdc87830c883513c9529df3c4-merged.mount: Deactivated successfully. Feb 23 04:55:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c8464799a0e124403dc0cd449c6da2af58a8794f0e9bccc3e4a29ad0117d1cf-userdata-shm.mount: Deactivated successfully. Feb 23 04:55:54 localhost systemd[1]: run-netns-qdhcp\x2d52f0a190\x2dd529\x2d4e0c\x2dbebe\x2dcbd94ee3d830.mount: Deactivated successfully. 
Feb 23 04:55:54 localhost nova_compute[280321]: 2026-02-23 09:55:54.588 280325 DEBUG nova.network.neutron [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Port 622abaa1-264f-4476-8fb5-acbc0c816b3f updated with migration profile {'migrating_to': 'np0005626465.localdomain'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m Feb 23 04:55:54 localhost nova_compute[280321]: 2026-02-23 09:55:54.590 280325 DEBUG nova.compute.manager [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpyfndvbe3',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='66c5eac8-f6f4-40ae-b09f-54e200c103b8',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m Feb 23 04:55:54 localhost sshd[311392]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:55:54 localhost systemd[1]: Created slice User Slice of UID 42436. Feb 23 04:55:54 localhost systemd[1]: Starting User Runtime Directory /run/user/42436... Feb 23 04:55:54 localhost systemd-logind[759]: New session 71 of user nova. Feb 23 04:55:54 localhost systemd[1]: Finished User Runtime Directory /run/user/42436. 
Feb 23 04:55:54 localhost systemd[1]: Starting User Manager for UID 42436... Feb 23 04:55:55 localhost systemd[311396]: Queued start job for default target Main User Target. Feb 23 04:55:55 localhost systemd[311396]: Created slice User Application Slice. Feb 23 04:55:55 localhost systemd[311396]: Started Mark boot as successful after the user session has run 2 minutes. Feb 23 04:55:55 localhost systemd[311396]: Started Daily Cleanup of User's Temporary Directories. Feb 23 04:55:55 localhost systemd[311396]: Reached target Paths. Feb 23 04:55:55 localhost systemd[311396]: Reached target Timers. Feb 23 04:55:55 localhost systemd[311396]: Starting D-Bus User Message Bus Socket... Feb 23 04:55:55 localhost systemd[311396]: Starting Create User's Volatile Files and Directories... Feb 23 04:55:55 localhost systemd[311396]: Listening on D-Bus User Message Bus Socket. Feb 23 04:55:55 localhost systemd[311396]: Reached target Sockets. Feb 23 04:55:55 localhost systemd[311396]: Finished Create User's Volatile Files and Directories. Feb 23 04:55:55 localhost systemd[311396]: Reached target Basic System. Feb 23 04:55:55 localhost systemd[311396]: Reached target Main User Target. Feb 23 04:55:55 localhost systemd[311396]: Startup finished in 155ms. Feb 23 04:55:55 localhost systemd[1]: Started User Manager for UID 42436. Feb 23 04:55:55 localhost systemd[1]: Started Session 71 of User nova. Feb 23 04:55:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v187: 177 pgs: 177 active+clean; 192 MiB data, 816 MiB used, 41 GiB / 42 GiB avail; 4.8 MiB/s rd, 41 KiB/s wr, 374 op/s Feb 23 04:55:55 localhost kernel: device tap622abaa1-26 entered promiscuous mode Feb 23 04:55:55 localhost NetworkManager[5987]: [1771840555.2538] manager: (tap622abaa1-26): new Tun device (/org/freedesktop/NetworkManager/Devices/32) Feb 23 04:55:55 localhost systemd-udevd[311426]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 04:55:55 localhost NetworkManager[5987]: [1771840555.2920] device (tap622abaa1-26): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Feb 23 04:55:55 localhost NetworkManager[5987]: [1771840555.2927] device (tap622abaa1-26): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Feb 23 04:55:55 localhost ovn_controller[155966]: 2026-02-23T09:55:55Z|00130|binding|INFO|Claiming lport 622abaa1-264f-4476-8fb5-acbc0c816b3f for this additional chassis. Feb 23 04:55:55 localhost ovn_controller[155966]: 2026-02-23T09:55:55Z|00131|binding|INFO|622abaa1-264f-4476-8fb5-acbc0c816b3f: Claiming fa:16:3e:77:96:92 10.100.0.13 Feb 23 04:55:55 localhost ovn_controller[155966]: 2026-02-23T09:55:55Z|00132|binding|INFO|Claiming lport adb72f75-7429-4a32-becc-fdcd334c7c29 for this additional chassis. Feb 23 04:55:55 localhost ovn_controller[155966]: 2026-02-23T09:55:55Z|00133|binding|INFO|adb72f75-7429-4a32-becc-fdcd334c7c29: Claiming fa:16:3e:c9:7a:38 19.80.0.148 Feb 23 04:55:55 localhost nova_compute[280321]: 2026-02-23 09:55:55.296 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:55 localhost systemd-machined[205673]: New machine qemu-4-instance-0000000b. 
Feb 23 04:55:55 localhost ovn_controller[155966]: 2026-02-23T09:55:55Z|00134|binding|INFO|Setting lport 622abaa1-264f-4476-8fb5-acbc0c816b3f ovn-installed in OVS Feb 23 04:55:55 localhost nova_compute[280321]: 2026-02-23 09:55:55.320 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:55 localhost nova_compute[280321]: 2026-02-23 09:55:55.322 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:55 localhost nova_compute[280321]: 2026-02-23 09:55:55.326 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:55 localhost systemd[1]: Started Virtual Machine qemu-4-instance-0000000b. Feb 23 04:55:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:55:55 localhost nova_compute[280321]: 2026-02-23 09:55:55.677 280325 DEBUG nova.virt.driver [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 23 04:55:55 localhost nova_compute[280321]: 2026-02-23 09:55:55.677 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] VM Started (Lifecycle Event)#033[00m Feb 23 04:55:55 localhost nova_compute[280321]: 2026-02-23 09:55:55.704 280325 DEBUG nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:55:56 localhost sshd[311481]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:55:56 
localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:56.399 2 INFO neutron.agent.securitygroups_rpc [None req-680f6195-a81a-4ace-93bb-ca63b4542035 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:55:56 localhost nova_compute[280321]: 2026-02-23 09:55:56.560 280325 DEBUG nova.virt.driver [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 23 04:55:56 localhost nova_compute[280321]: 2026-02-23 09:55:56.561 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] VM Resumed (Lifecycle Event)#033[00m Feb 23 04:55:56 localhost nova_compute[280321]: 2026-02-23 09:55:56.582 280325 DEBUG nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:55:56 localhost nova_compute[280321]: 2026-02-23 09:55:56.587 280325 DEBUG nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 23 04:55:56 localhost nova_compute[280321]: 2026-02-23 09:55:56.608 280325 INFO nova.compute.manager [None req-faccda75-42e7-43f3-b3c8-7502434211a2 - - - - - -] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] During the sync_power process the instance has moved from host np0005626466.localdomain to host np0005626465.localdomain#033[00m Feb 23 04:55:56 localhost 
systemd[1]: session-71.scope: Deactivated successfully. Feb 23 04:55:56 localhost systemd-logind[759]: Session 71 logged out. Waiting for processes to exit. Feb 23 04:55:56 localhost systemd-logind[759]: Removed session 71. Feb 23 04:55:56 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e126 e126: 6 total, 6 up, 6 in Feb 23 04:55:56 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:56.996 263679 INFO neutron.agent.linux.ip_lib [None req-d74ca09e-e8ef-4c74-ae5a-ea854cb02dfe - - - - - -] Device tap305d2986-ac cannot be used as it has no MAC address#033[00m Feb 23 04:55:57 localhost nova_compute[280321]: 2026-02-23 09:55:57.019 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:57 localhost kernel: device tap305d2986-ac entered promiscuous mode Feb 23 04:55:57 localhost NetworkManager[5987]: [1771840557.0272] manager: (tap305d2986-ac): new Generic device (/org/freedesktop/NetworkManager/Devices/33) Feb 23 04:55:57 localhost ovn_controller[155966]: 2026-02-23T09:55:57Z|00135|binding|INFO|Claiming lport 305d2986-acd5-4eef-9b52-da6f2b513f08 for this chassis. Feb 23 04:55:57 localhost systemd-udevd[311428]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 04:55:57 localhost ovn_controller[155966]: 2026-02-23T09:55:57Z|00136|binding|INFO|305d2986-acd5-4eef-9b52-da6f2b513f08: Claiming unknown Feb 23 04:55:57 localhost nova_compute[280321]: 2026-02-23 09:55:57.028 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.042 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-9f213d66-4180-4fdc-8523-7b43ed501993', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f213d66-4180-4fdc-8523-7b43ed501993', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c50f2ae0a2f4cdd8225f6794547909b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3898408-83bc-49d7-9482-caa3c7eb6163, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=305d2986-acd5-4eef-9b52-da6f2b513f08) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.043 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 305d2986-acd5-4eef-9b52-da6f2b513f08 in datapath 
9f213d66-4180-4fdc-8523-7b43ed501993 bound to our chassis#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.044 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9f213d66-4180-4fdc-8523-7b43ed501993 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.045 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef9e886-4371-4794-bd64-920d18d9ea92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:57 localhost ovn_controller[155966]: 2026-02-23T09:55:57Z|00137|binding|INFO|Setting lport 305d2986-acd5-4eef-9b52-da6f2b513f08 ovn-installed in OVS Feb 23 04:55:57 localhost ovn_controller[155966]: 2026-02-23T09:55:57Z|00138|binding|INFO|Setting lport 305d2986-acd5-4eef-9b52-da6f2b513f08 up in Southbound Feb 23 04:55:57 localhost nova_compute[280321]: 2026-02-23 09:55:57.061 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:57 localhost nova_compute[280321]: 2026-02-23 09:55:57.103 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v189: 177 pgs: 177 active+clean; 192 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 12 KiB/s wr, 285 op/s Feb 23 04:55:57 localhost nova_compute[280321]: 2026-02-23 09:55:57.303 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:57 localhost ovn_controller[155966]: 2026-02-23T09:55:57Z|00139|binding|INFO|Claiming lport 
622abaa1-264f-4476-8fb5-acbc0c816b3f for this chassis. Feb 23 04:55:57 localhost ovn_controller[155966]: 2026-02-23T09:55:57Z|00140|binding|INFO|622abaa1-264f-4476-8fb5-acbc0c816b3f: Claiming fa:16:3e:77:96:92 10.100.0.13 Feb 23 04:55:57 localhost ovn_controller[155966]: 2026-02-23T09:55:57Z|00141|binding|INFO|Claiming lport adb72f75-7429-4a32-becc-fdcd334c7c29 for this chassis. Feb 23 04:55:57 localhost ovn_controller[155966]: 2026-02-23T09:55:57Z|00142|binding|INFO|adb72f75-7429-4a32-becc-fdcd334c7c29: Claiming fa:16:3e:c9:7a:38 19.80.0.148 Feb 23 04:55:57 localhost ovn_controller[155966]: 2026-02-23T09:55:57Z|00143|binding|INFO|Setting lport 622abaa1-264f-4476-8fb5-acbc0c816b3f up in Southbound Feb 23 04:55:57 localhost ovn_controller[155966]: 2026-02-23T09:55:57Z|00144|binding|INFO|Setting lport adb72f75-7429-4a32-becc-fdcd334c7c29 up in Southbound Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.363 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:7a:38 19.80.0.148'], port_security=['fa:16:3e:c9:7a:38 19.80.0.148'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['622abaa1-264f-4476-8fb5-acbc0c816b3f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-2088636283', 'neutron:cidrs': '19.80.0.148/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9eb5761d-94a8-4798-bed6-a9e5cf6518df', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-2088636283', 'neutron:project_id': '2ac6a6009ea84eb99f60bd242e459002', 'neutron:revision_number': '3', 'neutron:security_group_ids': '917bfa8c-752a-4a55-9acc-5ce6144207b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 
'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=10f6c850-d5b0-4b68-95e9-d2dc898e2718, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=adb72f75-7429-4a32-becc-fdcd334c7c29) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:57 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:57.367 2 INFO neutron.agent.securitygroups_rpc [None req-e495f557-19e6-4684-b099-a0de07319228 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.368 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:96:92 10.100.0.13'], port_security=['fa:16:3e:77:96:92 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-967314267', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '66c5eac8-f6f4-40ae-b09f-54e200c103b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-967314267', 'neutron:project_id': '2ac6a6009ea84eb99f60bd242e459002', 'neutron:revision_number': '10', 'neutron:security_group_ids': '917bfa8c-752a-4a55-9acc-5ce6144207b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626466.localdomain'}, 
additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a82ecfbd-c671-4216-ac11-086490c80ba6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=622abaa1-264f-4476-8fb5-acbc0c816b3f) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.371 161842 INFO neutron.agent.ovn.metadata.agent [-] Port adb72f75-7429-4a32-becc-fdcd334c7c29 in datapath 9eb5761d-94a8-4798-bed6-a9e5cf6518df bound to our chassis#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.376 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Port 83869ba6-ac91-4ad2-af02-0a920414fa9f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.377 161842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9eb5761d-94a8-4798-bed6-a9e5cf6518df#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.391 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[8e347d17-8fd4-4c07-9567-ada6fe9476ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.393 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9eb5761d-91 in ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.395 306186 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9eb5761d-90 not found in namespace None get_link_id 
/usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.395 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa7d7b8-9e74-4072-af89-67aafdfab811]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.398 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b21093-3385-448f-b02d-4599d03394da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:57 localhost neutron_sriov_agent[256355]: 2026-02-23 09:55:57.414 2 WARNING neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [req-e1ec8450-1d64-4269-9270-68be44af74bd req-0b458214-ea63-4575-87b4-bc9a482881d4 3c44fb58efc2456dbeb36da67016dc5a d351b5d019cd497ab1d84160f10b653c - - default default] This port is not SRIOV, skip binding for port 622abaa1-264f-4476-8fb5-acbc0c816b3f.#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.415 161946 DEBUG oslo.privsep.daemon [-] privsep: reply[37276140-d6cd-4bc3-a863-aafd2bd3f6ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.441 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb2bca2-f305-4001-8531-9fd6740f00e1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.474 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbd823d-ba1b-42fd-86ca-254fb2edcd9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.481 306186 DEBUG oslo.privsep.daemon [-] privsep: 
reply[0530479c-1694-4d07-b09c-6db8a7dbf4ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:57 localhost NetworkManager[5987]: [1771840557.4847] manager: (tap9eb5761d-90): new Veth device (/org/freedesktop/NetworkManager/Devices/34) Feb 23 04:55:57 localhost nova_compute[280321]: 2026-02-23 09:55:57.519 280325 INFO nova.compute.manager [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Post operation of migration started#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.523 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[266dd6f7-c993-4e6c-a208-3c08cd86977c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.526 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[04b4e119-71f4-4ade-817e-6bef9d3469ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:57 localhost NetworkManager[5987]: [1771840557.5451] device (tap9eb5761d-90): carrier: link connected Feb 23 04:55:57 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap9eb5761d-91: link becomes ready Feb 23 04:55:57 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap9eb5761d-90: link becomes ready Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.549 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[6ae18576-3481-49dc-9c15-4af1f1ec2573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.564 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[ca745710-d5ba-4d47-a612-b44d935adacd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 
'tap9eb5761d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e4:6c:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 
0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1198869, 'reachable_time': 39754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 
'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311546, 'error': None, 'target': 'ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.580 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[e04f072e-63e6-4289-ae84-1c2322c7aa8e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:6caf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1198869, 'tstamp': 1198869}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311548, 'error': None, 'target': 'ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:57 localhost nova_compute[280321]: 2026-02-23 09:55:57.585 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.600 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[1480822c-5cf8-4416-99c2-a9669a19df65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9eb5761d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], 
['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e4:6c:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 
'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1198869, 'reachable_time': 39754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 
'flags': 0, 'sequence_number': 255, 'pid': 311549, 'error': None, 'target': 'ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:57 localhost nova_compute[280321]: 2026-02-23 09:55:57.614 280325 DEBUG oslo_concurrency.lockutils [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] Acquiring lock "refresh_cache-66c5eac8-f6f4-40ae-b09f-54e200c103b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 23 04:55:57 localhost nova_compute[280321]: 2026-02-23 09:55:57.614 280325 DEBUG oslo_concurrency.lockutils [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] Acquired lock "refresh_cache-66c5eac8-f6f4-40ae-b09f-54e200c103b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 23 04:55:57 localhost nova_compute[280321]: 2026-02-23 09:55:57.615 280325 DEBUG nova.network.neutron [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.631 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[bdebe1b8-65f7-46ea-8b76-3da460efe9e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.692 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[74fcfc1b-768c-48db-8683-84cbf2178eb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 
04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.694 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9eb5761d-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.695 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.696 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9eb5761d-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:57 localhost nova_compute[280321]: 2026-02-23 09:55:57.698 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:57 localhost kernel: device tap9eb5761d-90 entered promiscuous mode Feb 23 04:55:57 localhost nova_compute[280321]: 2026-02-23 09:55:57.700 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.701 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9eb5761d-90, col_values=(('external_ids', {'iface-id': '0a17f90d-6861-4307-9c4d-59966b232581'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:57 localhost nova_compute[280321]: 2026-02-23 09:55:57.703 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:57 localhost ovn_controller[155966]: 2026-02-23T09:55:57Z|00145|binding|INFO|Releasing lport 0a17f90d-6861-4307-9c4d-59966b232581 from this chassis (sb_readonly=0) Feb 23 04:55:57 localhost nova_compute[280321]: 2026-02-23 09:55:57.716 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.717 161842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9eb5761d-94a8-4798-bed6-a9e5cf6518df.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9eb5761d-94a8-4798-bed6-a9e5cf6518df.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.718 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[e541d2ed-49f2-4116-a546-678d9a950d80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.719 161842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: global Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: log /dev/log local0 debug Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: log-tag haproxy-metadata-proxy-9eb5761d-94a8-4798-bed6-a9e5cf6518df Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: user root Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: group root Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: maxconn 1024 Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: pidfile /var/lib/neutron/external/pids/9eb5761d-94a8-4798-bed6-a9e5cf6518df.pid.haproxy Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: daemon Feb 23 04:55:57 localhost 
ovn_metadata_agent[161837]: Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: defaults Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: log global Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: mode http Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: option httplog Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: option dontlognull Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: option http-server-close Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: option forwardfor Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: retries 3 Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: timeout http-request 30s Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: timeout connect 30s Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: timeout client 32s Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: timeout server 32s Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: timeout http-keep-alive 30s Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: listen listener Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: bind 169.254.169.254:80 Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: server metadata /var/lib/neutron/metadata_proxy Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: http-request add-header X-OVN-Network-ID 9eb5761d-94a8-4798-bed6-a9e5cf6518df Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Feb 23 04:55:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:57.720 161842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df', 'env', 'PROCESS_TAG=haproxy-9eb5761d-94a8-4798-bed6-a9e5cf6518df', 'haproxy', '-f', 
'/var/lib/neutron/ovn-metadata-proxy/9eb5761d-94a8-4798-bed6-a9e5cf6518df.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Feb 23 04:55:57 localhost podman[311581]: Feb 23 04:55:57 localhost podman[311581]: 2026-02-23 09:55:57.946463659 +0000 UTC m=+0.101534915 container create f69421ac60affbcb15cac6f8bef510fe6bfafd7ff86c46a8d39496e693b613f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9f213d66-4180-4fdc-8523-7b43ed501993, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS) Feb 23 04:55:58 localhost podman[311581]: 2026-02-23 09:55:57.900784533 +0000 UTC m=+0.055855819 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:55:58 localhost systemd[1]: Started libpod-conmon-f69421ac60affbcb15cac6f8bef510fe6bfafd7ff86c46a8d39496e693b613f3.scope. Feb 23 04:55:58 localhost systemd[1]: tmp-crun.euwiE1.mount: Deactivated successfully. Feb 23 04:55:58 localhost systemd[1]: Started libcrun container. 
Feb 23 04:55:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/946bfb4e9f6269a5f6ba4a4efb06dc094b654daba7ecf41e42ee69cbb6295f01/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:55:58 localhost podman[311581]: 2026-02-23 09:55:58.109210223 +0000 UTC m=+0.264281479 container init f69421ac60affbcb15cac6f8bef510fe6bfafd7ff86c46a8d39496e693b613f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9f213d66-4180-4fdc-8523-7b43ed501993, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2) Feb 23 04:55:58 localhost podman[311581]: 2026-02-23 09:55:58.1179065 +0000 UTC m=+0.272977766 container start f69421ac60affbcb15cac6f8bef510fe6bfafd7ff86c46a8d39496e693b613f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9f213d66-4180-4fdc-8523-7b43ed501993, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:55:58 localhost dnsmasq[311630]: started, version 2.85 cachesize 150 Feb 23 04:55:58 localhost dnsmasq[311630]: DNS service limited to local subnets Feb 23 04:55:58 localhost dnsmasq[311630]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:55:58 localhost dnsmasq[311630]: warning: no upstream servers 
configured Feb 23 04:55:58 localhost dnsmasq-dhcp[311630]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 23 04:55:58 localhost dnsmasq[311630]: read /var/lib/neutron/dhcp/9f213d66-4180-4fdc-8523-7b43ed501993/addn_hosts - 0 addresses Feb 23 04:55:58 localhost dnsmasq-dhcp[311630]: read /var/lib/neutron/dhcp/9f213d66-4180-4fdc-8523-7b43ed501993/host Feb 23 04:55:58 localhost dnsmasq-dhcp[311630]: read /var/lib/neutron/dhcp/9f213d66-4180-4fdc-8523-7b43ed501993/opts Feb 23 04:55:58 localhost podman[311620]: Feb 23 04:55:58 localhost podman[311620]: 2026-02-23 09:55:58.175393337 +0000 UTC m=+0.093577411 container create b7441fbc814b3da4274fb177248d6001ad45587555d7a35709abcaf4eed6f1bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0) Feb 23 04:55:58 localhost systemd[1]: Started libpod-conmon-b7441fbc814b3da4274fb177248d6001ad45587555d7a35709abcaf4eed6f1bc.scope. Feb 23 04:55:58 localhost systemd[1]: Started libcrun container. 
Feb 23 04:55:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59217ff29d8a8785d9ddf2ca400e1f0d79f1ada9c19a095456e7d12faf183037/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:55:58 localhost podman[311620]: 2026-02-23 09:55:58.228619664 +0000 UTC m=+0.146803738 container init b7441fbc814b3da4274fb177248d6001ad45587555d7a35709abcaf4eed6f1bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0) Feb 23 04:55:58 localhost podman[311620]: 2026-02-23 09:55:58.236476464 +0000 UTC m=+0.154660528 container start b7441fbc814b3da4274fb177248d6001ad45587555d7a35709abcaf4eed6f1bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:55:58 localhost podman[311620]: 2026-02-23 09:55:58.138976534 +0000 UTC m=+0.057160638 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 23 04:55:58 localhost nova_compute[280321]: 2026-02-23 09:55:58.244 280325 DEBUG nova.network.neutron [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - 
default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Updating instance_info_cache with network_info: [{"id": "622abaa1-264f-4476-8fb5-acbc0c816b3f", "address": "fa:16:3e:77:96:92", "network": {"id": "488344bb-b2b1-4b3f-933b-1a9bfdff1d5c", "bridge": "br-int", "label": "tempest-LiveMigrationTest-664573168-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "2ac6a6009ea84eb99f60bd242e459002", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap622abaa1-26", "ovs_interfaceid": "622abaa1-264f-4476-8fb5-acbc0c816b3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 23 04:55:58 localhost neutron-haproxy-ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df[311637]: [NOTICE] (311641) : New worker (311643) forked Feb 23 04:55:58 localhost neutron-haproxy-ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df[311637]: [NOTICE] (311641) : Loading success. 
Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.302 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 622abaa1-264f-4476-8fb5-acbc0c816b3f in datapath 488344bb-b2b1-4b3f-933b-1a9bfdff1d5c bound to our chassis#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.304 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Port 356a7bb3-8843-4d5f-9f1c-bae2fad82cfb IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.305 161842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 488344bb-b2b1-4b3f-933b-1a9bfdff1d5c#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.314 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[470a4946-11f0-409b-b643-c8ae0fc48c3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.315 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap488344bb-b1 in ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.317 306186 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap488344bb-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.317 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[1af6eb05-5f7d-4ae1-89fb-4da927e96772]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.317 306186 
DEBUG oslo.privsep.daemon [-] privsep: reply[fd5cf35a-9526-4bc9-b058-91703bb87347]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.328 161946 DEBUG oslo.privsep.daemon [-] privsep: reply[86e85160-7006-47e8-befe-a16f1b56ef7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.344 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[207849da-503f-4637-bf6b-99449d63b7d1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:58 localhost nova_compute[280321]: 2026-02-23 09:55:58.357 280325 DEBUG oslo_concurrency.lockutils [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] Releasing lock "refresh_cache-66c5eac8-f6f4-40ae-b09f-54e200c103b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.374 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[2a136cc8-f3bd-4b65-89a3-d9b7b0a96a80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.378 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[d5fc15ff-0ba2-4721-bf7d-7073581ae690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:58 localhost NetworkManager[5987]: [1771840558.3795] manager: (tap488344bb-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/35) Feb 23 04:55:58 localhost systemd-udevd[311541]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 04:55:58 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:58.384 263679 INFO neutron.agent.dhcp.agent [None req-a523f7a2-966d-47bf-a693-1cda598267fc - - - - - -] DHCP configuration for ports {'911dde62-e17d-4ecf-8154-b7974525954e'} is completed#033[00m Feb 23 04:55:58 localhost nova_compute[280321]: 2026-02-23 09:55:58.384 280325 DEBUG oslo_concurrency.lockutils [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:55:58 localhost nova_compute[280321]: 2026-02-23 09:55:58.385 280325 DEBUG oslo_concurrency.lockutils [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:55:58 localhost nova_compute[280321]: 2026-02-23 09:55:58.386 280325 DEBUG oslo_concurrency.lockutils [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:55:58 localhost nova_compute[280321]: 2026-02-23 09:55:58.399 280325 INFO nova.virt.libvirt.driver [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Sending announce-self command to QEMU monitor. 
Attempt 1 of 3#033[00m Feb 23 04:55:58 localhost journal[228928]: Domain id=4 name='instance-0000000b' uuid=66c5eac8-f6f4-40ae-b09f-54e200c103b8 is tainted: custom-monitor Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.408 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe53202-c0d8-45cc-8217-6eb6be892f13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.411 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[dd30f7d0-626d-40cf-a202-ceaa1caca8ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:58 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap488344bb-b0: link becomes ready Feb 23 04:55:58 localhost NetworkManager[5987]: [1771840558.4235] device (tap488344bb-b0): carrier: link connected Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.426 307322 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ab7191-0bc4-4ee7-9ca1-d3803f59b39d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.441 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[3a86cba1-f731-48fc-8b2a-f1612725b279]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap488344bb-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], 
['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:32:b3:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], 
['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1198956, 'reachable_time': 24558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311662, 'error': None, 'target': 'ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:58 localhost 
ovn_metadata_agent[161837]: 2026-02-23 09:55:58.454 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ed3bf1-978f-43b4-83c4-c2f1baad68a2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:b34f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1198956, 'tstamp': 1198956}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311663, 'error': None, 'target': 'ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.468 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0b8a4c-da59-42cb-862f-66055fe79280]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap488344bb-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:32:b3:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 
'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1198956, 'reachable_time': 24558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 
'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311664, 'error': None, 'target': 'ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.487 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[af820059-fb2c-4b45-9d04-fab7522c0501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.559 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[d74ddda1-cbdf-45e3-8947-cb871c205e22]: (4, None) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.561 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap488344bb-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.561 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.562 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap488344bb-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:58 localhost kernel: device tap488344bb-b0 entered promiscuous mode Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.570 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap488344bb-b0, col_values=(('external_ids', {'iface-id': 'b411e7e0-662a-45b2-a6f4-7c955d5b0de7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:55:58 localhost nova_compute[280321]: 2026-02-23 09:55:58.566 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:58 localhost ovn_controller[155966]: 2026-02-23T09:55:58Z|00146|binding|INFO|Releasing lport b411e7e0-662a-45b2-a6f4-7c955d5b0de7 from this chassis (sb_readonly=0) Feb 23 04:55:58 localhost nova_compute[280321]: 2026-02-23 09:55:58.575 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.577 161842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Feb 23 04:55:58 localhost nova_compute[280321]: 2026-02-23 09:55:58.579 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.581 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[3ff65db8-e948-42b2-bf49-008206ac3d64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.582 161842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: global Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: log /dev/log local0 debug Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: log-tag haproxy-metadata-proxy-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: user root Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: group root Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: maxconn 1024 Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: pidfile /var/lib/neutron/external/pids/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c.pid.haproxy Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: daemon Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: defaults Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: log global Feb 23 
04:55:58 localhost ovn_metadata_agent[161837]: mode http Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: option httplog Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: option dontlognull Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: option http-server-close Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: option forwardfor Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: retries 3 Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: timeout http-request 30s Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: timeout connect 30s Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: timeout client 32s Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: timeout server 32s Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: timeout http-keep-alive 30s Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: listen listener Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: bind 169.254.169.254:80 Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: server metadata /var/lib/neutron/metadata_proxy Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: http-request add-header X-OVN-Network-ID 488344bb-b2b1-4b3f-933b-1a9bfdff1d5c Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.584 161842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c', 'env', 'PROCESS_TAG=haproxy-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Feb 23 04:55:58 localhost 
dnsmasq[311630]: exiting on receipt of SIGTERM Feb 23 04:55:58 localhost podman[311689]: 2026-02-23 09:55:58.774728128 +0000 UTC m=+0.071863398 container kill f69421ac60affbcb15cac6f8bef510fe6bfafd7ff86c46a8d39496e693b613f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9f213d66-4180-4fdc-8523-7b43ed501993, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 23 04:55:58 localhost systemd[1]: libpod-f69421ac60affbcb15cac6f8bef510fe6bfafd7ff86c46a8d39496e693b613f3.scope: Deactivated successfully. Feb 23 04:55:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:55:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:55:58 localhost podman[311702]: 2026-02-23 09:55:58.917287526 +0000 UTC m=+0.124070034 container died f69421ac60affbcb15cac6f8bef510fe6bfafd7ff86c46a8d39496e693b613f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9f213d66-4180-4fdc-8523-7b43ed501993, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0) Feb 23 04:55:58 localhost podman[311710]: 2026-02-23 09:55:58.922566807 +0000 UTC m=+0.113566393 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 04:55:58 localhost podman[311717]: 2026-02-23 09:55:58.889908069 +0000 UTC m=+0.081162472 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_id=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.7, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 23 04:55:58 localhost podman[311702]: 2026-02-23 09:55:58.945623232 +0000 UTC m=+0.152405700 container cleanup f69421ac60affbcb15cac6f8bef510fe6bfafd7ff86c46a8d39496e693b613f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9f213d66-4180-4fdc-8523-7b43ed501993, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:55:58 localhost systemd[1]: libpod-conmon-f69421ac60affbcb15cac6f8bef510fe6bfafd7ff86c46a8d39496e693b613f3.scope: Deactivated successfully. Feb 23 04:55:58 localhost systemd[1]: var-lib-containers-storage-overlay-946bfb4e9f6269a5f6ba4a4efb06dc094b654daba7ecf41e42ee69cbb6295f01-merged.mount: Deactivated successfully. Feb 23 04:55:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f69421ac60affbcb15cac6f8bef510fe6bfafd7ff86c46a8d39496e693b613f3-userdata-shm.mount: Deactivated successfully. 
Feb 23 04:55:58 localhost podman[311710]: 2026-02-23 09:55:58.959838536 +0000 UTC m=+0.150838072 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:55:58 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:55:58 localhost podman[311717]: 2026-02-23 09:55:58.97402786 +0000 UTC m=+0.165282253 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1770267347, build-date=2026-02-05T04:57:10Z, distribution-scope=public, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git) Feb 23 04:55:58 localhost ovn_controller[155966]: 2026-02-23T09:55:58Z|00147|binding|INFO|Removing iface tap305d2986-ac ovn-installed in OVS Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.987 161842 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 10ee843d-1ffa-4b04-9c98-f91dd0c75ab1 with type ""#033[00m Feb 23 04:55:58 localhost ovn_controller[155966]: 2026-02-23T09:55:58Z|00148|binding|INFO|Removing lport 305d2986-acd5-4eef-9b52-da6f2b513f08 ovn-installed in OVS Feb 23 04:55:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:58.988 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': 
'2001:db8::1/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-9f213d66-4180-4fdc-8523-7b43ed501993', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f213d66-4180-4fdc-8523-7b43ed501993', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c50f2ae0a2f4cdd8225f6794547909b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3898408-83bc-49d7-9482-caa3c7eb6163, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=305d2986-acd5-4eef-9b52-da6f2b513f08) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:55:58 localhost nova_compute[280321]: 2026-02-23 09:55:58.989 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:58 localhost nova_compute[280321]: 2026-02-23 09:55:58.997 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:58 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 04:55:59 localhost podman[311704]: 2026-02-23 09:55:59.087357585 +0000 UTC m=+0.287581802 container remove f69421ac60affbcb15cac6f8bef510fe6bfafd7ff86c46a8d39496e693b613f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9f213d66-4180-4fdc-8523-7b43ed501993, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:55:59 localhost nova_compute[280321]: 2026-02-23 09:55:59.102 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:59 localhost kernel: device tap305d2986-ac left promiscuous mode Feb 23 04:55:59 localhost nova_compute[280321]: 2026-02-23 09:55:59.115 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v190: 177 pgs: 177 active+clean; 192 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 4.6 KiB/s wr, 92 op/s Feb 23 04:55:59 localhost systemd[1]: run-netns-qdhcp\x2d9f213d66\x2d4180\x2d4fdc\x2d8523\x2d7b43ed501993.mount: Deactivated successfully. 
Feb 23 04:55:59 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:59.135 263679 INFO neutron.agent.dhcp.agent [None req-cee92151-7609-49e9-b035-a980d064f468 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:55:59 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:55:59.136 263679 INFO neutron.agent.dhcp.agent [None req-cee92151-7609-49e9-b035-a980d064f468 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:55:59 localhost podman[311788]: Feb 23 04:55:59 localhost podman[311788]: 2026-02-23 09:55:59.185769773 +0000 UTC m=+0.118361580 container create 6af09bc131dd3ec7999191020f4ff56e56d675e1109c8ebe0ea00fd53f0b13ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:55:59 localhost systemd[1]: Started libpod-conmon-6af09bc131dd3ec7999191020f4ff56e56d675e1109c8ebe0ea00fd53f0b13ef.scope. Feb 23 04:55:59 localhost systemd[1]: Started libcrun container. 
Feb 23 04:55:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef89a0a5be39a134383005089c288672f4796afcc02dbbe5d299744085f3be35/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:55:59 localhost podman[311788]: 2026-02-23 09:55:59.237629218 +0000 UTC m=+0.170221025 container init 6af09bc131dd3ec7999191020f4ff56e56d675e1109c8ebe0ea00fd53f0b13ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0) Feb 23 04:55:59 localhost podman[311788]: 2026-02-23 09:55:59.139695424 +0000 UTC m=+0.072287271 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 23 04:55:59 localhost podman[311788]: 2026-02-23 09:55:59.247070707 +0000 UTC m=+0.179662514 container start 6af09bc131dd3ec7999191020f4ff56e56d675e1109c8ebe0ea00fd53f0b13ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:55:59 localhost neutron-haproxy-ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c[311806]: [NOTICE] (311810) : New worker (311812) forked Feb 23 04:55:59 localhost 
neutron-haproxy-ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c[311806]: [NOTICE] (311810) : Loading success. Feb 23 04:55:59 localhost ovn_controller[155966]: 2026-02-23T09:55:59Z|00149|binding|INFO|Releasing lport 0a17f90d-6861-4307-9c4d-59966b232581 from this chassis (sb_readonly=0) Feb 23 04:55:59 localhost ovn_controller[155966]: 2026-02-23T09:55:59Z|00150|binding|INFO|Releasing lport b411e7e0-662a-45b2-a6f4-7c955d5b0de7 from this chassis (sb_readonly=0) Feb 23 04:55:59 localhost nova_compute[280321]: 2026-02-23 09:55:59.294 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:55:59 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:59.303 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 305d2986-acd5-4eef-9b52-da6f2b513f08 in datapath 9f213d66-4180-4fdc-8523-7b43ed501993 unbound from our chassis#033[00m Feb 23 04:55:59 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:59.304 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9f213d66-4180-4fdc-8523-7b43ed501993 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:55:59 localhost ovn_metadata_agent[161837]: 2026-02-23 09:55:59.305 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ce2d42-7544-40fd-897a-37727f7c5a76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:55:59 localhost nova_compute[280321]: 2026-02-23 09:55:59.407 280325 INFO nova.virt.libvirt.driver [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Sending announce-self command to QEMU monitor. 
Attempt 2 of 3#033[00m Feb 23 04:55:59 localhost ovn_controller[155966]: 2026-02-23T09:55:59Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:77:96:92 10.100.0.13 Feb 23 04:55:59 localhost ovn_controller[155966]: 2026-02-23T09:55:59Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:96:92 10.100.0.13 Feb 23 04:55:59 localhost systemd[1]: tmp-crun.wqYKKp.mount: Deactivated successfully. Feb 23 04:56:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:56:00 localhost nova_compute[280321]: 2026-02-23 09:56:00.413 280325 INFO nova.virt.libvirt.driver [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m Feb 23 04:56:00 localhost nova_compute[280321]: 2026-02-23 09:56:00.419 280325 DEBUG nova.compute.manager [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:56:00 localhost nova_compute[280321]: 2026-02-23 09:56:00.441 280325 DEBUG nova.objects.instance [None req-e1ec8450-1d64-4269-9270-68be44af74bd 309578623e654f0d8491c3493a21c2b3 5632ff1108264def864ca9b5473cb716 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m Feb 23 04:56:00 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:00.870 263679 INFO neutron.agent.linux.ip_lib [None req-d49c3366-ac9f-41a6-aacf-2522818189df - - - - - -] Device tap82d1d8ee-c4 
cannot be used as it has no MAC address#033[00m Feb 23 04:56:00 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:00.889 263679 INFO neutron.agent.linux.ip_lib [None req-a5fbc832-c4db-42eb-8590-aaf1dae0d9d6 - - - - - -] Device tapc7e823d0-1e cannot be used as it has no MAC address#033[00m Feb 23 04:56:00 localhost nova_compute[280321]: 2026-02-23 09:56:00.903 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:00 localhost kernel: device tap82d1d8ee-c4 entered promiscuous mode Feb 23 04:56:00 localhost NetworkManager[5987]: [1771840560.9113] manager: (tap82d1d8ee-c4): new Generic device (/org/freedesktop/NetworkManager/Devices/36) Feb 23 04:56:00 localhost ovn_controller[155966]: 2026-02-23T09:56:00Z|00151|binding|INFO|Claiming lport 82d1d8ee-c402-4e4b-857d-ac36f1f1e79d for this chassis. Feb 23 04:56:00 localhost ovn_controller[155966]: 2026-02-23T09:56:00Z|00152|binding|INFO|82d1d8ee-c402-4e4b-857d-ac36f1f1e79d: Claiming unknown Feb 23 04:56:00 localhost nova_compute[280321]: 2026-02-23 09:56:00.913 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:00 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:00.925 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-f53aefcd-9a58-44f1-8f89-81005c5482a8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-f53aefcd-9a58-44f1-8f89-81005c5482a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5dde32a00eb94a4cad27787f37a31b50', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d015d5b-281d-4214-975e-0e49d021e34d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=82d1d8ee-c402-4e4b-857d-ac36f1f1e79d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:00 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:00.927 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 82d1d8ee-c402-4e4b-857d-ac36f1f1e79d in datapath f53aefcd-9a58-44f1-8f89-81005c5482a8 bound to our chassis#033[00m Feb 23 04:56:00 localhost nova_compute[280321]: 2026-02-23 09:56:00.928 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:00 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:00.929 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f53aefcd-9a58-44f1-8f89-81005c5482a8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:56:00 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:00.930 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[aa438989-97ac-4316-bab2-a4920d7d1c17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:00 localhost kernel: device tapc7e823d0-1e entered promiscuous mode Feb 23 04:56:00 localhost NetworkManager[5987]: [1771840560.9604] manager: (tapc7e823d0-1e): new Generic device 
(/org/freedesktop/NetworkManager/Devices/37) Feb 23 04:56:00 localhost ovn_controller[155966]: 2026-02-23T09:56:00Z|00153|binding|INFO|Setting lport 82d1d8ee-c402-4e4b-857d-ac36f1f1e79d ovn-installed in OVS Feb 23 04:56:00 localhost ovn_controller[155966]: 2026-02-23T09:56:00Z|00154|binding|INFO|Setting lport 82d1d8ee-c402-4e4b-857d-ac36f1f1e79d up in Southbound Feb 23 04:56:00 localhost ovn_controller[155966]: 2026-02-23T09:56:00Z|00155|if_status|INFO|Not updating pb chassis for c7e823d0-1eae-42f3-95f9-6f9d3dd8de3e now as sb is readonly Feb 23 04:56:00 localhost nova_compute[280321]: 2026-02-23 09:56:00.962 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:00 localhost ovn_controller[155966]: 2026-02-23T09:56:00Z|00156|binding|INFO|Releasing lport 0a17f90d-6861-4307-9c4d-59966b232581 from this chassis (sb_readonly=0) Feb 23 04:56:00 localhost ovn_controller[155966]: 2026-02-23T09:56:00Z|00157|binding|INFO|Releasing lport b411e7e0-662a-45b2-a6f4-7c955d5b0de7 from this chassis (sb_readonly=0) Feb 23 04:56:00 localhost ovn_controller[155966]: 2026-02-23T09:56:00Z|00158|binding|INFO|Claiming lport c7e823d0-1eae-42f3-95f9-6f9d3dd8de3e for this chassis. 
Feb 23 04:56:00 localhost ovn_controller[155966]: 2026-02-23T09:56:00Z|00159|binding|INFO|c7e823d0-1eae-42f3-95f9-6f9d3dd8de3e: Claiming unknown Feb 23 04:56:00 localhost nova_compute[280321]: 2026-02-23 09:56:00.983 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:00 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:00.993 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-0c264690-f25c-4167-b007-0eb8e2043cc5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c264690-f25c-4167-b007-0eb8e2043cc5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aef0e3b7cdf44eda8494e7e5c41be9f9', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc5c7dd9-22ae-405c-8a3b-bb7a785ed78c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c7e823d0-1eae-42f3-95f9-6f9d3dd8de3e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:00 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:00.994 161842 INFO neutron.agent.ovn.metadata.agent [-] Port c7e823d0-1eae-42f3-95f9-6f9d3dd8de3e in datapath 
0c264690-f25c-4167-b007-0eb8e2043cc5 bound to our chassis#033[00m Feb 23 04:56:00 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:00.996 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0c264690-f25c-4167-b007-0eb8e2043cc5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:56:00 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:00.997 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[fa01bd07-4056-429d-8553-e43a8655c177]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:01 localhost ovn_controller[155966]: 2026-02-23T09:56:01Z|00160|binding|INFO|Setting lport c7e823d0-1eae-42f3-95f9-6f9d3dd8de3e ovn-installed in OVS Feb 23 04:56:01 localhost ovn_controller[155966]: 2026-02-23T09:56:01Z|00161|binding|INFO|Setting lport c7e823d0-1eae-42f3-95f9-6f9d3dd8de3e up in Southbound Feb 23 04:56:01 localhost nova_compute[280321]: 2026-02-23 09:56:01.017 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:01 localhost nova_compute[280321]: 2026-02-23 09:56:01.047 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:01 localhost nova_compute[280321]: 2026-02-23 09:56:01.093 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v191: 177 pgs: 177 active+clean; 202 MiB data, 885 MiB used, 41 GiB / 42 GiB avail; 296 KiB/s rd, 1.5 MiB/s wr, 113 op/s Feb 23 04:56:01 localhost nova_compute[280321]: 2026-02-23 09:56:01.243 280325 DEBUG nova.compute.manager 
[req-9b26bd2a-b0b0-415f-aae0-e1c9a4c32f45 req-e48d2dbb-c756-417a-b249-d5c3a395308a 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Received event network-vif-plugged-622abaa1-264f-4476-8fb5-acbc0c816b3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:56:01 localhost nova_compute[280321]: 2026-02-23 09:56:01.244 280325 DEBUG oslo_concurrency.lockutils [req-9b26bd2a-b0b0-415f-aae0-e1c9a4c32f45 req-e48d2dbb-c756-417a-b249-d5c3a395308a 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquiring lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:56:01 localhost nova_compute[280321]: 2026-02-23 09:56:01.245 280325 DEBUG oslo_concurrency.lockutils [req-9b26bd2a-b0b0-415f-aae0-e1c9a4c32f45 req-e48d2dbb-c756-417a-b249-d5c3a395308a 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:56:01 localhost nova_compute[280321]: 2026-02-23 09:56:01.246 280325 DEBUG oslo_concurrency.lockutils [req-9b26bd2a-b0b0-415f-aae0-e1c9a4c32f45 req-e48d2dbb-c756-417a-b249-d5c3a395308a 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:56:01 localhost nova_compute[280321]: 2026-02-23 09:56:01.246 280325 DEBUG 
nova.compute.manager [req-9b26bd2a-b0b0-415f-aae0-e1c9a4c32f45 req-e48d2dbb-c756-417a-b249-d5c3a395308a 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] No waiting events found dispatching network-vif-plugged-622abaa1-264f-4476-8fb5-acbc0c816b3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 23 04:56:01 localhost nova_compute[280321]: 2026-02-23 09:56:01.247 280325 WARNING nova.compute.manager [req-9b26bd2a-b0b0-415f-aae0-e1c9a4c32f45 req-e48d2dbb-c756-417a-b249-d5c3a395308a 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Received unexpected event network-vif-plugged-622abaa1-264f-4476-8fb5-acbc0c816b3f for instance with vm_state active and task_state None.#033[00m Feb 23 04:56:01 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 e127: 6 total, 6 up, 6 in Feb 23 04:56:02 localhost openstack_network_exporter[243519]: ERROR 09:56:02 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:56:02 localhost openstack_network_exporter[243519]: Feb 23 04:56:02 localhost openstack_network_exporter[243519]: ERROR 09:56:02 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:56:02 localhost openstack_network_exporter[243519]: Feb 23 04:56:02 localhost podman[311938]: Feb 23 04:56:02 localhost podman[311938]: 2026-02-23 09:56:02.061581781 +0000 UTC m=+0.151667517 container create dd9c567bfa44d15894d7486345e640249e7c59cc4149697a4644719e79f92ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c264690-f25c-4167-b007-0eb8e2043cc5, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 23 04:56:02 localhost podman[311954]: Feb 23 04:56:02 localhost podman[311954]: 2026-02-23 09:56:02.096646354 +0000 UTC m=+0.107045463 container create 422fc19b44cb6738686c617399664d0c01190112406acfaaa3c7acac39dc5ef3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f53aefcd-9a58-44f1-8f89-81005c5482a8, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260216) Feb 23 04:56:02 localhost systemd[1]: Started libpod-conmon-dd9c567bfa44d15894d7486345e640249e7c59cc4149697a4644719e79f92ceb.scope. Feb 23 04:56:02 localhost podman[311938]: 2026-02-23 09:56:02.017853265 +0000 UTC m=+0.107939081 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:56:02 localhost systemd[1]: tmp-crun.sU4sWd.mount: Deactivated successfully. Feb 23 04:56:02 localhost systemd[1]: Started libpod-conmon-422fc19b44cb6738686c617399664d0c01190112406acfaaa3c7acac39dc5ef3.scope. Feb 23 04:56:02 localhost systemd[1]: Started libcrun container. Feb 23 04:56:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76128798d0fcab66800dc16b733f092e9a9b66d1dd66731b55abb24037b906f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:56:02 localhost systemd[1]: Started libcrun container. 
Feb 23 04:56:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43c8c561664ee2f6afaf6d6635d6b8972130ff641b5f898a2b2bf762675dc45/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:56:02 localhost podman[311938]: 2026-02-23 09:56:02.150626523 +0000 UTC m=+0.240712259 container init dd9c567bfa44d15894d7486345e640249e7c59cc4149697a4644719e79f92ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c264690-f25c-4167-b007-0eb8e2043cc5, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Feb 23 04:56:02 localhost podman[311954]: 2026-02-23 09:56:02.156096961 +0000 UTC m=+0.166496070 container init 422fc19b44cb6738686c617399664d0c01190112406acfaaa3c7acac39dc5ef3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f53aefcd-9a58-44f1-8f89-81005c5482a8, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 04:56:02 localhost podman[311954]: 2026-02-23 09:56:02.056243398 +0000 UTC m=+0.066642567 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:56:02 localhost podman[311938]: 2026-02-23 09:56:02.158997389 +0000 UTC m=+0.249083125 container start dd9c567bfa44d15894d7486345e640249e7c59cc4149697a4644719e79f92ceb 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c264690-f25c-4167-b007-0eb8e2043cc5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:56:02 localhost dnsmasq[311980]: started, version 2.85 cachesize 150 Feb 23 04:56:02 localhost dnsmasq[311980]: DNS service limited to local subnets Feb 23 04:56:02 localhost dnsmasq[311980]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:56:02 localhost dnsmasq[311980]: warning: no upstream servers configured Feb 23 04:56:02 localhost dnsmasq-dhcp[311980]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:56:02 localhost dnsmasq[311980]: read /var/lib/neutron/dhcp/0c264690-f25c-4167-b007-0eb8e2043cc5/addn_hosts - 0 addresses Feb 23 04:56:02 localhost dnsmasq-dhcp[311980]: read /var/lib/neutron/dhcp/0c264690-f25c-4167-b007-0eb8e2043cc5/host Feb 23 04:56:02 localhost dnsmasq-dhcp[311980]: read /var/lib/neutron/dhcp/0c264690-f25c-4167-b007-0eb8e2043cc5/opts Feb 23 04:56:02 localhost podman[311954]: 2026-02-23 09:56:02.165457546 +0000 UTC m=+0.175856685 container start 422fc19b44cb6738686c617399664d0c01190112406acfaaa3c7acac39dc5ef3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f53aefcd-9a58-44f1-8f89-81005c5482a8, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Feb 23 04:56:02 localhost dnsmasq[311981]: started, version 2.85 cachesize 150 Feb 23 04:56:02 localhost dnsmasq[311981]: DNS service limited to local subnets Feb 23 04:56:02 localhost dnsmasq[311981]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:56:02 localhost dnsmasq[311981]: warning: no upstream servers configured Feb 23 04:56:02 localhost dnsmasq-dhcp[311981]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:56:02 localhost dnsmasq[311981]: read /var/lib/neutron/dhcp/f53aefcd-9a58-44f1-8f89-81005c5482a8/addn_hosts - 0 addresses Feb 23 04:56:02 localhost dnsmasq-dhcp[311981]: read /var/lib/neutron/dhcp/f53aefcd-9a58-44f1-8f89-81005c5482a8/host Feb 23 04:56:02 localhost dnsmasq-dhcp[311981]: read /var/lib/neutron/dhcp/f53aefcd-9a58-44f1-8f89-81005c5482a8/opts Feb 23 04:56:02 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:02.287 263679 INFO neutron.agent.dhcp.agent [None req-b5b319cd-1afd-4a43-913d-f81b0eba1466 - - - - - -] DHCP configuration for ports {'aa3a9e6c-d0bb-4b7a-8b51-9ded9f8c9162', 'f09d6e7c-471a-4d0d-8240-e06ffe48dc0b'} is completed#033[00m Feb 23 04:56:02 localhost nova_compute[280321]: 2026-02-23 09:56:02.305 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:02 localhost nova_compute[280321]: 2026-02-23 09:56:02.589 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v193: 177 pgs: 177 active+clean; 225 MiB data, 916 MiB used, 41 GiB / 42 GiB avail; 541 KiB/s rd, 3.2 MiB/s wr, 164 op/s Feb 23 04:56:03 localhost nova_compute[280321]: 2026-02-23 09:56:03.383 280325 DEBUG 
nova.compute.manager [req-358ca957-76a6-4b4c-8aec-ba0a6f44c766 req-7d3eee72-09b9-4cb8-8688-92877775880a 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Received event network-vif-plugged-622abaa1-264f-4476-8fb5-acbc0c816b3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:56:03 localhost nova_compute[280321]: 2026-02-23 09:56:03.385 280325 DEBUG oslo_concurrency.lockutils [req-358ca957-76a6-4b4c-8aec-ba0a6f44c766 req-7d3eee72-09b9-4cb8-8688-92877775880a 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquiring lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:56:03 localhost nova_compute[280321]: 2026-02-23 09:56:03.386 280325 DEBUG oslo_concurrency.lockutils [req-358ca957-76a6-4b4c-8aec-ba0a6f44c766 req-7d3eee72-09b9-4cb8-8688-92877775880a 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:56:03 localhost nova_compute[280321]: 2026-02-23 09:56:03.386 280325 DEBUG oslo_concurrency.lockutils [req-358ca957-76a6-4b4c-8aec-ba0a6f44c766 req-7d3eee72-09b9-4cb8-8688-92877775880a 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:56:03 localhost nova_compute[280321]: 2026-02-23 09:56:03.386 
280325 DEBUG nova.compute.manager [req-358ca957-76a6-4b4c-8aec-ba0a6f44c766 req-7d3eee72-09b9-4cb8-8688-92877775880a 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] No waiting events found dispatching network-vif-plugged-622abaa1-264f-4476-8fb5-acbc0c816b3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 23 04:56:03 localhost nova_compute[280321]: 2026-02-23 09:56:03.386 280325 WARNING nova.compute.manager [req-358ca957-76a6-4b4c-8aec-ba0a6f44c766 req-7d3eee72-09b9-4cb8-8688-92877775880a 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Received unexpected event network-vif-plugged-622abaa1-264f-4476-8fb5-acbc0c816b3f for instance with vm_state active and task_state None.#033[00m Feb 23 04:56:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:56:03 localhost systemd[1]: tmp-crun.nNglKM.mount: Deactivated successfully. 
Feb 23 04:56:03 localhost podman[311982]: 2026-02-23 09:56:03.997784749 +0000 UTC m=+0.071104204 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:56:04 localhost podman[311982]: 2026-02-23 09:56:04.04885062 +0000 UTC m=+0.122169985 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 23 04:56:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:56:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3455665244' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:56:04 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:56:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:56:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3455665244' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:56:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_09:56:05 Feb 23 04:56:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 04:56:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap Feb 23 04:56:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['manila_metadata', 'volumes', 'manila_data', 'vms', '.mgr', 'images', 'backups'] Feb 23 04:56:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes Feb 23 04:56:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:56:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:56:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v194: 177 pgs: 177 active+clean; 225 MiB data, 916 MiB used, 41 GiB / 42 GiB avail; 478 KiB/s rd, 3.1 MiB/s wr, 92 op/s Feb 23 04:56:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:56:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:56:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 04:56:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:56:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 04:56:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 04:56:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:56:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 04:56:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:56:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:56:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:56:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 23 04:56:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:56:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006573666945840313 of space, bias 1.0, pg target 1.3147333891680626 quantized to 32 (current 32) Feb 23 04:56:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:56:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:56:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:56:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8555772569444443 quantized to 32 (current 32) Feb 23 04:56:05 localhost ceph-mgr[285904]: [pg_autoscaler 
INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:56:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:56:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:56:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:56:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:56:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019465818676716918 quantized to 16 (current 16) Feb 23 04:56:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:56:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:56:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:56:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:56:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:56:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:56:06 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:06.198 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:56:05Z, description=, device_id=df0c3d32-6b50-430c-9349-60c5267f0162, device_owner=network:router_interface, 
dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=89c65d03-74bb-42f8-a297-42c37f58b063, ip_allocation=immediate, mac_address=fa:16:3e:c7:a2:31, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:55:57Z, description=, dns_domain=, id=f53aefcd-9a58-44f1-8f89-81005c5482a8, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-DeleteServersTestJSON-1080093654-network, port_security_enabled=True, project_id=5dde32a00eb94a4cad27787f37a31b50, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55483, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1235, status=ACTIVE, subnets=['7dc9d698-e415-4121-a27d-3103e25c651b'], tags=[], tenant_id=5dde32a00eb94a4cad27787f37a31b50, updated_at=2026-02-23T09:55:59Z, vlan_transparent=None, network_id=f53aefcd-9a58-44f1-8f89-81005c5482a8, port_security_enabled=False, project_id=5dde32a00eb94a4cad27787f37a31b50, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1294, status=DOWN, tags=[], tenant_id=5dde32a00eb94a4cad27787f37a31b50, updated_at=2026-02-23T09:56:05Z on network f53aefcd-9a58-44f1-8f89-81005c5482a8#033[00m Feb 23 04:56:06 localhost dnsmasq[311981]: read /var/lib/neutron/dhcp/f53aefcd-9a58-44f1-8f89-81005c5482a8/addn_hosts - 1 addresses Feb 23 04:56:06 localhost dnsmasq-dhcp[311981]: read /var/lib/neutron/dhcp/f53aefcd-9a58-44f1-8f89-81005c5482a8/host Feb 23 04:56:06 localhost dnsmasq-dhcp[311981]: read /var/lib/neutron/dhcp/f53aefcd-9a58-44f1-8f89-81005c5482a8/opts Feb 23 04:56:06 localhost podman[312025]: 2026-02-23 09:56:06.383498797 +0000 UTC m=+0.044904884 container kill 422fc19b44cb6738686c617399664d0c01190112406acfaaa3c7acac39dc5ef3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-f53aefcd-9a58-44f1-8f89-81005c5482a8, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0) Feb 23 04:56:06 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:06.701 263679 INFO neutron.agent.dhcp.agent [None req-79622dd0-e580-4951-9471-b111743800f7 - - - - - -] DHCP configuration for ports {'89c65d03-74bb-42f8-a297-42c37f58b063'} is completed#033[00m Feb 23 04:56:06 localhost systemd[1]: Stopping User Manager for UID 42436... Feb 23 04:56:06 localhost systemd[311396]: Activating special unit Exit the Session... Feb 23 04:56:06 localhost systemd[311396]: Stopped target Main User Target. Feb 23 04:56:06 localhost systemd[311396]: Stopped target Basic System. Feb 23 04:56:06 localhost systemd[311396]: Stopped target Paths. Feb 23 04:56:06 localhost systemd[311396]: Stopped target Sockets. Feb 23 04:56:06 localhost systemd[311396]: Stopped target Timers. Feb 23 04:56:06 localhost systemd[311396]: Stopped Mark boot as successful after the user session has run 2 minutes. Feb 23 04:56:06 localhost systemd[311396]: Stopped Daily Cleanup of User's Temporary Directories. Feb 23 04:56:06 localhost systemd[311396]: Closed D-Bus User Message Bus Socket. Feb 23 04:56:06 localhost systemd[311396]: Stopped Create User's Volatile Files and Directories. Feb 23 04:56:06 localhost systemd[311396]: Removed slice User Application Slice. Feb 23 04:56:06 localhost systemd[311396]: Reached target Shutdown. Feb 23 04:56:06 localhost systemd[311396]: Finished Exit the Session. Feb 23 04:56:06 localhost systemd[311396]: Reached target Exit the Session. Feb 23 04:56:06 localhost systemd[1]: user@42436.service: Deactivated successfully. 
Feb 23 04:56:06 localhost systemd[1]: Stopped User Manager for UID 42436. Feb 23 04:56:06 localhost systemd[1]: Stopping User Runtime Directory /run/user/42436... Feb 23 04:56:06 localhost systemd[1]: run-user-42436.mount: Deactivated successfully. Feb 23 04:56:06 localhost systemd[1]: user-runtime-dir@42436.service: Deactivated successfully. Feb 23 04:56:06 localhost systemd[1]: Stopped User Runtime Directory /run/user/42436. Feb 23 04:56:06 localhost systemd[1]: Removed slice User Slice of UID 42436. Feb 23 04:56:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v195: 177 pgs: 177 active+clean; 225 MiB data, 917 MiB used, 41 GiB / 42 GiB avail; 391 KiB/s rd, 2.6 MiB/s wr, 75 op/s Feb 23 04:56:07 localhost nova_compute[280321]: 2026-02-23 09:56:07.308 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:07 localhost nova_compute[280321]: 2026-02-23 09:56:07.591 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.199 280325 DEBUG oslo_concurrency.lockutils [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Acquiring lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.200 280325 DEBUG oslo_concurrency.lockutils [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s 
inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.200 280325 DEBUG oslo_concurrency.lockutils [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Acquiring lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.200 280325 DEBUG oslo_concurrency.lockutils [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.201 280325 DEBUG oslo_concurrency.lockutils [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.202 280325 INFO nova.compute.manager [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Terminating instance#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.203 280325 DEBUG nova.compute.manager [None req-cf62e510-6cc0-4816-b204-70776024dce9 
6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Feb 23 04:56:08 localhost kernel: device tap622abaa1-26 left promiscuous mode Feb 23 04:56:08 localhost NetworkManager[5987]: [1771840568.2778] device (tap622abaa1-26): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Feb 23 04:56:08 localhost ovn_controller[155966]: 2026-02-23T09:56:08Z|00162|binding|INFO|Releasing lport 622abaa1-264f-4476-8fb5-acbc0c816b3f from this chassis (sb_readonly=0) Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.352 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:08 localhost ovn_controller[155966]: 2026-02-23T09:56:08Z|00163|binding|INFO|Setting lport 622abaa1-264f-4476-8fb5-acbc0c816b3f down in Southbound Feb 23 04:56:08 localhost ovn_controller[155966]: 2026-02-23T09:56:08Z|00164|binding|INFO|Releasing lport adb72f75-7429-4a32-becc-fdcd334c7c29 from this chassis (sb_readonly=0) Feb 23 04:56:08 localhost ovn_controller[155966]: 2026-02-23T09:56:08Z|00165|binding|INFO|Setting lport adb72f75-7429-4a32-becc-fdcd334c7c29 down in Southbound Feb 23 04:56:08 localhost ovn_controller[155966]: 2026-02-23T09:56:08Z|00166|binding|INFO|Removing iface tap622abaa1-26 ovn-installed in OVS Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.355 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:08 localhost ovn_controller[155966]: 2026-02-23T09:56:08Z|00167|binding|INFO|Releasing lport 0a17f90d-6861-4307-9c4d-59966b232581 from this chassis (sb_readonly=0) Feb 23 04:56:08 localhost ovn_controller[155966]: 
2026-02-23T09:56:08Z|00168|binding|INFO|Releasing lport b411e7e0-662a-45b2-a6f4-7c955d5b0de7 from this chassis (sb_readonly=0) Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.362 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:7a:38 19.80.0.148'], port_security=['fa:16:3e:c9:7a:38 19.80.0.148'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['622abaa1-264f-4476-8fb5-acbc0c816b3f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-2088636283', 'neutron:cidrs': '19.80.0.148/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9eb5761d-94a8-4798-bed6-a9e5cf6518df', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-2088636283', 'neutron:project_id': '2ac6a6009ea84eb99f60bd242e459002', 'neutron:revision_number': '5', 'neutron:security_group_ids': '917bfa8c-752a-4a55-9acc-5ce6144207b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=10f6c850-d5b0-4b68-95e9-d2dc898e2718, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=adb72f75-7429-4a32-becc-fdcd334c7c29) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:08 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Deactivated successfully. Feb 23 04:56:08 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Consumed 3.379s CPU time. 
Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.365 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:96:92 10.100.0.13'], port_security=['fa:16:3e:77:96:92 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-967314267', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '66c5eac8-f6f4-40ae-b09f-54e200c103b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-967314267', 'neutron:project_id': '2ac6a6009ea84eb99f60bd242e459002', 'neutron:revision_number': '12', 'neutron:security_group_ids': '917bfa8c-752a-4a55-9acc-5ce6144207b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a82ecfbd-c671-4216-ac11-086490c80ba6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=622abaa1-264f-4476-8fb5-acbc0c816b3f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.367 161842 INFO neutron.agent.ovn.metadata.agent [-] Port adb72f75-7429-4a32-becc-fdcd334c7c29 in datapath 9eb5761d-94a8-4798-bed6-a9e5cf6518df unbound from our chassis#033[00m Feb 23 04:56:08 localhost systemd-machined[205673]: Machine qemu-4-instance-0000000b terminated. 
Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.373 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Port 83869ba6-ac91-4ad2-af02-0a920414fa9f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.373 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.374 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9eb5761d-94a8-4798-bed6-a9e5cf6518df, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.375 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[1579b808-80ed-4d9a-9592-311f1b80d7fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.376 161842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df namespace which is not needed anymore#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.437 280325 INFO nova.virt.libvirt.driver [-] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Instance destroyed successfully.#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.439 280325 DEBUG nova.objects.instance [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Lazy-loading 'resources' on Instance uuid 66c5eac8-f6f4-40ae-b09f-54e200c103b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m 
Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.455 280325 DEBUG nova.virt.libvirt.vif [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-23T09:55:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1016549855',display_name='tempest-LiveMigrationTest-server-1016549855',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(6),hidden=False,host='np0005626465.localdomain',hostname='tempest-livemigrationtest-server-1016549855',id=11,image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',info_cache=InstanceInfoCache,instance_type_id=6,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2026-02-23T09:55:47Z,launched_on='np0005626466.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005626465.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='2ac6a6009ea84eb99f60bd242e459002',ramdisk_id='',reservation_id='r-piujj5se',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d08f8876-d97b-493b-b16b-caf91668eecb',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_pr
oject_name='tempest-LiveMigrationTest-1138379142',owner_user_name='tempest-LiveMigrationTest-1138379142-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2026-02-23T09:56:00Z,user_data=None,user_id='6d15d44765db469a9e04a32fb56dcff2',uuid=66c5eac8-f6f4-40ae-b09f-54e200c103b8,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "622abaa1-264f-4476-8fb5-acbc0c816b3f", "address": "fa:16:3e:77:96:92", "network": {"id": "488344bb-b2b1-4b3f-933b-1a9bfdff1d5c", "bridge": "br-int", "label": "tempest-LiveMigrationTest-664573168-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "2ac6a6009ea84eb99f60bd242e459002", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap622abaa1-26", "ovs_interfaceid": "622abaa1-264f-4476-8fb5-acbc0c816b3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.456 280325 DEBUG nova.network.os_vif_util [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Converting VIF {"id": "622abaa1-264f-4476-8fb5-acbc0c816b3f", "address": "fa:16:3e:77:96:92", "network": {"id": "488344bb-b2b1-4b3f-933b-1a9bfdff1d5c", "bridge": "br-int", "label": 
"tempest-LiveMigrationTest-664573168-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "2ac6a6009ea84eb99f60bd242e459002", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap622abaa1-26", "ovs_interfaceid": "622abaa1-264f-4476-8fb5-acbc0c816b3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.457 280325 DEBUG nova.network.os_vif_util [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:96:92,bridge_name='br-int',has_traffic_filtering=True,id=622abaa1-264f-4476-8fb5-acbc0c816b3f,network=Network(488344bb-b2b1-4b3f-933b-1a9bfdff1d5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap622abaa1-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.458 280325 DEBUG os_vif [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Unplugging vif 
VIFOpenVSwitch(active=False,address=fa:16:3e:77:96:92,bridge_name='br-int',has_traffic_filtering=True,id=622abaa1-264f-4476-8fb5-acbc0c816b3f,network=Network(488344bb-b2b1-4b3f-933b-1a9bfdff1d5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap622abaa1-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.460 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.461 280325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap622abaa1-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.464 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.467 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.471 280325 INFO os_vif [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:96:92,bridge_name='br-int',has_traffic_filtering=True,id=622abaa1-264f-4476-8fb5-acbc0c816b3f,network=Network(488344bb-b2b1-4b3f-933b-1a9bfdff1d5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap622abaa1-26')#033[00m Feb 23 04:56:08 localhost systemd[1]: tmp-crun.oAF7Cw.mount: Deactivated successfully. 
Feb 23 04:56:08 localhost neutron-haproxy-ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df[311637]: [NOTICE] (311641) : haproxy version is 2.8.14-c23fe91 Feb 23 04:56:08 localhost neutron-haproxy-ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df[311637]: [NOTICE] (311641) : path to executable is /usr/sbin/haproxy Feb 23 04:56:08 localhost neutron-haproxy-ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df[311637]: [WARNING] (311641) : Exiting Master process... Feb 23 04:56:08 localhost neutron-haproxy-ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df[311637]: [ALERT] (311641) : Current worker (311643) exited with code 143 (Terminated) Feb 23 04:56:08 localhost neutron-haproxy-ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df[311637]: [WARNING] (311641) : All workers exited. Exiting... (0) Feb 23 04:56:08 localhost systemd[1]: libpod-b7441fbc814b3da4274fb177248d6001ad45587555d7a35709abcaf4eed6f1bc.scope: Deactivated successfully. Feb 23 04:56:08 localhost podman[312093]: 2026-02-23 09:56:08.576279467 +0000 UTC m=+0.073189029 container died b7441fbc814b3da4274fb177248d6001ad45587555d7a35709abcaf4eed6f1bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:56:08 localhost podman[312093]: 2026-02-23 09:56:08.609913355 +0000 UTC m=+0.106822867 container cleanup b7441fbc814b3da4274fb177248d6001ad45587555d7a35709abcaf4eed6f1bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 04:56:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:08.640 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:56:05Z, description=, device_id=df0c3d32-6b50-430c-9349-60c5267f0162, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=89c65d03-74bb-42f8-a297-42c37f58b063, ip_allocation=immediate, mac_address=fa:16:3e:c7:a2:31, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:55:57Z, description=, dns_domain=, id=f53aefcd-9a58-44f1-8f89-81005c5482a8, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-DeleteServersTestJSON-1080093654-network, port_security_enabled=True, project_id=5dde32a00eb94a4cad27787f37a31b50, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55483, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1235, status=ACTIVE, subnets=['7dc9d698-e415-4121-a27d-3103e25c651b'], tags=[], tenant_id=5dde32a00eb94a4cad27787f37a31b50, updated_at=2026-02-23T09:55:59Z, vlan_transparent=None, network_id=f53aefcd-9a58-44f1-8f89-81005c5482a8, port_security_enabled=False, project_id=5dde32a00eb94a4cad27787f37a31b50, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1294, status=DOWN, tags=[], tenant_id=5dde32a00eb94a4cad27787f37a31b50, updated_at=2026-02-23T09:56:05Z 
on network f53aefcd-9a58-44f1-8f89-81005c5482a8#033[00m Feb 23 04:56:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:08.646 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:56:08Z, description=, device_id=247de523-cb1c-4082-bf55-2e2cb857d62a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1ba94e57-91f5-4032-a09e-85c05cfca2bb, ip_allocation=immediate, mac_address=fa:16:3e:ed:bf:4d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:55:57Z, description=, dns_domain=, id=0c264690-f25c-4167-b007-0eb8e2043cc5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestJSON-385573214-network, port_security_enabled=True, project_id=aef0e3b7cdf44eda8494e7e5c41be9f9, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=39985, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1234, status=ACTIVE, subnets=['adadcfc6-f205-4f8a-a84f-276f71dafa67'], tags=[], tenant_id=aef0e3b7cdf44eda8494e7e5c41be9f9, updated_at=2026-02-23T09:55:59Z, vlan_transparent=None, network_id=0c264690-f25c-4167-b007-0eb8e2043cc5, port_security_enabled=False, project_id=aef0e3b7cdf44eda8494e7e5c41be9f9, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1305, status=DOWN, tags=[], tenant_id=aef0e3b7cdf44eda8494e7e5c41be9f9, updated_at=2026-02-23T09:56:08Z on network 0c264690-f25c-4167-b007-0eb8e2043cc5#033[00m Feb 23 04:56:08 localhost podman[312110]: 2026-02-23 09:56:08.673005483 +0000 UTC m=+0.086184675 container cleanup 
b7441fbc814b3da4274fb177248d6001ad45587555d7a35709abcaf4eed6f1bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 04:56:08 localhost systemd[1]: libpod-conmon-b7441fbc814b3da4274fb177248d6001ad45587555d7a35709abcaf4eed6f1bc.scope: Deactivated successfully. Feb 23 04:56:08 localhost podman[312123]: 2026-02-23 09:56:08.765796329 +0000 UTC m=+0.138163184 container remove b7441fbc814b3da4274fb177248d6001ad45587555d7a35709abcaf4eed6f1bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.769 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd40b35-3864-43e1-b833-af063b0d0015]: (4, ('Mon Feb 23 09:56:08 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df (b7441fbc814b3da4274fb177248d6001ad45587555d7a35709abcaf4eed6f1bc)\nb7441fbc814b3da4274fb177248d6001ad45587555d7a35709abcaf4eed6f1bc\nMon Feb 23 09:56:08 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df 
(b7441fbc814b3da4274fb177248d6001ad45587555d7a35709abcaf4eed6f1bc)\nb7441fbc814b3da4274fb177248d6001ad45587555d7a35709abcaf4eed6f1bc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.771 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[9d1f07df-1c21-467e-b5c0-3c0172c95537]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.772 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9eb5761d-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.774 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:08 localhost kernel: device tap9eb5761d-90 left promiscuous mode Feb 23 04:56:08 localhost nova_compute[280321]: 2026-02-23 09:56:08.780 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.785 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[38da08b8-c870-4413-b5ce-33f3a4347c5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.801 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[00f3d5a2-bdcf-4d5b-9f20-21c219acfe45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.803 306186 DEBUG oslo.privsep.daemon [-] privsep: 
reply[1b3a7f97-8d92-4cc2-84e9-b413830ec1cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.815 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[606008be-c19e-4be9-b2f5-87c9e91f39d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 
'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1198861, 'reachable_time': 33308, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 
'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312175, 'error': None, 'target': 'ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.820 161946 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9eb5761d-94a8-4798-bed6-a9e5cf6518df deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.820 161946 DEBUG oslo.privsep.daemon [-] privsep: reply[fc14a85e-a4a5-4635-8758-8db529e1c644]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.821 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 622abaa1-264f-4476-8fb5-acbc0c816b3f in datapath 488344bb-b2b1-4b3f-933b-1a9bfdff1d5c unbound from our chassis#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.824 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Port 356a7bb3-8843-4d5f-9f1c-bae2fad82cfb IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.825 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.825 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[08b3c956-fd3c-4d03-85b7-e65240cf4303]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:08.826 161842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c namespace which is not needed anymore#033[00m Feb 23 04:56:08 localhost dnsmasq[311981]: read /var/lib/neutron/dhcp/f53aefcd-9a58-44f1-8f89-81005c5482a8/addn_hosts - 1 addresses Feb 23 04:56:08 localhost podman[312174]: 2026-02-23 09:56:08.866806728 +0000 UTC 
m=+0.053349163 container kill 422fc19b44cb6738686c617399664d0c01190112406acfaaa3c7acac39dc5ef3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f53aefcd-9a58-44f1-8f89-81005c5482a8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:56:08 localhost dnsmasq-dhcp[311981]: read /var/lib/neutron/dhcp/f53aefcd-9a58-44f1-8f89-81005c5482a8/host Feb 23 04:56:08 localhost dnsmasq-dhcp[311981]: read /var/lib/neutron/dhcp/f53aefcd-9a58-44f1-8f89-81005c5482a8/opts Feb 23 04:56:08 localhost dnsmasq[311980]: read /var/lib/neutron/dhcp/0c264690-f25c-4167-b007-0eb8e2043cc5/addn_hosts - 1 addresses Feb 23 04:56:08 localhost dnsmasq-dhcp[311980]: read /var/lib/neutron/dhcp/0c264690-f25c-4167-b007-0eb8e2043cc5/host Feb 23 04:56:08 localhost dnsmasq-dhcp[311980]: read /var/lib/neutron/dhcp/0c264690-f25c-4167-b007-0eb8e2043cc5/opts Feb 23 04:56:08 localhost podman[312188]: 2026-02-23 09:56:08.889052707 +0000 UTC m=+0.046541623 container kill dd9c567bfa44d15894d7486345e640249e7c59cc4149697a4644719e79f92ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c264690-f25c-4167-b007-0eb8e2043cc5, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:56:09 localhost neutron-haproxy-ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c[311806]: [NOTICE] (311810) : haproxy version is 2.8.14-c23fe91 Feb 23 
04:56:09 localhost neutron-haproxy-ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c[311806]: [NOTICE] (311810) : path to executable is /usr/sbin/haproxy Feb 23 04:56:09 localhost neutron-haproxy-ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c[311806]: [WARNING] (311810) : Exiting Master process... Feb 23 04:56:09 localhost neutron-haproxy-ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c[311806]: [ALERT] (311810) : Current worker (311812) exited with code 143 (Terminated) Feb 23 04:56:09 localhost neutron-haproxy-ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c[311806]: [WARNING] (311810) : All workers exited. Exiting... (0) Feb 23 04:56:09 localhost systemd[1]: libpod-6af09bc131dd3ec7999191020f4ff56e56d675e1109c8ebe0ea00fd53f0b13ef.scope: Deactivated successfully. Feb 23 04:56:09 localhost podman[312230]: 2026-02-23 09:56:09.031418139 +0000 UTC m=+0.059242802 container died 6af09bc131dd3ec7999191020f4ff56e56d675e1109c8ebe0ea00fd53f0b13ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:56:09 localhost podman[312230]: 2026-02-23 09:56:09.064730747 +0000 UTC m=+0.092555410 container cleanup 6af09bc131dd3ec7999191020f4ff56e56d675e1109c8ebe0ea00fd53f0b13ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 23 04:56:09 localhost podman[312250]: 2026-02-23 09:56:09.089588077 +0000 UTC m=+0.054508726 container cleanup 6af09bc131dd3ec7999191020f4ff56e56d675e1109c8ebe0ea00fd53f0b13ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:56:09 localhost systemd[1]: libpod-conmon-6af09bc131dd3ec7999191020f4ff56e56d675e1109c8ebe0ea00fd53f0b13ef.scope: Deactivated successfully. Feb 23 04:56:09 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:09.109 263679 INFO neutron.agent.dhcp.agent [None req-6e36ad9b-f5f5-4755-a703-bd1f9135da7e - - - - - -] DHCP configuration for ports {'89c65d03-74bb-42f8-a297-42c37f58b063', '1ba94e57-91f5-4032-a09e-85c05cfca2bb'} is completed#033[00m Feb 23 04:56:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v196: 177 pgs: 177 active+clean; 225 MiB data, 917 MiB used, 41 GiB / 42 GiB avail; 391 KiB/s rd, 2.6 MiB/s wr, 75 op/s Feb 23 04:56:09 localhost podman[312266]: 2026-02-23 09:56:09.13382553 +0000 UTC m=+0.054193658 container remove 6af09bc131dd3ec7999191020f4ff56e56d675e1109c8ebe0ea00fd53f0b13ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, 
org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:56:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:09.137 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[f1134db9-d574-49f8-9fbe-d3951caa4f1d]: (4, ('Mon Feb 23 09:56:08 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c (6af09bc131dd3ec7999191020f4ff56e56d675e1109c8ebe0ea00fd53f0b13ef)\n6af09bc131dd3ec7999191020f4ff56e56d675e1109c8ebe0ea00fd53f0b13ef\nMon Feb 23 09:56:09 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c (6af09bc131dd3ec7999191020f4ff56e56d675e1109c8ebe0ea00fd53f0b13ef)\n6af09bc131dd3ec7999191020f4ff56e56d675e1109c8ebe0ea00fd53f0b13ef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:09.138 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[64eeb96e-d1a6-45a2-a589-4258dc7d176a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:09.139 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap488344bb-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:56:09 localhost kernel: device tap488344bb-b0 left promiscuous mode Feb 23 04:56:09 localhost nova_compute[280321]: 2026-02-23 09:56:09.152 280325 INFO nova.virt.libvirt.driver [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Deleting instance files /var/lib/nova/instances/66c5eac8-f6f4-40ae-b09f-54e200c103b8_del#033[00m Feb 23 04:56:09 localhost ovn_metadata_agent[161837]: 
2026-02-23 09:56:09.151 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff7e29f-8c57-4d15-a839-5f02f8a2b823]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:09 localhost nova_compute[280321]: 2026-02-23 09:56:09.153 280325 INFO nova.virt.libvirt.driver [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Deletion of /var/lib/nova/instances/66c5eac8-f6f4-40ae-b09f-54e200c103b8_del complete#033[00m Feb 23 04:56:09 localhost nova_compute[280321]: 2026-02-23 09:56:09.158 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:09.167 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[0d835b52-4b91-4372-98c8-559ac629f20f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:09.169 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[c49f9037-466d-4edf-b4f8-b60ad34b8ce8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:09.181 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[631d0ed9-205d-4fa2-ad8e-ed6351da71de]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], 
['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': 
[['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1198950, 'reachable_time': 15971, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312283, 'error': None, 'target': 'ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:09.183 161946 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Feb 23 04:56:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:09.183 161946 DEBUG oslo.privsep.daemon [-] privsep: reply[71b07c6a-7d0e-4cb8-bb48-e2e8b2b4adc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:09 localhost nova_compute[280321]: 2026-02-23 09:56:09.207 280325 INFO nova.compute.manager [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m Feb 23 04:56:09 localhost nova_compute[280321]: 2026-02-23 09:56:09.208 280325 DEBUG oslo.service.loopingcall [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. 
func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Feb 23 04:56:09 localhost nova_compute[280321]: 2026-02-23 09:56:09.208 280325 DEBUG nova.compute.manager [-] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Feb 23 04:56:09 localhost nova_compute[280321]: 2026-02-23 09:56:09.208 280325 DEBUG nova.network.neutron [-] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Feb 23 04:56:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:09.557 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:09 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:09.558 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 04:56:09 localhost systemd[1]: var-lib-containers-storage-overlay-ef89a0a5be39a134383005089c288672f4796afcc02dbbe5d299744085f3be35-merged.mount: Deactivated successfully. Feb 23 04:56:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6af09bc131dd3ec7999191020f4ff56e56d675e1109c8ebe0ea00fd53f0b13ef-userdata-shm.mount: Deactivated successfully. 
Feb 23 04:56:09 localhost systemd[1]: run-netns-ovnmeta\x2d488344bb\x2db2b1\x2d4b3f\x2d933b\x2d1a9bfdff1d5c.mount: Deactivated successfully. Feb 23 04:56:09 localhost systemd[1]: var-lib-containers-storage-overlay-59217ff29d8a8785d9ddf2ca400e1f0d79f1ada9c19a095456e7d12faf183037-merged.mount: Deactivated successfully. Feb 23 04:56:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b7441fbc814b3da4274fb177248d6001ad45587555d7a35709abcaf4eed6f1bc-userdata-shm.mount: Deactivated successfully. Feb 23 04:56:09 localhost systemd[1]: run-netns-ovnmeta\x2d9eb5761d\x2d94a8\x2d4798\x2dbed6\x2da9e5cf6518df.mount: Deactivated successfully. Feb 23 04:56:09 localhost nova_compute[280321]: 2026-02-23 09:56:09.586 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:56:10 localhost nova_compute[280321]: 2026-02-23 09:56:10.895 280325 DEBUG nova.compute.manager [req-51668220-1635-4dd9-9d0f-bcfd04d618af req-5c696f5e-6b5f-4413-b762-f107643cdc87 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Received event network-vif-unplugged-622abaa1-264f-4476-8fb5-acbc0c816b3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:56:10 localhost nova_compute[280321]: 2026-02-23 09:56:10.895 280325 DEBUG oslo_concurrency.lockutils [req-51668220-1635-4dd9-9d0f-bcfd04d618af req-5c696f5e-6b5f-4413-b762-f107643cdc87 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquiring lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:56:10 localhost nova_compute[280321]: 2026-02-23 09:56:10.896 280325 DEBUG oslo_concurrency.lockutils [req-51668220-1635-4dd9-9d0f-bcfd04d618af req-5c696f5e-6b5f-4413-b762-f107643cdc87 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:56:10 localhost nova_compute[280321]: 2026-02-23 09:56:10.896 280325 DEBUG oslo_concurrency.lockutils [req-51668220-1635-4dd9-9d0f-bcfd04d618af req-5c696f5e-6b5f-4413-b762-f107643cdc87 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:56:10 localhost nova_compute[280321]: 2026-02-23 09:56:10.897 280325 DEBUG nova.compute.manager [req-51668220-1635-4dd9-9d0f-bcfd04d618af req-5c696f5e-6b5f-4413-b762-f107643cdc87 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] No waiting events found dispatching network-vif-unplugged-622abaa1-264f-4476-8fb5-acbc0c816b3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 23 04:56:10 localhost nova_compute[280321]: 2026-02-23 09:56:10.897 280325 DEBUG nova.compute.manager [req-51668220-1635-4dd9-9d0f-bcfd04d618af req-5c696f5e-6b5f-4413-b762-f107643cdc87 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Received event 
network-vif-unplugged-622abaa1-264f-4476-8fb5-acbc0c816b3f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Feb 23 04:56:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 190 MiB data, 839 MiB used, 41 GiB / 42 GiB avail; 206 KiB/s rd, 1.3 MiB/s wr, 54 op/s Feb 23 04:56:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:56:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:56:11 localhost podman[312284]: 2026-02-23 09:56:11.998421395 +0000 UTC m=+0.074566320 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent) Feb 23 04:56:12 localhost podman[312284]: 2026-02-23 09:56:12.031652401 +0000 UTC m=+0.107797316 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216) Feb 23 04:56:12 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:56:12 localhost podman[312285]: 2026-02-23 09:56:12.125717217 +0000 UTC m=+0.194551229 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 
'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 04:56:12 localhost podman[312285]: 2026-02-23 09:56:12.164905425 +0000 UTC m=+0.233739447 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 
'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2) Feb 23 04:56:12 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:56:12 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:12.463 263679 INFO neutron.agent.linux.ip_lib [None req-fa1f8f23-e647-4d9b-853c-f53a26f164e5 - - - - - -] Device tap7d543d78-81 cannot be used as it has no MAC address#033[00m Feb 23 04:56:12 localhost nova_compute[280321]: 2026-02-23 09:56:12.486 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:12 localhost kernel: device tap7d543d78-81 entered promiscuous mode Feb 23 04:56:12 localhost NetworkManager[5987]: [1771840572.4920] manager: (tap7d543d78-81): new Generic device (/org/freedesktop/NetworkManager/Devices/38) Feb 23 04:56:12 localhost ovn_controller[155966]: 2026-02-23T09:56:12Z|00169|binding|INFO|Claiming lport 7d543d78-81f8-4a53-a1b1-65eb280252d1 for this chassis. 
Feb 23 04:56:12 localhost ovn_controller[155966]: 2026-02-23T09:56:12Z|00170|binding|INFO|7d543d78-81f8-4a53-a1b1-65eb280252d1: Claiming unknown Feb 23 04:56:12 localhost nova_compute[280321]: 2026-02-23 09:56:12.494 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:12 localhost systemd-udevd[312331]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:56:12 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:12.503 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-c192d327-fd98-4512-92b0-8fe3acdfd05b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c192d327-fd98-4512-92b0-8fe3acdfd05b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2f9492758b148768734fafb039e58db', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4522ddbd-f343-4eb2-9faa-450e0e2e5c4b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7d543d78-81f8-4a53-a1b1-65eb280252d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:12 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:12.504 161842 INFO 
neutron.agent.ovn.metadata.agent [-] Port 7d543d78-81f8-4a53-a1b1-65eb280252d1 in datapath c192d327-fd98-4512-92b0-8fe3acdfd05b bound to our chassis#033[00m Feb 23 04:56:12 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:12.505 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c192d327-fd98-4512-92b0-8fe3acdfd05b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:56:12 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:12.506 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a73261-bb3a-4683-9aef-580ac79fe7ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:12 localhost journal[229268]: ethtool ioctl error on tap7d543d78-81: No such device Feb 23 04:56:12 localhost journal[229268]: ethtool ioctl error on tap7d543d78-81: No such device Feb 23 04:56:12 localhost ovn_controller[155966]: 2026-02-23T09:56:12Z|00171|binding|INFO|Setting lport 7d543d78-81f8-4a53-a1b1-65eb280252d1 ovn-installed in OVS Feb 23 04:56:12 localhost ovn_controller[155966]: 2026-02-23T09:56:12Z|00172|binding|INFO|Setting lport 7d543d78-81f8-4a53-a1b1-65eb280252d1 up in Southbound Feb 23 04:56:12 localhost nova_compute[280321]: 2026-02-23 09:56:12.530 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:12 localhost nova_compute[280321]: 2026-02-23 09:56:12.531 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:12 localhost journal[229268]: ethtool ioctl error on tap7d543d78-81: No such device Feb 23 04:56:12 localhost journal[229268]: ethtool ioctl error on tap7d543d78-81: No such device Feb 23 04:56:12 localhost journal[229268]: ethtool ioctl error on 
tap7d543d78-81: No such device Feb 23 04:56:12 localhost journal[229268]: ethtool ioctl error on tap7d543d78-81: No such device Feb 23 04:56:12 localhost journal[229268]: ethtool ioctl error on tap7d543d78-81: No such device Feb 23 04:56:12 localhost journal[229268]: ethtool ioctl error on tap7d543d78-81: No such device Feb 23 04:56:12 localhost nova_compute[280321]: 2026-02-23 09:56:12.564 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:12 localhost nova_compute[280321]: 2026-02-23 09:56:12.588 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:12 localhost nova_compute[280321]: 2026-02-23 09:56:12.592 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:12 localhost podman[241086]: time="2026-02-23T09:56:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:56:12 localhost podman[241086]: @ - - [23/Feb/2026:09:56:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 163180 "" "Go-http-client/1.1" Feb 23 04:56:12 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:12.733 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:56:08Z, description=, device_id=247de523-cb1c-4082-bf55-2e2cb857d62a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1ba94e57-91f5-4032-a09e-85c05cfca2bb, ip_allocation=immediate, mac_address=fa:16:3e:ed:bf:4d, name=, network=admin_state_up=True, 
availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:55:57Z, description=, dns_domain=, id=0c264690-f25c-4167-b007-0eb8e2043cc5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestJSON-385573214-network, port_security_enabled=True, project_id=aef0e3b7cdf44eda8494e7e5c41be9f9, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=39985, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1234, status=ACTIVE, subnets=['adadcfc6-f205-4f8a-a84f-276f71dafa67'], tags=[], tenant_id=aef0e3b7cdf44eda8494e7e5c41be9f9, updated_at=2026-02-23T09:55:59Z, vlan_transparent=None, network_id=0c264690-f25c-4167-b007-0eb8e2043cc5, port_security_enabled=False, project_id=aef0e3b7cdf44eda8494e7e5c41be9f9, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1305, status=DOWN, tags=[], tenant_id=aef0e3b7cdf44eda8494e7e5c41be9f9, updated_at=2026-02-23T09:56:08Z on network 0c264690-f25c-4167-b007-0eb8e2043cc5#033[00m Feb 23 04:56:12 localhost podman[241086]: @ - - [23/Feb/2026:09:56:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20188 "" "Go-http-client/1.1" Feb 23 04:56:12 localhost nova_compute[280321]: 2026-02-23 09:56:12.762 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:12 localhost dnsmasq[311980]: read /var/lib/neutron/dhcp/0c264690-f25c-4167-b007-0eb8e2043cc5/addn_hosts - 1 addresses Feb 23 04:56:12 localhost dnsmasq-dhcp[311980]: read /var/lib/neutron/dhcp/0c264690-f25c-4167-b007-0eb8e2043cc5/host Feb 23 04:56:12 localhost podman[312389]: 2026-02-23 09:56:12.908804294 +0000 UTC m=+0.044526212 container kill dd9c567bfa44d15894d7486345e640249e7c59cc4149697a4644719e79f92ceb 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c264690-f25c-4167-b007-0eb8e2043cc5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 23 04:56:12 localhost dnsmasq-dhcp[311980]: read /var/lib/neutron/dhcp/0c264690-f25c-4167-b007-0eb8e2043cc5/opts Feb 23 04:56:12 localhost nova_compute[280321]: 2026-02-23 09:56:12.961 280325 DEBUG nova.compute.manager [req-8fe18a09-1239-41ff-9f81-fad7e3245665 req-a9b30932-b4b4-4b3c-9fa7-f92c0a68e512 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Received event network-vif-plugged-622abaa1-264f-4476-8fb5-acbc0c816b3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 23 04:56:12 localhost nova_compute[280321]: 2026-02-23 09:56:12.961 280325 DEBUG oslo_concurrency.lockutils [req-8fe18a09-1239-41ff-9f81-fad7e3245665 req-a9b30932-b4b4-4b3c-9fa7-f92c0a68e512 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Acquiring lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:56:12 localhost nova_compute[280321]: 2026-02-23 09:56:12.962 280325 DEBUG oslo_concurrency.lockutils [req-8fe18a09-1239-41ff-9f81-fad7e3245665 req-a9b30932-b4b4-4b3c-9fa7-f92c0a68e512 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8-events" acquired by 
"nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:56:12 localhost nova_compute[280321]: 2026-02-23 09:56:12.962 280325 DEBUG oslo_concurrency.lockutils [req-8fe18a09-1239-41ff-9f81-fad7e3245665 req-a9b30932-b4b4-4b3c-9fa7-f92c0a68e512 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] Lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:56:12 localhost nova_compute[280321]: 2026-02-23 09:56:12.963 280325 DEBUG nova.compute.manager [req-8fe18a09-1239-41ff-9f81-fad7e3245665 req-a9b30932-b4b4-4b3c-9fa7-f92c0a68e512 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] No waiting events found dispatching network-vif-plugged-622abaa1-264f-4476-8fb5-acbc0c816b3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 23 04:56:12 localhost nova_compute[280321]: 2026-02-23 09:56:12.963 280325 WARNING nova.compute.manager [req-8fe18a09-1239-41ff-9f81-fad7e3245665 req-a9b30932-b4b4-4b3c-9fa7-f92c0a68e512 422ccb105b4e4d80bb6030ca202e94d2 d351b5d019cd497ab1d84160f10b653c - - default default] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Received unexpected event network-vif-plugged-622abaa1-264f-4476-8fb5-acbc0c816b3f for instance with vm_state active and task_state deleting.#033[00m Feb 23 04:56:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v198: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 30 op/s Feb 23 04:56:13 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:13.161 263679 INFO neutron.agent.dhcp.agent [None 
req-d5f880ac-18fb-4ad6-b571-3632b963abb5 - - - - - -] DHCP configuration for ports {'1ba94e57-91f5-4032-a09e-85c05cfca2bb'} is completed#033[00m Feb 23 04:56:13 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:13.429 2 INFO neutron.agent.securitygroups_rpc [None req-d12b9e97-0a30-4af0-bab2-d9a3d950dae1 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:56:13 localhost nova_compute[280321]: 2026-02-23 09:56:13.464 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:13 localhost podman[312440]: Feb 23 04:56:13 localhost podman[312440]: 2026-02-23 09:56:13.475380294 +0000 UTC m=+0.112550632 container create f1aad0d5db3105f96c351aa497e7d6c282bba5fb0c8ff640e84154316ed07aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c192d327-fd98-4512-92b0-8fe3acdfd05b, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0) Feb 23 04:56:13 localhost systemd[1]: Started libpod-conmon-f1aad0d5db3105f96c351aa497e7d6c282bba5fb0c8ff640e84154316ed07aa2.scope. Feb 23 04:56:13 localhost systemd[1]: Started libcrun container. 
Feb 23 04:56:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823dfc6087fe7122fa654e4b50271e41d11a97e11824798d8a6c41b3221df019/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:56:13 localhost podman[312440]: 2026-02-23 09:56:13.539617158 +0000 UTC m=+0.176787506 container init f1aad0d5db3105f96c351aa497e7d6c282bba5fb0c8ff640e84154316ed07aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c192d327-fd98-4512-92b0-8fe3acdfd05b, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:56:13 localhost podman[312440]: 2026-02-23 09:56:13.442101137 +0000 UTC m=+0.079271465 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:56:13 localhost podman[312440]: 2026-02-23 09:56:13.545935931 +0000 UTC m=+0.183106279 container start f1aad0d5db3105f96c351aa497e7d6c282bba5fb0c8ff640e84154316ed07aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c192d327-fd98-4512-92b0-8fe3acdfd05b, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:56:13 localhost dnsmasq[312459]: started, version 2.85 cachesize 150 Feb 23 04:56:13 localhost dnsmasq[312459]: DNS service limited to local subnets Feb 23 04:56:13 localhost dnsmasq[312459]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:56:13 localhost dnsmasq[312459]: warning: no upstream servers configured Feb 23 04:56:13 localhost dnsmasq-dhcp[312459]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:56:13 localhost dnsmasq[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/addn_hosts - 0 addresses Feb 23 04:56:13 localhost dnsmasq-dhcp[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/host Feb 23 04:56:13 localhost dnsmasq-dhcp[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/opts Feb 23 04:56:13 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:13.719 263679 INFO neutron.agent.dhcp.agent [None req-e205d84d-81b3-49f7-8ece-2917e4383266 - - - - - -] DHCP configuration for ports {'a245fefb-a5eb-4daf-86e7-50caae980c2c'} is completed#033[00m Feb 23 04:56:14 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:14.159 2 INFO neutron.agent.securitygroups_rpc [None req-2b12f6d0-e3fd-4fa1-a330-93a66177eb38 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:56:14 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:14.326 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:55:28Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=622abaa1-264f-4476-8fb5-acbc0c816b3f, ip_allocation=immediate, mac_address=fa:16:3e:77:96:92, name=tempest-parent-967314267, network_id=488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, port_security_enabled=True, project_id=2ac6a6009ea84eb99f60bd242e459002, qos_network_policy_id=None, 
qos_policy_id=None, resource_request=None, revision_number=15, security_groups=['917bfa8c-752a-4a55-9acc-5ce6144207b4'], standard_attr_id=1024, status=DOWN, tags=[], tenant_id=2ac6a6009ea84eb99f60bd242e459002, trunk_details=sub_ports=[], trunk_id=ed92871f-12d0-4d3c-b546-02c80c8c3d6e, updated_at=2026-02-23T09:56:12Z on network 488344bb-b2b1-4b3f-933b-1a9bfdff1d5c#033[00m Feb 23 04:56:14 localhost dnsmasq[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/addn_hosts - 2 addresses Feb 23 04:56:14 localhost podman[312477]: 2026-02-23 09:56:14.531370424 +0000 UTC m=+0.056110456 container kill d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:56:14 localhost dnsmasq-dhcp[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/host Feb 23 04:56:14 localhost dnsmasq-dhcp[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/opts Feb 23 04:56:14 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:14.690 2 INFO neutron.agent.securitygroups_rpc [None req-313a7d3e-1b0f-4380-96f5-be20bc42956f 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:56:14 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:14.743 263679 INFO neutron.agent.dhcp.agent [None req-159174de-b6e1-45e4-909c-2df0e82ef7ad - - - - - -] DHCP configuration for ports {'622abaa1-264f-4476-8fb5-acbc0c816b3f'} is completed#033[00m 
Feb 23 04:56:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v199: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s Feb 23 04:56:15 localhost nova_compute[280321]: 2026-02-23 09:56:15.356 280325 DEBUG nova.network.neutron [-] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 23 04:56:15 localhost nova_compute[280321]: 2026-02-23 09:56:15.375 280325 INFO nova.compute.manager [-] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Took 6.17 seconds to deallocate network for instance.#033[00m Feb 23 04:56:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:56:15 localhost nova_compute[280321]: 2026-02-23 09:56:15.412 280325 DEBUG oslo_concurrency.lockutils [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:56:15 localhost nova_compute[280321]: 2026-02-23 09:56:15.412 280325 DEBUG oslo_concurrency.lockutils [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:56:15 localhost nova_compute[280321]: 2026-02-23 09:56:15.414 280325 DEBUG oslo_concurrency.lockutils [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 
2ac6a6009ea84eb99f60bd242e459002 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:56:15 localhost nova_compute[280321]: 2026-02-23 09:56:15.464 280325 INFO nova.scheduler.client.report [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Deleted allocations for instance 66c5eac8-f6f4-40ae-b09f-54e200c103b8#033[00m Feb 23 04:56:15 localhost nova_compute[280321]: 2026-02-23 09:56:15.542 280325 DEBUG oslo_concurrency.lockutils [None req-cf62e510-6cc0-4816-b204-70776024dce9 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Lock "66c5eac8-f6f4-40ae-b09f-54e200c103b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 7.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:56:15 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:15.661 2 INFO neutron.agent.securitygroups_rpc [None req-ab6e9fb7-3784-4829-9f74-5b432c230863 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:56:16 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:16.419 2 INFO neutron.agent.securitygroups_rpc [None req-f9178f29-327a-4b87-b505-9a750a3f52d0 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:56:16 localhost dnsmasq[311981]: read /var/lib/neutron/dhcp/f53aefcd-9a58-44f1-8f89-81005c5482a8/addn_hosts - 0 addresses Feb 23 04:56:16 localhost systemd[1]: tmp-crun.sBkUMA.mount: Deactivated successfully. 
Feb 23 04:56:16 localhost dnsmasq-dhcp[311981]: read /var/lib/neutron/dhcp/f53aefcd-9a58-44f1-8f89-81005c5482a8/host Feb 23 04:56:16 localhost dnsmasq-dhcp[311981]: read /var/lib/neutron/dhcp/f53aefcd-9a58-44f1-8f89-81005c5482a8/opts Feb 23 04:56:16 localhost podman[312514]: 2026-02-23 09:56:16.72927097 +0000 UTC m=+0.064209014 container kill 422fc19b44cb6738686c617399664d0c01190112406acfaaa3c7acac39dc5ef3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f53aefcd-9a58-44f1-8f89-81005c5482a8, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:56:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 04:56:16 localhost podman[312527]: 2026-02-23 09:56:16.825706989 +0000 UTC m=+0.071926220 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:56:16 localhost podman[312527]: 2026-02-23 09:56:16.838790088 +0000 UTC m=+0.085009269 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 04:56:16 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:56:16 localhost nova_compute[280321]: 2026-02-23 09:56:16.905 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:16 localhost ovn_controller[155966]: 2026-02-23T09:56:16Z|00173|binding|INFO|Releasing lport 82d1d8ee-c402-4e4b-857d-ac36f1f1e79d from this chassis (sb_readonly=0) Feb 23 04:56:16 localhost ovn_controller[155966]: 2026-02-23T09:56:16Z|00174|binding|INFO|Setting lport 82d1d8ee-c402-4e4b-857d-ac36f1f1e79d down in Southbound Feb 23 04:56:16 localhost kernel: device tap82d1d8ee-c4 left promiscuous mode Feb 23 04:56:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:16.914 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-f53aefcd-9a58-44f1-8f89-81005c5482a8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f53aefcd-9a58-44f1-8f89-81005c5482a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5dde32a00eb94a4cad27787f37a31b50', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 
'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d015d5b-281d-4214-975e-0e49d021e34d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=82d1d8ee-c402-4e4b-857d-ac36f1f1e79d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:16.916 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 82d1d8ee-c402-4e4b-857d-ac36f1f1e79d in datapath f53aefcd-9a58-44f1-8f89-81005c5482a8 unbound from our chassis#033[00m Feb 23 04:56:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:16.920 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f53aefcd-9a58-44f1-8f89-81005c5482a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:56:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:16.921 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[d529193f-90e6-4c8d-8acc-fd32ce1a2723]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:16 localhost nova_compute[280321]: 2026-02-23 09:56:16.933 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v200: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s Feb 23 04:56:17 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:17.378 2 INFO neutron.agent.securitygroups_rpc [None req-9798c79a-b835-452b-b3e7-ba6f51410008 d6a332b0f2e446a68dc6c19280644090 2207de28dcd245d2b198a56e6161001a - - default default] Security group member updated 
['ba10f066-a353-4b6a-98b8-dab53422ee14']#033[00m Feb 23 04:56:17 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:17.482 2 INFO neutron.agent.securitygroups_rpc [None req-266d671f-bdaf-4cc0-a88f-fea21e1850b2 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:56:17 localhost nova_compute[280321]: 2026-02-23 09:56:17.596 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:17 localhost nova_compute[280321]: 2026-02-23 09:56:17.750 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:18 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:18.325 2 INFO neutron.agent.securitygroups_rpc [None req-aa9784b4-8658-4ac5-a544-4839237cb0a4 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Security group member updated ['917bfa8c-752a-4a55-9acc-5ce6144207b4']#033[00m Feb 23 04:56:18 localhost nova_compute[280321]: 2026-02-23 09:56:18.467 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:18 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:18.560 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:56:18 localhost dnsmasq[310573]: read /var/lib/neutron/dhcp/9eb5761d-94a8-4798-bed6-a9e5cf6518df/addn_hosts - 0 addresses Feb 23 04:56:18 localhost dnsmasq-dhcp[310573]: read 
/var/lib/neutron/dhcp/9eb5761d-94a8-4798-bed6-a9e5cf6518df/host Feb 23 04:56:18 localhost podman[312629]: 2026-02-23 09:56:18.562015474 +0000 UTC m=+0.043908462 container kill 9fc636b91cb89a928c025b9567829df756a44f90b3f282a31498560e4de4b1b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9eb5761d-94a8-4798-bed6-a9e5cf6518df, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 23 04:56:18 localhost dnsmasq-dhcp[310573]: read /var/lib/neutron/dhcp/9eb5761d-94a8-4798-bed6-a9e5cf6518df/opts Feb 23 04:56:18 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:18.734 2 INFO neutron.agent.securitygroups_rpc [None req-40589676-1ef1-47e5-81ec-92f9fb0c6844 d6a332b0f2e446a68dc6c19280644090 2207de28dcd245d2b198a56e6161001a - - default default] Security group member updated ['ba10f066-a353-4b6a-98b8-dab53422ee14']#033[00m Feb 23 04:56:18 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:56:18 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:56:18 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 04:56:18 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:56:18 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 04:56:18 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev 03f95d8f-1931-4be6-aa13-ffcaae1a026f (Updating node-proxy deployment (+3 -> 3)) Feb 23 04:56:18 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev 03f95d8f-1931-4be6-aa13-ffcaae1a026f (Updating node-proxy deployment (+3 -> 3)) Feb 23 04:56:18 localhost ceph-mgr[285904]: [progress INFO root] Completed event 03f95d8f-1931-4be6-aa13-ffcaae1a026f (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 23 04:56:18 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 04:56:18 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 04:56:18 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:56:18 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:56:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v201: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s Feb 23 04:56:19 localhost dnsmasq[310573]: exiting on receipt of SIGTERM Feb 23 04:56:19 localhost podman[312700]: 2026-02-23 09:56:19.41960503 +0000 UTC m=+0.071070554 container kill 9fc636b91cb89a928c025b9567829df756a44f90b3f282a31498560e4de4b1b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9eb5761d-94a8-4798-bed6-a9e5cf6518df, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 04:56:19 localhost systemd[1]: libpod-9fc636b91cb89a928c025b9567829df756a44f90b3f282a31498560e4de4b1b7.scope: Deactivated successfully. Feb 23 04:56:19 localhost podman[312714]: 2026-02-23 09:56:19.497605714 +0000 UTC m=+0.057111077 container died 9fc636b91cb89a928c025b9567829df756a44f90b3f282a31498560e4de4b1b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9eb5761d-94a8-4798-bed6-a9e5cf6518df, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 23 04:56:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9fc636b91cb89a928c025b9567829df756a44f90b3f282a31498560e4de4b1b7-userdata-shm.mount: Deactivated successfully. Feb 23 04:56:19 localhost systemd[1]: var-lib-containers-storage-overlay-8d9384683f823fde6176f594f20ad42025b4abff056b969bab3b55b515854220-merged.mount: Deactivated successfully. 
Feb 23 04:56:19 localhost podman[312714]: 2026-02-23 09:56:19.601238202 +0000 UTC m=+0.160743495 container remove 9fc636b91cb89a928c025b9567829df756a44f90b3f282a31498560e4de4b1b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9eb5761d-94a8-4798-bed6-a9e5cf6518df, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:56:19 localhost systemd[1]: libpod-conmon-9fc636b91cb89a928c025b9567829df756a44f90b3f282a31498560e4de4b1b7.scope: Deactivated successfully. Feb 23 04:56:19 localhost nova_compute[280321]: 2026-02-23 09:56:19.644 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:19 localhost kernel: device tap3f6889f9-47 left promiscuous mode Feb 23 04:56:19 localhost ovn_controller[155966]: 2026-02-23T09:56:19Z|00175|binding|INFO|Releasing lport 3f6889f9-478c-43a9-a43d-d91e4cd588c8 from this chassis (sb_readonly=0) Feb 23 04:56:19 localhost ovn_controller[155966]: 2026-02-23T09:56:19Z|00176|binding|INFO|Setting lport 3f6889f9-478c-43a9-a43d-d91e4cd588c8 down in Southbound Feb 23 04:56:19 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:19.653 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 
'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-9eb5761d-94a8-4798-bed6-a9e5cf6518df', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9eb5761d-94a8-4798-bed6-a9e5cf6518df', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ac6a6009ea84eb99f60bd242e459002', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10f6c850-d5b0-4b68-95e9-d2dc898e2718, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3f6889f9-478c-43a9-a43d-d91e4cd588c8) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:19 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:19.656 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 3f6889f9-478c-43a9-a43d-d91e4cd588c8 in datapath 9eb5761d-94a8-4798-bed6-a9e5cf6518df unbound from our chassis#033[00m Feb 23 04:56:19 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:19.660 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9eb5761d-94a8-4798-bed6-a9e5cf6518df, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:56:19 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:19.661 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[e0343698-7fbb-4a4f-aec7-16d3fa244cc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:19 localhost nova_compute[280321]: 2026-02-23 09:56:19.664 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:19 localhost nova_compute[280321]: 2026-02-23 09:56:19.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:56:20 localhost systemd[1]: run-netns-qdhcp\x2d9eb5761d\x2d94a8\x2d4798\x2dbed6\x2da9e5cf6518df.mount: Deactivated successfully. Feb 23 04:56:20 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:20.166 263679 INFO neutron.agent.dhcp.agent [None req-1e71ed0c-c54f-4ee3-8624-46d28b8df245 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:20 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:20.168 263679 INFO neutron.agent.dhcp.agent [None req-1e71ed0c-c54f-4ee3-8624-46d28b8df245 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:56:20 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 04:56:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 04:56:20 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:20.664 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:56:20Z, description=, device_id=c69c2d86-62cf-4b50-baee-7117874f99be, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], 
id=f1f868ea-4c0f-4e53-a681-5799d8314d7d, ip_allocation=immediate, mac_address=fa:16:3e:89:a3:2b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:56:08Z, description=, dns_domain=, id=c192d327-fd98-4512-92b0-8fe3acdfd05b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-262096085, port_security_enabled=True, project_id=a2f9492758b148768734fafb039e58db, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47611, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1306, status=ACTIVE, subnets=['8456be2d-f237-4c3f-ba6f-3fc45562c792'], tags=[], tenant_id=a2f9492758b148768734fafb039e58db, updated_at=2026-02-23T09:56:11Z, vlan_transparent=None, network_id=c192d327-fd98-4512-92b0-8fe3acdfd05b, port_security_enabled=False, project_id=a2f9492758b148768734fafb039e58db, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1342, status=DOWN, tags=[], tenant_id=a2f9492758b148768734fafb039e58db, updated_at=2026-02-23T09:56:20Z on network c192d327-fd98-4512-92b0-8fe3acdfd05b#033[00m Feb 23 04:56:20 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:20.711 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:20 localhost dnsmasq[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/addn_hosts - 1 addresses Feb 23 04:56:20 localhost dnsmasq-dhcp[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/host Feb 23 04:56:20 localhost dnsmasq-dhcp[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/opts Feb 23 04:56:20 localhost podman[312754]: 2026-02-23 09:56:20.868528041 +0000 UTC m=+0.064803262 container kill 
f1aad0d5db3105f96c351aa497e7d6c282bba5fb0c8ff640e84154316ed07aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c192d327-fd98-4512-92b0-8fe3acdfd05b, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:56:20 localhost nova_compute[280321]: 2026-02-23 09:56:20.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:56:20 localhost nova_compute[280321]: 2026-02-23 09:56:20.917 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:56:20 localhost nova_compute[280321]: 2026-02-23 09:56:20.918 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:56:20 localhost nova_compute[280321]: 2026-02-23 09:56:20.918 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:56:20 localhost nova_compute[280321]: 2026-02-23 09:56:20.919 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:56:20 localhost nova_compute[280321]: 2026-02-23 09:56:20.919 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:56:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v202: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s Feb 23 04:56:21 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:21.164 263679 INFO neutron.agent.dhcp.agent [None req-bc77afaa-d6f1-4874-adfe-be394a8aaec9 - - - - - -] DHCP configuration for ports {'f1f868ea-4c0f-4e53-a681-5799d8314d7d'} is completed#033[00m Feb 23 04:56:21 localhost nova_compute[280321]: 2026-02-23 09:56:21.306 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:21 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:56:21 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/4284316826' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:56:21 localhost nova_compute[280321]: 2026-02-23 09:56:21.353 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:56:21 localhost snmpd[68131]: empty variable list in _query Feb 23 04:56:21 localhost snmpd[68131]: empty variable list in _query Feb 23 04:56:21 localhost snmpd[68131]: empty variable list in _query Feb 23 04:56:21 localhost snmpd[68131]: empty variable list in _query Feb 23 04:56:21 localhost snmpd[68131]: empty variable list in _query Feb 23 04:56:21 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:56:21 localhost nova_compute[280321]: 2026-02-23 09:56:21.564 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:56:21 localhost nova_compute[280321]: 2026-02-23 09:56:21.566 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=11668MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:56:21 localhost nova_compute[280321]: 2026-02-23 09:56:21.567 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:56:21 localhost nova_compute[280321]: 2026-02-23 09:56:21.567 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:56:21 localhost nova_compute[280321]: 2026-02-23 09:56:21.623 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:56:21 localhost nova_compute[280321]: 2026-02-23 09:56:21.623 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:56:21 localhost nova_compute[280321]: 2026-02-23 09:56:21.642 280325 DEBUG 
oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:56:22 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:56:22 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1198659923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:56:22 localhost nova_compute[280321]: 2026-02-23 09:56:22.065 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:56:22 localhost nova_compute[280321]: 2026-02-23 09:56:22.072 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:56:22 localhost nova_compute[280321]: 2026-02-23 09:56:22.091 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:56:22 localhost nova_compute[280321]: 2026-02-23 09:56:22.112 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:56:22 localhost nova_compute[280321]: 2026-02-23 09:56:22.113 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:56:22 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:22.147 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:22 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:22.573 2 INFO neutron.agent.securitygroups_rpc [None req-0e752e7e-c507-42b6-b334-815c25dce29c 6d15d44765db469a9e04a32fb56dcff2 2ac6a6009ea84eb99f60bd242e459002 - - default default] Security group member updated ['917bfa8c-752a-4a55-9acc-5ce6144207b4']#033[00m Feb 23 04:56:22 localhost nova_compute[280321]: 2026-02-23 09:56:22.631 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:22 localhost dnsmasq[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/addn_hosts - 1 addresses Feb 23 04:56:22 localhost podman[312834]: 2026-02-23 09:56:22.885793766 +0000 UTC m=+0.058448978 container kill d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 04:56:22 localhost dnsmasq-dhcp[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/host Feb 23 04:56:22 localhost dnsmasq-dhcp[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/opts Feb 23 04:56:23 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:23.065 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:56:20Z, description=, device_id=c69c2d86-62cf-4b50-baee-7117874f99be, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f1f868ea-4c0f-4e53-a681-5799d8314d7d, ip_allocation=immediate, mac_address=fa:16:3e:89:a3:2b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:56:08Z, description=, dns_domain=, id=c192d327-fd98-4512-92b0-8fe3acdfd05b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-262096085, port_security_enabled=True, project_id=a2f9492758b148768734fafb039e58db, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47611, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1306, status=ACTIVE, subnets=['8456be2d-f237-4c3f-ba6f-3fc45562c792'], tags=[], tenant_id=a2f9492758b148768734fafb039e58db, updated_at=2026-02-23T09:56:11Z, 
vlan_transparent=None, network_id=c192d327-fd98-4512-92b0-8fe3acdfd05b, port_security_enabled=False, project_id=a2f9492758b148768734fafb039e58db, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1342, status=DOWN, tags=[], tenant_id=a2f9492758b148768734fafb039e58db, updated_at=2026-02-23T09:56:20Z on network c192d327-fd98-4512-92b0-8fe3acdfd05b#033[00m Feb 23 04:56:23 localhost nova_compute[280321]: 2026-02-23 09:56:23.113 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:56:23 localhost nova_compute[280321]: 2026-02-23 09:56:23.114 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:56:23 localhost nova_compute[280321]: 2026-02-23 09:56:23.114 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:56:23 localhost nova_compute[280321]: 2026-02-23 09:56:23.128 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:56:23 localhost nova_compute[280321]: 2026-02-23 09:56:23.128 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:56:23 localhost nova_compute[280321]: 2026-02-23 09:56:23.129 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:56:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v203: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 682 B/s wr, 16 op/s Feb 23 04:56:23 localhost dnsmasq[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/addn_hosts - 1 addresses Feb 23 04:56:23 localhost dnsmasq-dhcp[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/host Feb 23 04:56:23 localhost dnsmasq-dhcp[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/opts Feb 23 04:56:23 localhost podman[312885]: 2026-02-23 09:56:23.295484809 +0000 UTC m=+0.068186215 container kill f1aad0d5db3105f96c351aa497e7d6c282bba5fb0c8ff640e84154316ed07aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c192d327-fd98-4512-92b0-8fe3acdfd05b, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true) Feb 23 04:56:23 
localhost dnsmasq[311981]: exiting on receipt of SIGTERM Feb 23 04:56:23 localhost podman[312895]: 2026-02-23 09:56:23.310939862 +0000 UTC m=+0.064327707 container kill 422fc19b44cb6738686c617399664d0c01190112406acfaaa3c7acac39dc5ef3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f53aefcd-9a58-44f1-8f89-81005c5482a8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:56:23 localhost systemd[1]: libpod-422fc19b44cb6738686c617399664d0c01190112406acfaaa3c7acac39dc5ef3.scope: Deactivated successfully. Feb 23 04:56:23 localhost podman[312939]: 2026-02-23 09:56:23.373833224 +0000 UTC m=+0.041975484 container died 422fc19b44cb6738686c617399664d0c01190112406acfaaa3c7acac39dc5ef3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f53aefcd-9a58-44f1-8f89-81005c5482a8, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0) Feb 23 04:56:23 localhost podman[312939]: 2026-02-23 09:56:23.411276149 +0000 UTC m=+0.079418409 container remove 422fc19b44cb6738686c617399664d0c01190112406acfaaa3c7acac39dc5ef3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f53aefcd-9a58-44f1-8f89-81005c5482a8, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_managed=true) Feb 23 04:56:23 localhost systemd[1]: libpod-conmon-422fc19b44cb6738686c617399664d0c01190112406acfaaa3c7acac39dc5ef3.scope: Deactivated successfully. Feb 23 04:56:23 localhost nova_compute[280321]: 2026-02-23 09:56:23.434 280325 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 23 04:56:23 localhost nova_compute[280321]: 2026-02-23 09:56:23.435 280325 INFO nova.compute.manager [-] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] VM Stopped (Lifecycle Event)#033[00m Feb 23 04:56:23 localhost nova_compute[280321]: 2026-02-23 09:56:23.455 280325 DEBUG nova.compute.manager [None req-18c9c2fc-9034-46be-b866-5c106aca9760 - - - - - -] [instance: 66c5eac8-f6f4-40ae-b09f-54e200c103b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 23 04:56:23 localhost podman[312950]: 2026-02-23 09:56:23.456013616 +0000 UTC m=+0.099805651 container kill dd9c567bfa44d15894d7486345e640249e7c59cc4149697a4644719e79f92ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c264690-f25c-4167-b007-0eb8e2043cc5, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 04:56:23 localhost dnsmasq[311980]: read /var/lib/neutron/dhcp/0c264690-f25c-4167-b007-0eb8e2043cc5/addn_hosts - 0 addresses Feb 23 04:56:23 localhost dnsmasq-dhcp[311980]: read /var/lib/neutron/dhcp/0c264690-f25c-4167-b007-0eb8e2043cc5/host Feb 23 04:56:23 localhost 
dnsmasq-dhcp[311980]: read /var/lib/neutron/dhcp/0c264690-f25c-4167-b007-0eb8e2043cc5/opts Feb 23 04:56:23 localhost nova_compute[280321]: 2026-02-23 09:56:23.470 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:23 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:23.548 263679 INFO neutron.agent.dhcp.agent [None req-3cd372ca-659a-4539-be30-84653bd488a5 - - - - - -] DHCP configuration for ports {'f1f868ea-4c0f-4e53-a681-5799d8314d7d'} is completed#033[00m Feb 23 04:56:23 localhost ovn_controller[155966]: 2026-02-23T09:56:23Z|00177|binding|INFO|Releasing lport c7e823d0-1eae-42f3-95f9-6f9d3dd8de3e from this chassis (sb_readonly=0) Feb 23 04:56:23 localhost kernel: device tapc7e823d0-1e left promiscuous mode Feb 23 04:56:23 localhost ovn_controller[155966]: 2026-02-23T09:56:23Z|00178|binding|INFO|Setting lport c7e823d0-1eae-42f3-95f9-6f9d3dd8de3e down in Southbound Feb 23 04:56:23 localhost nova_compute[280321]: 2026-02-23 09:56:23.579 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:23 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:23.586 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-0c264690-f25c-4167-b007-0eb8e2043cc5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c264690-f25c-4167-b007-0eb8e2043cc5', 
'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aef0e3b7cdf44eda8494e7e5c41be9f9', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc5c7dd9-22ae-405c-8a3b-bb7a785ed78c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c7e823d0-1eae-42f3-95f9-6f9d3dd8de3e) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:23 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:23.587 161842 INFO neutron.agent.ovn.metadata.agent [-] Port c7e823d0-1eae-42f3-95f9-6f9d3dd8de3e in datapath 0c264690-f25c-4167-b007-0eb8e2043cc5 unbound from our chassis#033[00m Feb 23 04:56:23 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:23.590 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0c264690-f25c-4167-b007-0eb8e2043cc5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:56:23 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:23.591 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[32dd2a06-9495-4f31-ad65-c4a71eb9943a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:23 localhost nova_compute[280321]: 2026-02-23 09:56:23.598 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:23 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:23.652 263679 INFO neutron.agent.dhcp.agent [None req-1672d8d6-14b6-46aa-992e-3dd50f5ef623 - - - - - -] Network not present, action: clean_devices, 
action_kwargs: {}#033[00m Feb 23 04:56:23 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:23.653 263679 INFO neutron.agent.dhcp.agent [None req-1672d8d6-14b6-46aa-992e-3dd50f5ef623 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:23 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:23.832 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:23 localhost systemd[1]: var-lib-containers-storage-overlay-a43c8c561664ee2f6afaf6d6635d6b8972130ff641b5f898a2b2bf762675dc45-merged.mount: Deactivated successfully. Feb 23 04:56:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-422fc19b44cb6738686c617399664d0c01190112406acfaaa3c7acac39dc5ef3-userdata-shm.mount: Deactivated successfully. Feb 23 04:56:23 localhost systemd[1]: run-netns-qdhcp\x2df53aefcd\x2d9a58\x2d44f1\x2d8f89\x2d81005c5482a8.mount: Deactivated successfully. Feb 23 04:56:23 localhost nova_compute[280321]: 2026-02-23 09:56:23.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:56:23 localhost nova_compute[280321]: 2026-02-23 09:56:23.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:56:23 localhost nova_compute[280321]: 2026-02-23 09:56:23.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:56:24 localhost nova_compute[280321]: 2026-02-23 09:56:24.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:56:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v204: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:56:25 localhost dnsmasq[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/addn_hosts - 0 addresses Feb 23 04:56:25 localhost podman[313010]: 2026-02-23 09:56:25.181997287 +0000 UTC m=+0.062012046 container kill d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 04:56:25 localhost dnsmasq-dhcp[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/host Feb 23 04:56:25 localhost dnsmasq-dhcp[309350]: read /var/lib/neutron/dhcp/488344bb-b2b1-4b3f-933b-1a9bfdff1d5c/opts Feb 23 04:56:25 localhost systemd[1]: tmp-crun.pc1kNn.mount: Deactivated successfully. 
Feb 23 04:56:25 localhost ovn_controller[155966]: 2026-02-23T09:56:25Z|00179|ovn_bfd|INFO|Disabled BFD on interface ovn-5b0126-0 Feb 23 04:56:25 localhost ovn_controller[155966]: 2026-02-23T09:56:25Z|00180|ovn_bfd|INFO|Disabled BFD on interface ovn-585d62-0 Feb 23 04:56:25 localhost ovn_controller[155966]: 2026-02-23T09:56:25Z|00181|ovn_bfd|INFO|Disabled BFD on interface ovn-b9c72d-0 Feb 23 04:56:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:56:25 localhost nova_compute[280321]: 2026-02-23 09:56:25.423 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:25 localhost nova_compute[280321]: 2026-02-23 09:56:25.479 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:25 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:25.537 2 INFO neutron.agent.securitygroups_rpc [None req-7a66e0e4-687a-49c0-b7a3-b39df7d3f4b0 70f605e811404c4bb9fe49c02ce24bf3 a2f9492758b148768734fafb039e58db - - default default] Security group member updated ['a4d30edc-cb55-4200-8dd2-93ea986a3cd5']#033[00m Feb 23 04:56:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:25.598 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:56:24Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=24340b14-1fde-4926-b472-6f08e18d809d, ip_allocation=immediate, mac_address=fa:16:3e:fb:a3:3a, name=tempest-FloatingIPAdminTestJSON-683732633, network=admin_state_up=True, availability_zone_hints=[], 
availability_zones=[], created_at=2026-02-23T09:56:08Z, description=, dns_domain=, id=c192d327-fd98-4512-92b0-8fe3acdfd05b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-262096085, port_security_enabled=True, project_id=a2f9492758b148768734fafb039e58db, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47611, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1306, status=ACTIVE, subnets=['8456be2d-f237-4c3f-ba6f-3fc45562c792'], tags=[], tenant_id=a2f9492758b148768734fafb039e58db, updated_at=2026-02-23T09:56:11Z, vlan_transparent=None, network_id=c192d327-fd98-4512-92b0-8fe3acdfd05b, port_security_enabled=True, project_id=a2f9492758b148768734fafb039e58db, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a4d30edc-cb55-4200-8dd2-93ea986a3cd5'], standard_attr_id=1354, status=DOWN, tags=[], tenant_id=a2f9492758b148768734fafb039e58db, updated_at=2026-02-23T09:56:25Z on network c192d327-fd98-4512-92b0-8fe3acdfd05b#033[00m Feb 23 04:56:25 localhost nova_compute[280321]: 2026-02-23 09:56:25.610 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:25 localhost ovn_controller[155966]: 2026-02-23T09:56:25Z|00182|binding|INFO|Releasing lport 8c0a697f-2286-435e-8ce7-d3f6218fc156 from this chassis (sb_readonly=0) Feb 23 04:56:25 localhost kernel: device tap8c0a697f-22 left promiscuous mode Feb 23 04:56:25 localhost ovn_controller[155966]: 2026-02-23T09:56:25Z|00183|binding|INFO|Setting lport 8c0a697f-2286-435e-8ce7-d3f6218fc156 down in Southbound Feb 23 04:56:25 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:25.619 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', 
conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ac6a6009ea84eb99f60bd242e459002', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a82ecfbd-c671-4216-ac11-086490c80ba6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8c0a697f-2286-435e-8ce7-d3f6218fc156) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:25 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:25.621 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 8c0a697f-2286-435e-8ce7-d3f6218fc156 in datapath 488344bb-b2b1-4b3f-933b-1a9bfdff1d5c unbound from our chassis#033[00m Feb 23 04:56:25 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:25.625 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:56:25 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:25.626 306186 DEBUG oslo.privsep.daemon [-] 
privsep: reply[9e8664df-3e9a-4e69-a1c0-005392766eae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:25 localhost nova_compute[280321]: 2026-02-23 09:56:25.634 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:25 localhost dnsmasq[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/addn_hosts - 2 addresses Feb 23 04:56:25 localhost podman[313049]: 2026-02-23 09:56:25.80166062 +0000 UTC m=+0.054820847 container kill f1aad0d5db3105f96c351aa497e7d6c282bba5fb0c8ff640e84154316ed07aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c192d327-fd98-4512-92b0-8fe3acdfd05b, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:56:25 localhost dnsmasq-dhcp[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/host Feb 23 04:56:25 localhost dnsmasq-dhcp[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/opts Feb 23 04:56:26 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:26.014 263679 INFO neutron.agent.dhcp.agent [None req-806f87ab-8004-4240-a781-1a81888ba610 - - - - - -] DHCP configuration for ports {'24340b14-1fde-4926-b472-6f08e18d809d'} is completed#033[00m Feb 23 04:56:26 localhost dnsmasq[311980]: exiting on receipt of SIGTERM Feb 23 04:56:26 localhost podman[313086]: 2026-02-23 09:56:26.683150085 +0000 UTC m=+0.054652071 container kill dd9c567bfa44d15894d7486345e640249e7c59cc4149697a4644719e79f92ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-0c264690-f25c-4167-b007-0eb8e2043cc5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:56:26 localhost systemd[1]: libpod-dd9c567bfa44d15894d7486345e640249e7c59cc4149697a4644719e79f92ceb.scope: Deactivated successfully. Feb 23 04:56:26 localhost podman[313100]: 2026-02-23 09:56:26.747181423 +0000 UTC m=+0.046294706 container died dd9c567bfa44d15894d7486345e640249e7c59cc4149697a4644719e79f92ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c264690-f25c-4167-b007-0eb8e2043cc5, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:56:26 localhost systemd[1]: tmp-crun.A1Ywsa.mount: Deactivated successfully. 
Feb 23 04:56:26 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:26.792 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:26 localhost podman[313100]: 2026-02-23 09:56:26.850096399 +0000 UTC m=+0.149209662 container remove dd9c567bfa44d15894d7486345e640249e7c59cc4149697a4644719e79f92ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c264690-f25c-4167-b007-0eb8e2043cc5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:56:26 localhost systemd[1]: libpod-conmon-dd9c567bfa44d15894d7486345e640249e7c59cc4149697a4644719e79f92ceb.scope: Deactivated successfully. Feb 23 04:56:26 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:26.870 263679 INFO neutron.agent.dhcp.agent [None req-db748b25-719f-428d-8afd-b1ee51e393ea - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:27 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:27.025 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v205: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:56:27 localhost nova_compute[280321]: 2026-02-23 09:56:27.634 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:27 localhost systemd[1]: var-lib-containers-storage-overlay-a76128798d0fcab66800dc16b733f092e9a9b66d1dd66731b55abb24037b906f-merged.mount: Deactivated successfully. 
Feb 23 04:56:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dd9c567bfa44d15894d7486345e640249e7c59cc4149697a4644719e79f92ceb-userdata-shm.mount: Deactivated successfully. Feb 23 04:56:27 localhost systemd[1]: run-netns-qdhcp\x2d0c264690\x2df25c\x2d4167\x2db007\x2d0eb8e2043cc5.mount: Deactivated successfully. Feb 23 04:56:27 localhost nova_compute[280321]: 2026-02-23 09:56:27.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:56:27 localhost sshd[313125]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:56:28 localhost nova_compute[280321]: 2026-02-23 09:56:28.472 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:28.787 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:09:8b 10.100.0.19 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-f6f90c3e-e9fc-4b4d-8000-6715492c6006', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6f90c3e-e9fc-4b4d-8000-6715492c6006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8532226521ac43ca82723a0b71168e03', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': 
'', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85b3f5b6-1b29-412a-9c40-de284e163599, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b34126a6-b855-4d0f-af32-431b42ec89f3) old=Port_Binding(mac=['fa:16:3e:23:09:8b 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-f6f90c3e-e9fc-4b4d-8000-6715492c6006', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6f90c3e-e9fc-4b4d-8000-6715492c6006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8532226521ac43ca82723a0b71168e03', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:28.787 161842 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b34126a6-b855-4d0f-af32-431b42ec89f3 in datapath f6f90c3e-e9fc-4b4d-8000-6715492c6006 updated#033[00m Feb 23 04:56:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:28.789 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6f90c3e-e9fc-4b4d-8000-6715492c6006, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:56:28 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:28.790 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[178d4a26-5575-40b4-8b0b-26d3261f897e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v206: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 
04:56:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:56:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:56:30 localhost podman[313127]: 2026-02-23 09:56:30.016599814 +0000 UTC m=+0.084296228 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:56:30 localhost podman[313127]: 2026-02-23 09:56:30.054939076 +0000 UTC m=+0.122635450 container exec_died 
4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:56:30 localhost systemd[1]: tmp-crun.ekg6lF.mount: Deactivated successfully. Feb 23 04:56:30 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:56:30 localhost podman[313128]: 2026-02-23 09:56:30.073607917 +0000 UTC m=+0.138397052 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, distribution-scope=public, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, version=9.7, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 23 04:56:30 localhost podman[313128]: 2026-02-23 09:56:30.088816092 +0000 UTC m=+0.153605267 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, config_id=openstack_network_exporter, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-minimal-container, release=1770267347, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 23 04:56:30 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:56:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:56:30 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:30.910 2 INFO neutron.agent.securitygroups_rpc [None req-51f4eed0-aded-49bd-977f-d8680aeec69e 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:56:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v207: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:56:31 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:31.239 2 INFO neutron.agent.securitygroups_rpc [None req-2890ed0d-a00f-4342-b403-59e82c71dfe3 d6a332b0f2e446a68dc6c19280644090 2207de28dcd245d2b198a56e6161001a - - default default] Security group member updated ['ba10f066-a353-4b6a-98b8-dab53422ee14']#033[00m Feb 23 04:56:31 localhost openstack_network_exporter[243519]: 
ERROR 09:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:56:31 localhost openstack_network_exporter[243519]: Feb 23 04:56:31 localhost openstack_network_exporter[243519]: ERROR 09:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:56:31 localhost openstack_network_exporter[243519]: Feb 23 04:56:32 localhost nova_compute[280321]: 2026-02-23 09:56:32.665 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:32 localhost podman[313186]: 2026-02-23 09:56:32.683202858 +0000 UTC m=+0.088056702 container kill d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:56:32 localhost dnsmasq[309350]: exiting on receipt of SIGTERM Feb 23 04:56:32 localhost systemd[1]: libpod-d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e.scope: Deactivated successfully. 
Feb 23 04:56:32 localhost podman[313202]: 2026-02-23 09:56:32.755035414 +0000 UTC m=+0.053753764 container died d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 04:56:32 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e-userdata-shm.mount: Deactivated successfully. Feb 23 04:56:32 localhost podman[313202]: 2026-02-23 09:56:32.786837446 +0000 UTC m=+0.085555746 container cleanup d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:56:32 localhost systemd[1]: libpod-conmon-d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e.scope: Deactivated successfully. 
Feb 23 04:56:32 localhost podman[313201]: 2026-02-23 09:56:32.83308344 +0000 UTC m=+0.130004926 container remove d1d4246e11478c7566d6b9826d9550f987a6b90db0597ca1e79edd6eb122d69e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-488344bb-b2b1-4b3f-933b-1a9bfdff1d5c, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 23 04:56:32 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:32.981 2 INFO neutron.agent.securitygroups_rpc [None req-5d3e7d76-e87c-4157-8484-7f31d3f7ba7b 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:56:32 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:32.987 2 INFO neutron.agent.securitygroups_rpc [None req-f8235bf5-0ea1-4901-953e-20a5808b9d67 70f605e811404c4bb9fe49c02ce24bf3 a2f9492758b148768734fafb039e58db - - default default] Security group member updated ['a4d30edc-cb55-4200-8dd2-93ea986a3cd5']#033[00m Feb 23 04:56:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v208: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:56:33 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:33.229 263679 INFO neutron.agent.dhcp.agent [None req-6e5d2b43-c0d9-410f-bca8-eb7ae75b3a1d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:33 localhost dnsmasq[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/addn_hosts - 1 addresses Feb 23 04:56:33 localhost dnsmasq-dhcp[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/host Feb 23 
04:56:33 localhost dnsmasq-dhcp[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/opts Feb 23 04:56:33 localhost podman[313245]: 2026-02-23 09:56:33.263775345 +0000 UTC m=+0.058507209 container kill f1aad0d5db3105f96c351aa497e7d6c282bba5fb0c8ff640e84154316ed07aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c192d327-fd98-4512-92b0-8fe3acdfd05b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:56:33 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:33.402 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:33 localhost nova_compute[280321]: 2026-02-23 09:56:33.474 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:33 localhost systemd[1]: var-lib-containers-storage-overlay-1b291ffb6bf0f9d41fbe4e0dd045821d5154d8cf2dbc3a8ea13d60b93da0259d-merged.mount: Deactivated successfully. Feb 23 04:56:33 localhost systemd[1]: run-netns-qdhcp\x2d488344bb\x2db2b1\x2d4b3f\x2d933b\x2d1a9bfdff1d5c.mount: Deactivated successfully. 
Feb 23 04:56:33 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:33.789 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:34 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:34.327 2 INFO neutron.agent.securitygroups_rpc [None req-3413d0d6-8a0f-497e-801e-d0383982e452 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:56:34 localhost nova_compute[280321]: 2026-02-23 09:56:34.366 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:34 localhost dnsmasq[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/addn_hosts - 0 addresses Feb 23 04:56:34 localhost dnsmasq-dhcp[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/host Feb 23 04:56:34 localhost podman[313283]: 2026-02-23 09:56:34.419613167 +0000 UTC m=+0.061198002 container kill f1aad0d5db3105f96c351aa497e7d6c282bba5fb0c8ff640e84154316ed07aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c192d327-fd98-4512-92b0-8fe3acdfd05b, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:56:34 localhost dnsmasq-dhcp[312459]: read /var/lib/neutron/dhcp/c192d327-fd98-4512-92b0-8fe3acdfd05b/opts Feb 23 04:56:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:56:34 localhost podman[313299]: 2026-02-23 09:56:34.543099572 +0000 UTC m=+0.086341441 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0) Feb 23 04:56:34 localhost podman[313299]: 2026-02-23 09:56:34.589970365 +0000 UTC m=+0.133212284 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260216) Feb 23 04:56:34 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:56:34 localhost kernel: device tap7d543d78-81 left promiscuous mode Feb 23 04:56:34 localhost ovn_controller[155966]: 2026-02-23T09:56:34Z|00184|binding|INFO|Releasing lport 7d543d78-81f8-4a53-a1b1-65eb280252d1 from this chassis (sb_readonly=0) Feb 23 04:56:34 localhost nova_compute[280321]: 2026-02-23 09:56:34.796 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:34 localhost ovn_controller[155966]: 2026-02-23T09:56:34Z|00185|binding|INFO|Setting lport 7d543d78-81f8-4a53-a1b1-65eb280252d1 down in Southbound Feb 23 04:56:34 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:34.805 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-c192d327-fd98-4512-92b0-8fe3acdfd05b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c192d327-fd98-4512-92b0-8fe3acdfd05b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2f9492758b148768734fafb039e58db', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4522ddbd-f343-4eb2-9faa-450e0e2e5c4b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7d543d78-81f8-4a53-a1b1-65eb280252d1) 
old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:34 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:34.808 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 7d543d78-81f8-4a53-a1b1-65eb280252d1 in datapath c192d327-fd98-4512-92b0-8fe3acdfd05b unbound from our chassis#033[00m Feb 23 04:56:34 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:34.811 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c192d327-fd98-4512-92b0-8fe3acdfd05b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:56:34 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:34.812 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[34e2f628-0789-4254-aebf-a1645677d76d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:34 localhost nova_compute[280321]: 2026-02-23 09:56:34.817 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:56:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:56:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v209: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:56:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:56:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:56:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 04:56:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:56:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:56:35 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:35.443 2 INFO neutron.agent.securitygroups_rpc [None req-aa725386-7116-408a-a619-1f9f73c010a8 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:56:35 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:35.463 2 INFO neutron.agent.securitygroups_rpc [None req-ff1e8db8-3064-40a0-abdd-bddb9c09e449 d6a332b0f2e446a68dc6c19280644090 2207de28dcd245d2b198a56e6161001a - - default default] Security group member updated ['ba10f066-a353-4b6a-98b8-dab53422ee14']#033[00m Feb 23 04:56:36 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:36.232 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:36 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:36.513 2 INFO neutron.agent.securitygroups_rpc [None req-1fda4918-c68f-4c9d-a41e-c71c19c42e64 f6ed429d4dee4c5abef411f5952801ef 7760b87546484c7693fd48206e06d3f8 - - default default] Security group member updated ['1a09a3fa-6a99-44c4-8684-508fe117a320']#033[00m Feb 23 04:56:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v210: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:56:37 localhost dnsmasq[312459]: exiting on receipt of SIGTERM Feb 23 04:56:37 localhost systemd[1]: libpod-f1aad0d5db3105f96c351aa497e7d6c282bba5fb0c8ff640e84154316ed07aa2.scope: Deactivated successfully. 
Feb 23 04:56:37 localhost podman[313351]: 2026-02-23 09:56:37.197848414 +0000 UTC m=+0.060611715 container kill f1aad0d5db3105f96c351aa497e7d6c282bba5fb0c8ff640e84154316ed07aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c192d327-fd98-4512-92b0-8fe3acdfd05b, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:56:37 localhost podman[313365]: 2026-02-23 09:56:37.262276093 +0000 UTC m=+0.049543236 container died f1aad0d5db3105f96c351aa497e7d6c282bba5fb0c8ff640e84154316ed07aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c192d327-fd98-4512-92b0-8fe3acdfd05b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Feb 23 04:56:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f1aad0d5db3105f96c351aa497e7d6c282bba5fb0c8ff640e84154316ed07aa2-userdata-shm.mount: Deactivated successfully. 
Feb 23 04:56:37 localhost podman[313365]: 2026-02-23 09:56:37.288829354 +0000 UTC m=+0.076096507 container cleanup f1aad0d5db3105f96c351aa497e7d6c282bba5fb0c8ff640e84154316ed07aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c192d327-fd98-4512-92b0-8fe3acdfd05b, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:56:37 localhost systemd[1]: libpod-conmon-f1aad0d5db3105f96c351aa497e7d6c282bba5fb0c8ff640e84154316ed07aa2.scope: Deactivated successfully. Feb 23 04:56:37 localhost podman[313367]: 2026-02-23 09:56:37.345044243 +0000 UTC m=+0.126513999 container remove f1aad0d5db3105f96c351aa497e7d6c282bba5fb0c8ff640e84154316ed07aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c192d327-fd98-4512-92b0-8fe3acdfd05b, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0) Feb 23 04:56:37 localhost sshd[313393]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:56:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:37.613 263679 INFO neutron.agent.dhcp.agent [None req-ee3d797f-1b61-4676-b7e3-522d42ef6dbb - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:37 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:37.667 2 INFO neutron.agent.securitygroups_rpc [None req-0f5e541e-74ab-492f-8902-38d7300ec53b 
f6ed429d4dee4c5abef411f5952801ef 7760b87546484c7693fd48206e06d3f8 - - default default] Security group member updated ['1a09a3fa-6a99-44c4-8684-508fe117a320']#033[00m Feb 23 04:56:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:37.705 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:37 localhost nova_compute[280321]: 2026-02-23 09:56:37.708 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:37.934 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:38 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:38.120 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:38 localhost systemd[1]: var-lib-containers-storage-overlay-823dfc6087fe7122fa654e4b50271e41d11a97e11824798d8a6c41b3221df019-merged.mount: Deactivated successfully. Feb 23 04:56:38 localhost systemd[1]: run-netns-qdhcp\x2dc192d327\x2dfd98\x2d4512\x2d92b0\x2d8fe3acdfd05b.mount: Deactivated successfully. 
Feb 23 04:56:38 localhost nova_compute[280321]: 2026-02-23 09:56:38.361 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:38 localhost nova_compute[280321]: 2026-02-23 09:56:38.475 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:39 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:39.083 2 INFO neutron.agent.securitygroups_rpc [None req-0564baea-5d79-415c-93b0-64b1a1c7383f d6a332b0f2e446a68dc6c19280644090 2207de28dcd245d2b198a56e6161001a - - default default] Security group member updated ['ba10f066-a353-4b6a-98b8-dab53422ee14']#033[00m Feb 23 04:56:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v211: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:56:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:56:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v212: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:56:41 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:41.987 2 INFO neutron.agent.securitygroups_rpc [None req-00c7bab6-3ec4-412e-b4e0-8dc31e0d8362 d6a332b0f2e446a68dc6c19280644090 2207de28dcd245d2b198a56e6161001a - - default default] Security group member updated ['ba10f066-a353-4b6a-98b8-dab53422ee14']#033[00m Feb 23 04:56:42 localhost podman[313412]: 2026-02-23 09:56:42.144953978 +0000 UTC m=+0.038398355 container kill cf1a25cf3b83d4d4d500aa59f930f64185bc99d530e166b19d149671f202f5c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e426fb3b-bff4-459e-af68-5cc1456aba74, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, tcib_managed=true) Feb 23 04:56:42 localhost dnsmasq[311185]: exiting on receipt of SIGTERM Feb 23 04:56:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:56:42 localhost systemd[1]: libpod-cf1a25cf3b83d4d4d500aa59f930f64185bc99d530e166b19d149671f202f5c3.scope: Deactivated successfully. Feb 23 04:56:42 localhost systemd[1]: tmp-crun.6AWcFk.mount: Deactivated successfully. Feb 23 04:56:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:56:42 localhost podman[313428]: 2026-02-23 09:56:42.259689345 +0000 UTC m=+0.089536928 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 04:56:42 localhost podman[313426]: 2026-02-23 09:56:42.279952124 +0000 UTC m=+0.112911372 container died cf1a25cf3b83d4d4d500aa59f930f64185bc99d530e166b19d149671f202f5c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e426fb3b-bff4-459e-af68-5cc1456aba74, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Feb 23 04:56:42 localhost podman[313428]: 2026-02-23 09:56:42.290236479 +0000 UTC m=+0.120084032 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, 
io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent) Feb 23 04:56:42 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. 
Feb 23 04:56:42 localhost podman[313426]: 2026-02-23 09:56:42.351668156 +0000 UTC m=+0.184627384 container cleanup cf1a25cf3b83d4d4d500aa59f930f64185bc99d530e166b19d149671f202f5c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e426fb3b-bff4-459e-af68-5cc1456aba74, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 23 04:56:42 localhost systemd[1]: libpod-conmon-cf1a25cf3b83d4d4d500aa59f930f64185bc99d530e166b19d149671f202f5c3.scope: Deactivated successfully. Feb 23 04:56:42 localhost podman[313427]: 2026-02-23 09:56:42.371152032 +0000 UTC m=+0.201846242 container remove cf1a25cf3b83d4d4d500aa59f930f64185bc99d530e166b19d149671f202f5c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e426fb3b-bff4-459e-af68-5cc1456aba74, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:56:42 localhost nova_compute[280321]: 2026-02-23 09:56:42.409 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:42 localhost ovn_controller[155966]: 2026-02-23T09:56:42Z|00186|binding|INFO|Releasing lport 157a1334-1dbe-4d4e-8527-c2eb20baf7f3 from this chassis (sb_readonly=0) Feb 23 04:56:42 localhost kernel: device tap157a1334-1d left promiscuous mode Feb 23 04:56:42 localhost ovn_controller[155966]: 
2026-02-23T09:56:42Z|00187|binding|INFO|Setting lport 157a1334-1dbe-4d4e-8527-c2eb20baf7f3 down in Southbound Feb 23 04:56:42 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:42.428 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-e426fb3b-bff4-459e-af68-5cc1456aba74', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e426fb3b-bff4-459e-af68-5cc1456aba74', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c50f2ae0a2f4cdd8225f6794547909b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4958bc92-f5b2-48de-b547-bd27c2426718, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=157a1334-1dbe-4d4e-8527-c2eb20baf7f3) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:42 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:42.430 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 157a1334-1dbe-4d4e-8527-c2eb20baf7f3 in datapath e426fb3b-bff4-459e-af68-5cc1456aba74 unbound from our chassis#033[00m Feb 23 04:56:42 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:42.432 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no 
metadata port for network e426fb3b-bff4-459e-af68-5cc1456aba74 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:56:42 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:42.433 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4ccad3-1ad4-4fec-86ec-5409d13692e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:42 localhost nova_compute[280321]: 2026-02-23 09:56:42.434 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:42 localhost podman[313467]: 2026-02-23 09:56:42.439715568 +0000 UTC m=+0.169903465 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 
'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=ceilometer_agent_compute) Feb 23 04:56:42 localhost podman[313467]: 2026-02-23 09:56:42.453942413 +0000 UTC m=+0.184130360 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:56:42 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:56:42 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:42.485 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:42 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:42.579 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:42 localhost podman[241086]: time="2026-02-23T09:56:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:56:42 localhost nova_compute[280321]: 2026-02-23 09:56:42.712 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:42 localhost podman[241086]: @ - - [23/Feb/2026:09:56:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 04:56:42 localhost podman[241086]: @ - - [23/Feb/2026:09:56:42 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17824 "" "Go-http-client/1.1" Feb 23 04:56:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v213: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:56:43 localhost systemd[1]: var-lib-containers-storage-overlay-d4b071f659c9d8b31cd8149751ededc5cddcc39d7e659c4a8f5ba707941b0d36-merged.mount: Deactivated successfully. Feb 23 04:56:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf1a25cf3b83d4d4d500aa59f930f64185bc99d530e166b19d149671f202f5c3-userdata-shm.mount: Deactivated successfully. Feb 23 04:56:43 localhost systemd[1]: run-netns-qdhcp\x2de426fb3b\x2dbff4\x2d459e\x2daf68\x2d5cc1456aba74.mount: Deactivated successfully. Feb 23 04:56:43 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:43.162 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:43 localhost nova_compute[280321]: 2026-02-23 09:56:43.518 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:43 localhost nova_compute[280321]: 2026-02-23 09:56:43.541 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:43 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:43.855 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:56:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v214: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:56:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:56:45 localhost neutron_sriov_agent[256355]: 
2026-02-23 09:56:45.651 2 INFO neutron.agent.securitygroups_rpc [None req-f79d3420-cd8f-4700-a586-57c3375d8a5c 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:56:46 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:46.317 2 INFO neutron.agent.securitygroups_rpc [None req-23858e93-d857-4321-b908-702c515a7b92 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:56:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:56:46 localhost podman[313491]: 2026-02-23 09:56:46.997881735 +0000 UTC m=+0.073171278 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 04:56:47 localhost podman[313491]: 2026-02-23 09:56:47.011900123 +0000 UTC m=+0.087189736 container exec_died 
7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 04:56:47 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. 
Feb 23 04:56:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:56:47 localhost nova_compute[280321]: 2026-02-23 09:56:47.714 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:47 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:47.756 2 INFO neutron.agent.securitygroups_rpc [None req-531eb3a8-b1e9-4d08-88ed-f2a5323c2530 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:56:47 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:47.897 263679 INFO neutron.agent.linux.ip_lib [None req-f409a527-cb9c-48d2-92c5-af18e722bcbe - - - - - -] Device tapdb8c35d6-8c cannot be used as it has no MAC address#033[00m Feb 23 04:56:47 localhost nova_compute[280321]: 2026-02-23 09:56:47.951 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:47 localhost kernel: device tapdb8c35d6-8c entered promiscuous mode Feb 23 04:56:47 localhost NetworkManager[5987]: [1771840607.9586] manager: (tapdb8c35d6-8c): new Generic device (/org/freedesktop/NetworkManager/Devices/39) Feb 23 04:56:47 localhost ovn_controller[155966]: 2026-02-23T09:56:47Z|00188|binding|INFO|Claiming lport db8c35d6-8c1d-4a8a-b704-fb6e8f7174ae for this chassis. 
Feb 23 04:56:47 localhost ovn_controller[155966]: 2026-02-23T09:56:47Z|00189|binding|INFO|db8c35d6-8c1d-4a8a-b704-fb6e8f7174ae: Claiming unknown Feb 23 04:56:47 localhost nova_compute[280321]: 2026-02-23 09:56:47.962 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:47 localhost systemd-udevd[313524]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:56:47 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:47.969 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-a9104ac7-aef6-4be2-83d4-9bc5764a985a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a9104ac7-aef6-4be2-83d4-9bc5764a985a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2207de28dcd245d2b198a56e6161001a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21690b00-8164-4cc2-ad79-c7d916ac6aed, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=db8c35d6-8c1d-4a8a-b704-fb6e8f7174ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:47 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:47.971 161842 INFO 
neutron.agent.ovn.metadata.agent [-] Port db8c35d6-8c1d-4a8a-b704-fb6e8f7174ae in datapath a9104ac7-aef6-4be2-83d4-9bc5764a985a bound to our chassis#033[00m Feb 23 04:56:47 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:47.973 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a9104ac7-aef6-4be2-83d4-9bc5764a985a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:56:47 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:47.974 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[43452279-54f6-4c55-b7d2-09626fbe1c7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:47 localhost journal[229268]: ethtool ioctl error on tapdb8c35d6-8c: No such device Feb 23 04:56:47 localhost nova_compute[280321]: 2026-02-23 09:56:47.995 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:47 localhost journal[229268]: ethtool ioctl error on tapdb8c35d6-8c: No such device Feb 23 04:56:48 localhost ovn_controller[155966]: 2026-02-23T09:56:48Z|00190|binding|INFO|Setting lport db8c35d6-8c1d-4a8a-b704-fb6e8f7174ae ovn-installed in OVS Feb 23 04:56:48 localhost ovn_controller[155966]: 2026-02-23T09:56:48Z|00191|binding|INFO|Setting lport db8c35d6-8c1d-4a8a-b704-fb6e8f7174ae up in Southbound Feb 23 04:56:48 localhost journal[229268]: ethtool ioctl error on tapdb8c35d6-8c: No such device Feb 23 04:56:48 localhost nova_compute[280321]: 2026-02-23 09:56:48.004 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:48 localhost journal[229268]: ethtool ioctl error on tapdb8c35d6-8c: No such device Feb 23 04:56:48 localhost journal[229268]: ethtool ioctl error on 
tapdb8c35d6-8c: No such device Feb 23 04:56:48 localhost journal[229268]: ethtool ioctl error on tapdb8c35d6-8c: No such device Feb 23 04:56:48 localhost journal[229268]: ethtool ioctl error on tapdb8c35d6-8c: No such device Feb 23 04:56:48 localhost journal[229268]: ethtool ioctl error on tapdb8c35d6-8c: No such device Feb 23 04:56:48 localhost nova_compute[280321]: 2026-02-23 09:56:48.032 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:48 localhost nova_compute[280321]: 2026-02-23 09:56:48.058 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:48.313 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:56:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:48.313 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:56:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:48.313 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:56:48 localhost nova_compute[280321]: 2026-02-23 09:56:48.520 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:48 localhost podman[313595]: 
Feb 23 04:56:48 localhost podman[313595]: 2026-02-23 09:56:48.853477116 +0000 UTC m=+0.087446854 container create 4b2edf3a795580eb50b49454dfb89cad28dbedab13aed2b8206959f0785e745e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9104ac7-aef6-4be2-83d4-9bc5764a985a, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 04:56:48 localhost systemd[1]: Started libpod-conmon-4b2edf3a795580eb50b49454dfb89cad28dbedab13aed2b8206959f0785e745e.scope. Feb 23 04:56:48 localhost systemd[1]: Started libcrun container. Feb 23 04:56:48 localhost podman[313595]: 2026-02-23 09:56:48.812649319 +0000 UTC m=+0.046619097 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:56:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92cce04e15a582e6c92ea4dea81aff66b199b3a80a9cc4ee7e0b26a9b2fbe816/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:56:48 localhost podman[313595]: 2026-02-23 09:56:48.922531937 +0000 UTC m=+0.156501705 container init 4b2edf3a795580eb50b49454dfb89cad28dbedab13aed2b8206959f0785e745e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9104ac7-aef6-4be2-83d4-9bc5764a985a, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 23 04:56:48 localhost podman[313595]: 2026-02-23 
09:56:48.933831823 +0000 UTC m=+0.167801571 container start 4b2edf3a795580eb50b49454dfb89cad28dbedab13aed2b8206959f0785e745e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9104ac7-aef6-4be2-83d4-9bc5764a985a, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216) Feb 23 04:56:48 localhost dnsmasq[313613]: started, version 2.85 cachesize 150 Feb 23 04:56:48 localhost dnsmasq[313613]: DNS service limited to local subnets Feb 23 04:56:48 localhost dnsmasq[313613]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:56:48 localhost dnsmasq[313613]: warning: no upstream servers configured Feb 23 04:56:48 localhost dnsmasq-dhcp[313613]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:56:48 localhost dnsmasq[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/addn_hosts - 0 addresses Feb 23 04:56:48 localhost dnsmasq-dhcp[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/host Feb 23 04:56:48 localhost dnsmasq-dhcp[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/opts Feb 23 04:56:49 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:49.053 2 INFO neutron.agent.securitygroups_rpc [None req-7c4570b4-c3dc-480d-b91f-3f4320c93168 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:56:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v216: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 
GiB / 42 GiB avail Feb 23 04:56:49 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:49.151 263679 INFO neutron.agent.dhcp.agent [None req-874f18eb-a4ae-4d31-b3b4-c8d2a55e089a - - - - - -] DHCP configuration for ports {'83be9cbf-b14d-4862-a45f-a6243b820ad2'} is completed#033[00m Feb 23 04:56:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:49.883 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:49.883 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 04:56:49 localhost nova_compute[280321]: 2026-02-23 09:56:49.884 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:49 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:49.980 2 INFO neutron.agent.securitygroups_rpc [None req-df38b538-2f68-43d4-a4e5-1e14d933a2a9 a882fa93577048b68025b6e97dbb9195 e8630a66fd9f41828b0bd2cf93b5956f - - default default] Security group member updated ['ea9a997e-7b09-4599-8d8f-c6dc5472496e']#033[00m Feb 23 04:56:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:56:50 localhost 
neutron_sriov_agent[256355]: 2026-02-23 09:56:50.589 2 INFO neutron.agent.securitygroups_rpc [None req-8eb799ea-d360-4da4-8f9a-9902d730dcc7 a882fa93577048b68025b6e97dbb9195 e8630a66fd9f41828b0bd2cf93b5956f - - default default] Security group member updated ['ea9a997e-7b09-4599-8d8f-c6dc5472496e']#033[00m Feb 23 04:56:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v217: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:56:51 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:51.917 2 INFO neutron.agent.securitygroups_rpc [None req-b1440841-9f18-4923-88ab-a2d6a50a7349 a882fa93577048b68025b6e97dbb9195 e8630a66fd9f41828b0bd2cf93b5956f - - default default] Security group member updated ['ea9a997e-7b09-4599-8d8f-c6dc5472496e']#033[00m Feb 23 04:56:52 localhost nova_compute[280321]: 2026-02-23 09:56:52.750 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v218: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:56:53 localhost nova_compute[280321]: 2026-02-23 09:56:53.575 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:54 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:54.528 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:56:54Z, description=, device_id=328b96fa-3329-414d-86da-e1b721cc3bc5, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=71bead84-1f9a-490a-9d05-235b9838a31c, ip_allocation=immediate, 
mac_address=fa:16:3e:39:66:1f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:56:45Z, description=, dns_domain=, id=a9104ac7-aef6-4be2-83d4-9bc5764a985a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-1653868193, port_security_enabled=True, project_id=2207de28dcd245d2b198a56e6161001a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=8531, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1438, status=ACTIVE, subnets=['3f02a21b-4432-4bdd-a0e5-6938a67de1d9'], tags=[], tenant_id=2207de28dcd245d2b198a56e6161001a, updated_at=2026-02-23T09:56:47Z, vlan_transparent=None, network_id=a9104ac7-aef6-4be2-83d4-9bc5764a985a, port_security_enabled=False, project_id=2207de28dcd245d2b198a56e6161001a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1495, status=DOWN, tags=[], tenant_id=2207de28dcd245d2b198a56e6161001a, updated_at=2026-02-23T09:56:54Z on network a9104ac7-aef6-4be2-83d4-9bc5764a985a#033[00m Feb 23 04:56:54 localhost dnsmasq[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/addn_hosts - 1 addresses Feb 23 04:56:54 localhost dnsmasq-dhcp[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/host Feb 23 04:56:54 localhost dnsmasq-dhcp[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/opts Feb 23 04:56:54 localhost podman[313630]: 2026-02-23 09:56:54.76263195 +0000 UTC m=+0.060657786 container kill 4b2edf3a795580eb50b49454dfb89cad28dbedab13aed2b8206959f0785e745e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9104ac7-aef6-4be2-83d4-9bc5764a985a, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:56:55 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:55.049 263679 INFO neutron.agent.dhcp.agent [None req-1b063e5e-e31a-4ef2-9dbb-85c4011a520a - - - - - -] DHCP configuration for ports {'71bead84-1f9a-490a-9d05-235b9838a31c'} is completed#033[00m Feb 23 04:56:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v219: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:56:55 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:55.197 263679 INFO neutron.agent.linux.ip_lib [None req-db45a369-29c3-49b1-b07a-8f8bd2b1fb58 - - - - - -] Device tap36e70d58-1a cannot be used as it has no MAC address#033[00m Feb 23 04:56:55 localhost nova_compute[280321]: 2026-02-23 09:56:55.265 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:55 localhost kernel: device tap36e70d58-1a entered promiscuous mode Feb 23 04:56:55 localhost NetworkManager[5987]: [1771840615.2725] manager: (tap36e70d58-1a): new Generic device (/org/freedesktop/NetworkManager/Devices/40) Feb 23 04:56:55 localhost nova_compute[280321]: 2026-02-23 09:56:55.273 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:55 localhost systemd-udevd[313662]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:56:55 localhost ovn_controller[155966]: 2026-02-23T09:56:55Z|00192|binding|INFO|Claiming lport 36e70d58-1a66-46c7-9928-9a22c8c7a8b5 for this chassis. 
Feb 23 04:56:55 localhost ovn_controller[155966]: 2026-02-23T09:56:55Z|00193|binding|INFO|36e70d58-1a66-46c7-9928-9a22c8c7a8b5: Claiming unknown Feb 23 04:56:55 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:55.291 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-a8f6a7ea-1099-486c-bb34-40942bc5a557', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8f6a7ea-1099-486c-bb34-40942bc5a557', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8532226521ac43ca82723a0b71168e03', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d481761-ca29-4346-9a08-da5ade8fd939, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=36e70d58-1a66-46c7-9928-9a22c8c7a8b5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:55 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:55.292 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 36e70d58-1a66-46c7-9928-9a22c8c7a8b5 in datapath a8f6a7ea-1099-486c-bb34-40942bc5a557 bound to our chassis#033[00m Feb 23 04:56:55 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:55.294 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 
a8f6a7ea-1099-486c-bb34-40942bc5a557 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:56:55 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:55.296 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[3662b3cd-5842-4367-9132-fc5de1f1c648]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:56:55 localhost journal[229268]: ethtool ioctl error on tap36e70d58-1a: No such device Feb 23 04:56:55 localhost ovn_controller[155966]: 2026-02-23T09:56:55Z|00194|binding|INFO|Setting lport 36e70d58-1a66-46c7-9928-9a22c8c7a8b5 ovn-installed in OVS Feb 23 04:56:55 localhost ovn_controller[155966]: 2026-02-23T09:56:55Z|00195|binding|INFO|Setting lport 36e70d58-1a66-46c7-9928-9a22c8c7a8b5 up in Southbound Feb 23 04:56:55 localhost nova_compute[280321]: 2026-02-23 09:56:55.306 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:55 localhost journal[229268]: ethtool ioctl error on tap36e70d58-1a: No such device Feb 23 04:56:55 localhost journal[229268]: ethtool ioctl error on tap36e70d58-1a: No such device Feb 23 04:56:55 localhost journal[229268]: ethtool ioctl error on tap36e70d58-1a: No such device Feb 23 04:56:55 localhost journal[229268]: ethtool ioctl error on tap36e70d58-1a: No such device Feb 23 04:56:55 localhost journal[229268]: ethtool ioctl error on tap36e70d58-1a: No such device Feb 23 04:56:55 localhost journal[229268]: ethtool ioctl error on tap36e70d58-1a: No such device Feb 23 04:56:55 localhost journal[229268]: ethtool ioctl error on tap36e70d58-1a: No such device Feb 23 04:56:55 localhost nova_compute[280321]: 2026-02-23 09:56:55.338 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m 
Feb 23 04:56:55 localhost nova_compute[280321]: 2026-02-23 09:56:55.363 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:55 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:55.382 2 INFO neutron.agent.securitygroups_rpc [None req-7f69dcdd-1ed9-444c-bbd4-b59ea7457a69 a882fa93577048b68025b6e97dbb9195 e8630a66fd9f41828b0bd2cf93b5956f - - default default] Security group member updated ['ea9a997e-7b09-4599-8d8f-c6dc5472496e']#033[00m Feb 23 04:56:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources 
found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 
04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:56:56.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:56:56 localhost podman[313733]: Feb 23 04:56:56 localhost podman[313733]: 2026-02-23 09:56:56.178351465 +0000 UTC m=+0.088795165 container create b95c9e316cb95f9f336d10c7293a34018717428485210584b630c90c6abe1ecb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-a8f6a7ea-1099-486c-bb34-40942bc5a557, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0) Feb 23 04:56:56 localhost systemd[1]: Started libpod-conmon-b95c9e316cb95f9f336d10c7293a34018717428485210584b630c90c6abe1ecb.scope. Feb 23 04:56:56 localhost podman[313733]: 2026-02-23 09:56:56.132313878 +0000 UTC m=+0.042757628 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:56:56 localhost systemd[1]: Started libcrun container. Feb 23 04:56:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2d26ea7203ac58308326d905910963253ee77dfc6ca25028d1f995a8018cb8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:56:56 localhost podman[313733]: 2026-02-23 09:56:56.249319034 +0000 UTC m=+0.159762704 container init b95c9e316cb95f9f336d10c7293a34018717428485210584b630c90c6abe1ecb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8f6a7ea-1099-486c-bb34-40942bc5a557, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:56:56 localhost podman[313733]: 2026-02-23 09:56:56.259046322 +0000 UTC m=+0.169489992 container start b95c9e316cb95f9f336d10c7293a34018717428485210584b630c90c6abe1ecb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-a8f6a7ea-1099-486c-bb34-40942bc5a557, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260216) Feb 23 04:56:56 localhost dnsmasq[313751]: started, version 2.85 cachesize 150 Feb 23 04:56:56 localhost dnsmasq[313751]: DNS service limited to local subnets Feb 23 04:56:56 localhost dnsmasq[313751]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:56:56 localhost dnsmasq[313751]: warning: no upstream servers configured Feb 23 04:56:56 localhost dnsmasq-dhcp[313751]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:56:56 localhost dnsmasq[313751]: read /var/lib/neutron/dhcp/a8f6a7ea-1099-486c-bb34-40942bc5a557/addn_hosts - 0 addresses Feb 23 04:56:56 localhost dnsmasq-dhcp[313751]: read /var/lib/neutron/dhcp/a8f6a7ea-1099-486c-bb34-40942bc5a557/host Feb 23 04:56:56 localhost dnsmasq-dhcp[313751]: read /var/lib/neutron/dhcp/a8f6a7ea-1099-486c-bb34-40942bc5a557/opts Feb 23 04:56:56 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:56.445 263679 INFO neutron.agent.dhcp.agent [None req-fc2751a7-b287-46a5-a6ae-61699ccd0a32 - - - - - -] DHCP configuration for ports {'49334c8f-117b-441b-9a3d-9f8b4722d95e'} is completed#033[00m Feb 23 04:56:56 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:56.521 263679 INFO neutron.agent.linux.ip_lib [None req-09c95fe2-0aef-481d-b134-849effd2d379 - - - - - -] Device tap9bbc947d-cd cannot be used as it has no MAC address#033[00m Feb 23 04:56:56 localhost nova_compute[280321]: 2026-02-23 09:56:56.544 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:56 localhost kernel: device tap9bbc947d-cd entered promiscuous mode Feb 23 04:56:56 localhost systemd-udevd[313664]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:56:56 localhost NetworkManager[5987]: [1771840616.5511] manager: (tap9bbc947d-cd): new Generic device (/org/freedesktop/NetworkManager/Devices/41) Feb 23 04:56:56 localhost ovn_controller[155966]: 2026-02-23T09:56:56Z|00196|binding|INFO|Claiming lport 9bbc947d-cd82-4424-9901-5722da248b69 for this chassis. Feb 23 04:56:56 localhost ovn_controller[155966]: 2026-02-23T09:56:56Z|00197|binding|INFO|9bbc947d-cd82-4424-9901-5722da248b69: Claiming unknown Feb 23 04:56:56 localhost nova_compute[280321]: 2026-02-23 09:56:56.555 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:56 localhost ovn_controller[155966]: 2026-02-23T09:56:56Z|00198|binding|INFO|Setting lport 9bbc947d-cd82-4424-9901-5722da248b69 ovn-installed in OVS Feb 23 04:56:56 localhost nova_compute[280321]: 2026-02-23 09:56:56.586 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:56 localhost nova_compute[280321]: 2026-02-23 09:56:56.614 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:56 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:56.634 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:56:54Z, description=, device_id=328b96fa-3329-414d-86da-e1b721cc3bc5, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, 
extra_dhcp_opts=[], fixed_ips=[], id=71bead84-1f9a-490a-9d05-235b9838a31c, ip_allocation=immediate, mac_address=fa:16:3e:39:66:1f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:56:45Z, description=, dns_domain=, id=a9104ac7-aef6-4be2-83d4-9bc5764a985a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-1653868193, port_security_enabled=True, project_id=2207de28dcd245d2b198a56e6161001a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=8531, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1438, status=ACTIVE, subnets=['3f02a21b-4432-4bdd-a0e5-6938a67de1d9'], tags=[], tenant_id=2207de28dcd245d2b198a56e6161001a, updated_at=2026-02-23T09:56:47Z, vlan_transparent=None, network_id=a9104ac7-aef6-4be2-83d4-9bc5764a985a, port_security_enabled=False, project_id=2207de28dcd245d2b198a56e6161001a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1495, status=DOWN, tags=[], tenant_id=2207de28dcd245d2b198a56e6161001a, updated_at=2026-02-23T09:56:54Z on network a9104ac7-aef6-4be2-83d4-9bc5764a985a#033[00m Feb 23 04:56:56 localhost nova_compute[280321]: 2026-02-23 09:56:56.638 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:56 localhost ovn_controller[155966]: 2026-02-23T09:56:56Z|00199|binding|INFO|Setting lport 9bbc947d-cd82-4424-9901-5722da248b69 up in Southbound Feb 23 04:56:56 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:56.734 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], 
virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:ffff::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-df9a0c8e-e671-4b2e-aa14-898ba9b89fcb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df9a0c8e-e671-4b2e-aa14-898ba9b89fcb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ab81953d004010a22a72d978d31c4d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12036df7-8f39-4f64-b41c-6015f258cddb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9bbc947d-cd82-4424-9901-5722da248b69) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:56:56 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:56.736 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 9bbc947d-cd82-4424-9901-5722da248b69 in datapath df9a0c8e-e671-4b2e-aa14-898ba9b89fcb bound to our chassis#033[00m Feb 23 04:56:56 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:56.738 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network df9a0c8e-e671-4b2e-aa14-898ba9b89fcb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:56:56 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:56.739 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef180ff-2bfb-4685-92fb-22d34b3f2f11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 
23 04:56:57 localhost dnsmasq[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/addn_hosts - 1 addresses Feb 23 04:56:57 localhost dnsmasq-dhcp[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/host Feb 23 04:56:57 localhost dnsmasq-dhcp[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/opts Feb 23 04:56:57 localhost podman[313805]: 2026-02-23 09:56:57.028635497 +0000 UTC m=+0.057666424 container kill 4b2edf3a795580eb50b49454dfb89cad28dbedab13aed2b8206959f0785e745e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9104ac7-aef6-4be2-83d4-9bc5764a985a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 04:56:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v220: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:56:57 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:57.191 263679 INFO neutron.agent.dhcp.agent [None req-af3f60a7-c599-42b2-a223-2859c0831526 - - - - - -] DHCP configuration for ports {'71bead84-1f9a-490a-9d05-235b9838a31c'} is completed#033[00m Feb 23 04:56:57 localhost podman[313852]: Feb 23 04:56:57 localhost podman[313852]: 2026-02-23 09:56:57.401774694 +0000 UTC m=+0.085661310 container create 6839d6e5105b6e019fc7b446758235add5fe731ce32ec4739df13bd00cec4232 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df9a0c8e-e671-4b2e-aa14-898ba9b89fcb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 04:56:57 localhost systemd[1]: Started libpod-conmon-6839d6e5105b6e019fc7b446758235add5fe731ce32ec4739df13bd00cec4232.scope. Feb 23 04:56:57 localhost systemd[1]: tmp-crun.BzebL9.mount: Deactivated successfully. Feb 23 04:56:57 localhost systemd[1]: Started libcrun container. Feb 23 04:56:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4118eb7206b18f6ccdcc973c85dc60f9024dbc1a5859e63e84e6e36b63acb583/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:56:57 localhost podman[313852]: 2026-02-23 09:56:57.361901925 +0000 UTC m=+0.045788571 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:56:57 localhost podman[313852]: 2026-02-23 09:56:57.466417139 +0000 UTC m=+0.150303755 container init 6839d6e5105b6e019fc7b446758235add5fe731ce32ec4739df13bd00cec4232 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df9a0c8e-e671-4b2e-aa14-898ba9b89fcb, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:56:57 localhost podman[313852]: 2026-02-23 09:56:57.475551198 +0000 UTC m=+0.159437814 container start 6839d6e5105b6e019fc7b446758235add5fe731ce32ec4739df13bd00cec4232 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df9a0c8e-e671-4b2e-aa14-898ba9b89fcb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true) Feb 23 04:56:57 localhost dnsmasq[313870]: started, version 2.85 cachesize 150 Feb 23 04:56:57 localhost dnsmasq[313870]: DNS service limited to local subnets Feb 23 04:56:57 localhost dnsmasq[313870]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:56:57 localhost dnsmasq[313870]: warning: no upstream servers configured Feb 23 04:56:57 localhost dnsmasq-dhcp[313870]: DHCPv6, static leases only on 2001:db8:0:ffff::, lease time 1d Feb 23 04:56:57 localhost dnsmasq[313870]: read /var/lib/neutron/dhcp/df9a0c8e-e671-4b2e-aa14-898ba9b89fcb/addn_hosts - 0 addresses Feb 23 04:56:57 localhost dnsmasq-dhcp[313870]: read /var/lib/neutron/dhcp/df9a0c8e-e671-4b2e-aa14-898ba9b89fcb/host Feb 23 04:56:57 localhost dnsmasq-dhcp[313870]: read /var/lib/neutron/dhcp/df9a0c8e-e671-4b2e-aa14-898ba9b89fcb/opts Feb 23 04:56:57 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:57.624 263679 INFO neutron.agent.dhcp.agent [None req-a01b1702-e891-432f-8043-e4d3fc920043 - - - - - -] DHCP configuration for ports {'531da1c6-cbef-40ba-b0fe-888b0dda5c08'} is completed#033[00m Feb 23 04:56:57 localhost nova_compute[280321]: 2026-02-23 09:56:57.751 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:58 localhost neutron_sriov_agent[256355]: 2026-02-23 09:56:58.199 2 INFO neutron.agent.securitygroups_rpc [None req-527daa95-1754-43ce-9cc1-4adbaa0b338f d6a332b0f2e446a68dc6c19280644090 2207de28dcd245d2b198a56e6161001a - - default default] Security group member updated ['ba10f066-a353-4b6a-98b8-dab53422ee14']#033[00m Feb 23 04:56:58 localhost 
neutron_sriov_agent[256355]: 2026-02-23 09:56:58.217 2 INFO neutron.agent.securitygroups_rpc [None req-7c0e711d-1075-4bc5-843f-4c1a8d666461 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:56:58 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:58.251 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:56:57Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b3a0f54f-c670-4432-8ced-c809fb51d2d5, ip_allocation=immediate, mac_address=fa:16:3e:8c:ca:a6, name=tempest-FloatingIPTestJSON-1303790913, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:56:45Z, description=, dns_domain=, id=a9104ac7-aef6-4be2-83d4-9bc5764a985a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-1653868193, port_security_enabled=True, project_id=2207de28dcd245d2b198a56e6161001a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=8531, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1438, status=ACTIVE, subnets=['3f02a21b-4432-4bdd-a0e5-6938a67de1d9'], tags=[], tenant_id=2207de28dcd245d2b198a56e6161001a, updated_at=2026-02-23T09:56:47Z, vlan_transparent=None, network_id=a9104ac7-aef6-4be2-83d4-9bc5764a985a, port_security_enabled=True, project_id=2207de28dcd245d2b198a56e6161001a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['ba10f066-a353-4b6a-98b8-dab53422ee14'], standard_attr_id=1515, status=DOWN, tags=[], 
tenant_id=2207de28dcd245d2b198a56e6161001a, updated_at=2026-02-23T09:56:57Z on network a9104ac7-aef6-4be2-83d4-9bc5764a985a#033[00m Feb 23 04:56:58 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:58.299 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:56:57Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5c1e38b4-eec4-49c6-9122-f47066c66743, ip_allocation=immediate, mac_address=fa:16:3e:03:ca:96, name=tempest-PortsTestJSON-1355337599, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:56:51Z, description=, dns_domain=, id=a8f6a7ea-1099-486c-bb34-40942bc5a557, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1144460861, port_security_enabled=True, project_id=8532226521ac43ca82723a0b71168e03, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=2379, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1461, status=ACTIVE, subnets=['c1d2bf78-35aa-41e1-a1cb-d37293b3a53e'], tags=[], tenant_id=8532226521ac43ca82723a0b71168e03, updated_at=2026-02-23T09:56:53Z, vlan_transparent=None, network_id=a8f6a7ea-1099-486c-bb34-40942bc5a557, port_security_enabled=True, project_id=8532226521ac43ca82723a0b71168e03, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['709ad995-bfde-4096-a0b4-2ba30248a611'], standard_attr_id=1516, status=DOWN, tags=[], tenant_id=8532226521ac43ca82723a0b71168e03, updated_at=2026-02-23T09:56:57Z on network a8f6a7ea-1099-486c-bb34-40942bc5a557#033[00m Feb 23 04:56:58 localhost dnsmasq[313613]: read 
/var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/addn_hosts - 2 addresses Feb 23 04:56:58 localhost dnsmasq-dhcp[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/host Feb 23 04:56:58 localhost dnsmasq-dhcp[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/opts Feb 23 04:56:58 localhost podman[313901]: 2026-02-23 09:56:58.460465586 +0000 UTC m=+0.060604054 container kill 4b2edf3a795580eb50b49454dfb89cad28dbedab13aed2b8206959f0785e745e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9104ac7-aef6-4be2-83d4-9bc5764a985a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 23 04:56:58 localhost systemd[1]: tmp-crun.Qe3gHU.mount: Deactivated successfully. 
Feb 23 04:56:58 localhost podman[313910]: 2026-02-23 09:56:58.478576539 +0000 UTC m=+0.056827027 container kill b95c9e316cb95f9f336d10c7293a34018717428485210584b630c90c6abe1ecb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8f6a7ea-1099-486c-bb34-40942bc5a557, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 04:56:58 localhost dnsmasq[313751]: read /var/lib/neutron/dhcp/a8f6a7ea-1099-486c-bb34-40942bc5a557/addn_hosts - 1 addresses Feb 23 04:56:58 localhost dnsmasq-dhcp[313751]: read /var/lib/neutron/dhcp/a8f6a7ea-1099-486c-bb34-40942bc5a557/host Feb 23 04:56:58 localhost dnsmasq-dhcp[313751]: read /var/lib/neutron/dhcp/a8f6a7ea-1099-486c-bb34-40942bc5a557/opts Feb 23 04:56:58 localhost nova_compute[280321]: 2026-02-23 09:56:58.577 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:56:58 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:56:58.807 263679 INFO neutron.agent.dhcp.agent [None req-3e2ae580-a2eb-489b-864f-4f15d6dd08ef - - - - - -] DHCP configuration for ports {'b3a0f54f-c670-4432-8ced-c809fb51d2d5', '5c1e38b4-eec4-49c6-9122-f47066c66743'} is completed#033[00m Feb 23 04:56:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:56:58.885 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:56:59 
localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v221: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:57:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:57:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:57:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:57:01 localhost podman[313946]: 2026-02-23 09:57:01.011402273 +0000 UTC m=+0.079576963 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, architecture=x86_64, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container) Feb 23 04:57:01 localhost podman[313946]: 2026-02-23 09:57:01.022866253 +0000 UTC m=+0.091040923 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, release=1770267347, io.openshift.tags=minimal rhel9, 
container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Feb 23 04:57:01 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:57:01 localhost podman[313945]: 2026-02-23 09:57:01.105889842 +0000 UTC m=+0.177149926 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:57:01 localhost podman[313945]: 2026-02-23 09:57:01.117751524 +0000 UTC m=+0.189011608 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 04:57:01 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:57:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v222: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:57:01 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:01.837 2 INFO neutron.agent.securitygroups_rpc [None req-7a651b51-72a3-44e2-bd5a-07e4363f8263 3247f4a1ec054de78018b025a6933ab5 13ab81953d004010a22a72d978d31c4d - - default default] Security group member updated ['b31260d7-60e9-40f6-abcc-b3b02fd41e3c']#033[00m Feb 23 04:57:01 localhost openstack_network_exporter[243519]: ERROR 09:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:57:01 localhost openstack_network_exporter[243519]: Feb 23 04:57:01 localhost openstack_network_exporter[243519]: ERROR 09:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:57:01 localhost openstack_network_exporter[243519]: Feb 23 04:57:02 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:02.041 2 INFO neutron.agent.securitygroups_rpc [None req-861b86c2-eda9-4017-bb33-3609a7ddd89c d6a332b0f2e446a68dc6c19280644090 2207de28dcd245d2b198a56e6161001a - - default default] Security group member updated ['ba10f066-a353-4b6a-98b8-dab53422ee14']#033[00m Feb 23 04:57:02 localhost dnsmasq[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/addn_hosts - 1 addresses Feb 23 04:57:02 localhost dnsmasq-dhcp[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/host Feb 23 04:57:02 localhost podman[314005]: 2026-02-23 09:57:02.707318156 +0000 UTC m=+0.059536432 container kill 4b2edf3a795580eb50b49454dfb89cad28dbedab13aed2b8206959f0785e745e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9104ac7-aef6-4be2-83d4-9bc5764a985a, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, maintainer=OpenStack 
Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 23 04:57:02 localhost dnsmasq-dhcp[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/opts Feb 23 04:57:02 localhost nova_compute[280321]: 2026-02-23 09:57:02.783 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:03 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:03.104 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:56:57Z, description=, device_id=efe8ef0e-839d-46b1-8a59-4f2da768157f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5c1e38b4-eec4-49c6-9122-f47066c66743, ip_allocation=immediate, mac_address=fa:16:3e:03:ca:96, name=tempest-PortsTestJSON-1355337599, network_id=a8f6a7ea-1099-486c-bb34-40942bc5a557, port_security_enabled=True, project_id=8532226521ac43ca82723a0b71168e03, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['709ad995-bfde-4096-a0b4-2ba30248a611'], standard_attr_id=1516, status=DOWN, tags=[], tenant_id=8532226521ac43ca82723a0b71168e03, updated_at=2026-02-23T09:56:59Z on network a8f6a7ea-1099-486c-bb34-40942bc5a557#033[00m Feb 23 04:57:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v223: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:57:03 localhost dnsmasq[313751]: read /var/lib/neutron/dhcp/a8f6a7ea-1099-486c-bb34-40942bc5a557/addn_hosts - 1 addresses Feb 23 04:57:03 localhost dnsmasq-dhcp[313751]: read 
/var/lib/neutron/dhcp/a8f6a7ea-1099-486c-bb34-40942bc5a557/host Feb 23 04:57:03 localhost podman[314041]: 2026-02-23 09:57:03.295906008 +0000 UTC m=+0.055954931 container kill b95c9e316cb95f9f336d10c7293a34018717428485210584b630c90c6abe1ecb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8f6a7ea-1099-486c-bb34-40942bc5a557, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:57:03 localhost dnsmasq-dhcp[313751]: read /var/lib/neutron/dhcp/a8f6a7ea-1099-486c-bb34-40942bc5a557/opts Feb 23 04:57:03 localhost systemd[1]: tmp-crun.AOgzUw.mount: Deactivated successfully. Feb 23 04:57:03 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:03.513 263679 INFO neutron.agent.dhcp.agent [None req-698d1c38-74d3-48f0-9568-95439c6347de - - - - - -] DHCP configuration for ports {'5c1e38b4-eec4-49c6-9122-f47066c66743'} is completed#033[00m Feb 23 04:57:03 localhost nova_compute[280321]: 2026-02-23 09:57:03.578 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:57:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2669093195' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:57:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:57:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2669093195' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:57:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:57:04 localhost dnsmasq[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/addn_hosts - 0 addresses Feb 23 04:57:04 localhost podman[314076]: 2026-02-23 09:57:04.984377472 +0000 UTC m=+0.064463881 container kill 4b2edf3a795580eb50b49454dfb89cad28dbedab13aed2b8206959f0785e745e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9104ac7-aef6-4be2-83d4-9bc5764a985a, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:57:04 localhost dnsmasq-dhcp[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/host Feb 23 04:57:04 localhost dnsmasq-dhcp[313613]: read /var/lib/neutron/dhcp/a9104ac7-aef6-4be2-83d4-9bc5764a985a/opts Feb 23 04:57:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_09:57:05 Feb 23 04:57:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 04:57:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap Feb 23 04:57:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['.mgr', 'backups', 'images', 'manila_data', 'volumes', 'manila_metadata', 'vms'] Feb 23 04:57:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes Feb 23 04:57:05 localhost podman[314077]: 2026-02-23 09:57:05.054498016 +0000 UTC m=+0.130621725 container health_status 
bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 04:57:05 localhost podman[314077]: 2026-02-23 09:57:05.089925038 +0000 UTC m=+0.166048697 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:57:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:57:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:57:05 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 04:57:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:57:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:57:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v224: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:57:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 04:57:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:57:05 localhost ovn_controller[155966]: 2026-02-23T09:57:05Z|00200|binding|INFO|Releasing lport db8c35d6-8c1d-4a8a-b704-fb6e8f7174ae from this chassis (sb_readonly=0) Feb 23 04:57:05 localhost ovn_controller[155966]: 2026-02-23T09:57:05Z|00201|binding|INFO|Setting lport db8c35d6-8c1d-4a8a-b704-fb6e8f7174ae down in Southbound Feb 23 04:57:05 localhost nova_compute[280321]: 2026-02-23 09:57:05.212 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:05 localhost kernel: device tapdb8c35d6-8c left promiscuous mode Feb 23 04:57:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:05.221 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-a9104ac7-aef6-4be2-83d4-9bc5764a985a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a9104ac7-aef6-4be2-83d4-9bc5764a985a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2207de28dcd245d2b198a56e6161001a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21690b00-8164-4cc2-ad79-c7d916ac6aed, chassis=[], tunnel_key=2, 
gateway_chassis=[], requested_chassis=[], logical_port=db8c35d6-8c1d-4a8a-b704-fb6e8f7174ae) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:57:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 04:57:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:05.224 161842 INFO neutron.agent.ovn.metadata.agent [-] Port db8c35d6-8c1d-4a8a-b704-fb6e8f7174ae in datapath a9104ac7-aef6-4be2-83d4-9bc5764a985a unbound from our chassis#033[00m Feb 23 04:57:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:57:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:05.227 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a9104ac7-aef6-4be2-83d4-9bc5764a985a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:57:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:05.228 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2c963d-4efd-450d-b9b7-f22e642cc420]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:57:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 04:57:05 localhost nova_compute[280321]: 2026-02-23 09:57:05.235 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:57:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 23 04:57:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 
0.0 0 45071990784 Feb 23 04:57:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32) Feb 23 04:57:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:57:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:57:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:57:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 23 04:57:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:57:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:57:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:57:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:57:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:57:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Feb 23 04:57:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 04:57:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:57:05 localhost ceph-mgr[285904]: [rbd_support INFO root] 
load_schedules: volumes, start_after= Feb 23 04:57:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:57:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:57:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:57:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:57:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:57:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:57:05 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:05.520 2 INFO neutron.agent.securitygroups_rpc [None req-12c23e79-c442-4ba1-a515-53d2f774a574 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:57:05 localhost dnsmasq[313751]: read /var/lib/neutron/dhcp/a8f6a7ea-1099-486c-bb34-40942bc5a557/addn_hosts - 0 addresses Feb 23 04:57:05 localhost dnsmasq-dhcp[313751]: read /var/lib/neutron/dhcp/a8f6a7ea-1099-486c-bb34-40942bc5a557/host Feb 23 04:57:05 localhost podman[314138]: 2026-02-23 09:57:05.765703175 +0000 UTC m=+0.046705678 container kill b95c9e316cb95f9f336d10c7293a34018717428485210584b630c90c6abe1ecb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8f6a7ea-1099-486c-bb34-40942bc5a557, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 
04:57:05 localhost dnsmasq-dhcp[313751]: read /var/lib/neutron/dhcp/a8f6a7ea-1099-486c-bb34-40942bc5a557/opts Feb 23 04:57:06 localhost ovn_controller[155966]: 2026-02-23T09:57:06Z|00202|binding|INFO|Releasing lport 36e70d58-1a66-46c7-9928-9a22c8c7a8b5 from this chassis (sb_readonly=0) Feb 23 04:57:06 localhost ovn_controller[155966]: 2026-02-23T09:57:06Z|00203|binding|INFO|Setting lport 36e70d58-1a66-46c7-9928-9a22c8c7a8b5 down in Southbound Feb 23 04:57:06 localhost nova_compute[280321]: 2026-02-23 09:57:06.390 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:06 localhost kernel: device tap36e70d58-1a left promiscuous mode Feb 23 04:57:06 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:06.404 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-a8f6a7ea-1099-486c-bb34-40942bc5a557', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8f6a7ea-1099-486c-bb34-40942bc5a557', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8532226521ac43ca82723a0b71168e03', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d481761-ca29-4346-9a08-da5ade8fd939, chassis=[], 
tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=36e70d58-1a66-46c7-9928-9a22c8c7a8b5) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:57:06 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:06.406 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 36e70d58-1a66-46c7-9928-9a22c8c7a8b5 in datapath a8f6a7ea-1099-486c-bb34-40942bc5a557 unbound from our chassis#033[00m Feb 23 04:57:06 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:06.409 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8f6a7ea-1099-486c-bb34-40942bc5a557, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:57:06 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:06.410 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[dfe18201-5511-4def-8e1c-60f2d8c57f25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:57:06 localhost nova_compute[280321]: 2026-02-23 09:57:06.413 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v225: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:57:07 localhost dnsmasq[313613]: exiting on receipt of SIGTERM Feb 23 04:57:07 localhost podman[314178]: 2026-02-23 09:57:07.174131559 +0000 UTC m=+0.058894982 container kill 4b2edf3a795580eb50b49454dfb89cad28dbedab13aed2b8206959f0785e745e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9104ac7-aef6-4be2-83d4-9bc5764a985a, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 23 04:57:07 localhost systemd[1]: libpod-4b2edf3a795580eb50b49454dfb89cad28dbedab13aed2b8206959f0785e745e.scope: Deactivated successfully. Feb 23 04:57:07 localhost podman[314192]: 2026-02-23 09:57:07.24779511 +0000 UTC m=+0.051522035 container died 4b2edf3a795580eb50b49454dfb89cad28dbedab13aed2b8206959f0785e745e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9104ac7-aef6-4be2-83d4-9bc5764a985a, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0) Feb 23 04:57:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b2edf3a795580eb50b49454dfb89cad28dbedab13aed2b8206959f0785e745e-userdata-shm.mount: Deactivated successfully. Feb 23 04:57:07 localhost systemd[1]: var-lib-containers-storage-overlay-92cce04e15a582e6c92ea4dea81aff66b199b3a80a9cc4ee7e0b26a9b2fbe816-merged.mount: Deactivated successfully. 
Feb 23 04:57:07 localhost podman[314192]: 2026-02-23 09:57:07.346592311 +0000 UTC m=+0.150319196 container remove 4b2edf3a795580eb50b49454dfb89cad28dbedab13aed2b8206959f0785e745e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9104ac7-aef6-4be2-83d4-9bc5764a985a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 23 04:57:07 localhost systemd[1]: libpod-conmon-4b2edf3a795580eb50b49454dfb89cad28dbedab13aed2b8206959f0785e745e.scope: Deactivated successfully. Feb 23 04:57:07 localhost systemd[1]: run-netns-qdhcp\x2da9104ac7\x2daef6\x2d4be2\x2d83d4\x2d9bc5764a985a.mount: Deactivated successfully. Feb 23 04:57:07 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:07.374 263679 INFO neutron.agent.dhcp.agent [None req-72aae12d-86b0-4a2c-aaa2-477b84b6ae71 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:57:07 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:07.401 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:57:07 localhost nova_compute[280321]: 2026-02-23 09:57:07.716 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:07 localhost nova_compute[280321]: 2026-02-23 09:57:07.785 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:08 localhost nova_compute[280321]: 2026-02-23 09:57:08.587 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:08 localhost dnsmasq[313751]: exiting on receipt of SIGTERM Feb 23 04:57:08 localhost podman[314235]: 2026-02-23 09:57:08.641538736 +0000 UTC m=+0.065511804 container kill b95c9e316cb95f9f336d10c7293a34018717428485210584b630c90c6abe1ecb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8f6a7ea-1099-486c-bb34-40942bc5a557, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:57:08 localhost systemd[1]: libpod-b95c9e316cb95f9f336d10c7293a34018717428485210584b630c90c6abe1ecb.scope: Deactivated successfully. Feb 23 04:57:08 localhost podman[314247]: 2026-02-23 09:57:08.712711071 +0000 UTC m=+0.054201828 container died b95c9e316cb95f9f336d10c7293a34018717428485210584b630c90c6abe1ecb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8f6a7ea-1099-486c-bb34-40942bc5a557, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0) Feb 23 04:57:08 localhost systemd[1]: tmp-crun.SvI129.mount: Deactivated successfully. 
Feb 23 04:57:08 localhost podman[314247]: 2026-02-23 09:57:08.748350961 +0000 UTC m=+0.089841688 container cleanup b95c9e316cb95f9f336d10c7293a34018717428485210584b630c90c6abe1ecb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8f6a7ea-1099-486c-bb34-40942bc5a557, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 23 04:57:08 localhost systemd[1]: libpod-conmon-b95c9e316cb95f9f336d10c7293a34018717428485210584b630c90c6abe1ecb.scope: Deactivated successfully. Feb 23 04:57:08 localhost podman[314249]: 2026-02-23 09:57:08.798650948 +0000 UTC m=+0.133903304 container remove b95c9e316cb95f9f336d10c7293a34018717428485210584b630c90c6abe1ecb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8f6a7ea-1099-486c-bb34-40942bc5a557, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:57:08 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:08.919 2 INFO neutron.agent.securitygroups_rpc [None req-0e398338-d100-41a6-9496-902001e5b466 9a9c222b96714eb4b3e886d05c8c4dce 8aef873a00904cab867a4692ec3a78cb - - default default] Security group member updated ['58404006-bc14-42d2-ad0c-1fbb19168177']#033[00m Feb 23 04:57:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v226: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:57:09 
localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:09.193 263679 INFO neutron.agent.dhcp.agent [None req-b5be1254-6ebb-48f0-ba4a-1de648d759db - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:57:09 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:09.194 263679 INFO neutron.agent.dhcp.agent [None req-b5be1254-6ebb-48f0-ba4a-1de648d759db - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:57:09 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:09.307 2 INFO neutron.agent.securitygroups_rpc [None req-bf52f67a-8b52-4e6b-9a4f-c7fa470a3b71 3247f4a1ec054de78018b025a6933ab5 13ab81953d004010a22a72d978d31c4d - - default default] Security group member updated ['b31260d7-60e9-40f6-abcc-b3b02fd41e3c']#033[00m Feb 23 04:57:09 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:09.463 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:57:09 localhost systemd[1]: var-lib-containers-storage-overlay-df2d26ea7203ac58308326d905910963253ee77dfc6ca25028d1f995a8018cb8-merged.mount: Deactivated successfully. Feb 23 04:57:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b95c9e316cb95f9f336d10c7293a34018717428485210584b630c90c6abe1ecb-userdata-shm.mount: Deactivated successfully. Feb 23 04:57:09 localhost systemd[1]: run-netns-qdhcp\x2da8f6a7ea\x2d1099\x2d486c\x2dbb34\x2d40942bc5a557.mount: Deactivated successfully. 
Feb 23 04:57:09 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:09.688 2 INFO neutron.agent.securitygroups_rpc [None req-459134b2-4c8a-4e1c-97e8-455a751256d3 d6a332b0f2e446a68dc6c19280644090 2207de28dcd245d2b198a56e6161001a - - default default] Security group member updated ['ba10f066-a353-4b6a-98b8-dab53422ee14']#033[00m Feb 23 04:57:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:57:10 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:10.523 2 INFO neutron.agent.securitygroups_rpc [None req-fe5ed929-0971-4d54-b2a5-f195cec2efa3 9a9c222b96714eb4b3e886d05c8c4dce 8aef873a00904cab867a4692ec3a78cb - - default default] Security group member updated ['58404006-bc14-42d2-ad0c-1fbb19168177']#033[00m Feb 23 04:57:10 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:10.713 2 INFO neutron.agent.securitygroups_rpc [None req-3f245d6d-a6d1-496c-9e86-70a6eb49954a d6a332b0f2e446a68dc6c19280644090 2207de28dcd245d2b198a56e6161001a - - default default] Security group member updated ['ba10f066-a353-4b6a-98b8-dab53422ee14']#033[00m Feb 23 04:57:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v227: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:57:11 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:11.795 2 INFO neutron.agent.securitygroups_rpc [None req-a8c239ff-03aa-42a5-97c5-b84d2c82f384 9a9c222b96714eb4b3e886d05c8c4dce 8aef873a00904cab867a4692ec3a78cb - - default default] Security group member updated ['58404006-bc14-42d2-ad0c-1fbb19168177']#033[00m Feb 23 04:57:12 localhost podman[241086]: time="2026-02-23T09:57:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:57:12 localhost podman[241086]: @ - - [23/Feb/2026:09:57:12 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1" Feb 23 04:57:12 localhost podman[241086]: @ - - [23/Feb/2026:09:57:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18286 "" "Go-http-client/1.1" Feb 23 04:57:12 localhost nova_compute[280321]: 2026-02-23 09:57:12.789 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:57:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:57:13 localhost podman[314276]: 2026-02-23 09:57:13.002638917 +0000 UTC m=+0.081149492 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0) Feb 23 04:57:13 localhost podman[314277]: 2026-02-23 09:57:13.064095946 +0000 UTC m=+0.138520636 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260216, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:57:13 localhost podman[314277]: 2026-02-23 09:57:13.072869614 +0000 UTC m=+0.147294264 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 
'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true) Feb 23 04:57:13 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:57:13 localhost podman[314276]: 2026-02-23 09:57:13.08646946 +0000 UTC m=+0.164979985 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260216, config_id=ovn_metadata_agent) Feb 23 04:57:13 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. 
Feb 23 04:57:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v228: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:57:13 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:13.526 2 INFO neutron.agent.securitygroups_rpc [None req-7a9cb938-6590-4a31-87fa-171ce7ffabc7 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['feaffec7-34aa-4c16-87f3-892bafdc2b78']#033[00m Feb 23 04:57:13 localhost nova_compute[280321]: 2026-02-23 09:57:13.589 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:14 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:14.208 2 INFO neutron.agent.securitygroups_rpc [None req-d957f830-6dae-4ea3-8f5c-2c9a9324e520 9a9c222b96714eb4b3e886d05c8c4dce 8aef873a00904cab867a4692ec3a78cb - - default default] Security group member updated ['58404006-bc14-42d2-ad0c-1fbb19168177']#033[00m Feb 23 04:57:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v229: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:57:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:57:16 localhost sshd[314313]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:57:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v230: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:57:17 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:17.712 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:31:67 10.100.0.18 10.100.0.2'], port_security=[], 
type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c762e206-cc42-4a9e-b8ad-4f8da87fd30e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c762e206-cc42-4a9e-b8ad-4f8da87fd30e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8532226521ac43ca82723a0b71168e03', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1cdd9294-518a-4cd4-8e14-e309ee77be41, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a147cb7a-5506-4d6a-9946-52357210b7c0) old=Port_Binding(mac=['fa:16:3e:df:31:67 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c762e206-cc42-4a9e-b8ad-4f8da87fd30e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c762e206-cc42-4a9e-b8ad-4f8da87fd30e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8532226521ac43ca82723a0b71168e03', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:57:17 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:17.714 161842 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a147cb7a-5506-4d6a-9946-52357210b7c0 in datapath c762e206-cc42-4a9e-b8ad-4f8da87fd30e updated#033[00m Feb 23 04:57:17 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:17.717 161842 DEBUG 
neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c762e206-cc42-4a9e-b8ad-4f8da87fd30e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:57:17 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:17.718 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[b4cc513b-8ee5-4924-ba61-8938520c069d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:57:17 localhost nova_compute[280321]: 2026-02-23 09:57:17.820 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:57:18 localhost podman[314315]: 2026-02-23 09:57:18.013995427 +0000 UTC m=+0.081579104 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:57:18 localhost podman[314315]: 
2026-02-23 09:57:18.025744106 +0000 UTC m=+0.093327723 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0. 
Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:57:18.031545) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34 Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840638031631, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2580, "num_deletes": 262, "total_data_size": 3486404, "memory_usage": 3543328, "flush_reason": "Manual Compaction"} Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started Feb 23 04:57:18 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. 
Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840638042929, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 2261070, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20620, "largest_seqno": 23195, "table_properties": {"data_size": 2251713, "index_size": 5862, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 20328, "raw_average_key_size": 21, "raw_value_size": 2232616, "raw_average_value_size": 2332, "num_data_blocks": 255, "num_entries": 957, "num_filter_entries": 957, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840465, "oldest_key_time": 1771840465, "file_creation_time": 1771840638, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 11424 microseconds, and 5761 cpu microseconds. Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:57:18.042978) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 2261070 bytes OK Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:57:18.043002) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:57:18.044906) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:57:18.044928) EVENT_LOG_v1 {"time_micros": 1771840638044922, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:57:18.044951) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3475042, prev total WAL file size 3475042, number of live WAL files 2. Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:57:18.045889) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. 
'7061786F73003132303439' seq:0, type:0; will stop at (end) Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(2208KB)], [33(17MB)] Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840638045964, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 20138054, "oldest_snapshot_seqno": -1} Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 12589 keys, 16805384 bytes, temperature: kUnknown Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840638125046, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 16805384, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16732718, "index_size": 40117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31493, "raw_key_size": 336956, "raw_average_key_size": 26, "raw_value_size": 16517363, "raw_average_value_size": 1312, "num_data_blocks": 1533, "num_entries": 12589, "num_filter_entries": 12589, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; 
strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771840638, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:57:18.125361) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 16805384 bytes Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:57:18.127064) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 254.3 rd, 212.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 17.0 +0.0 blob) out(16.0 +0.0 blob), read-write-amplify(16.3) write-amplify(7.4) OK, records in: 13126, records dropped: 537 output_compression: NoCompression Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:57:18.127093) EVENT_LOG_v1 {"time_micros": 1771840638127082, "job": 18, "event": "compaction_finished", "compaction_time_micros": 79178, "compaction_time_cpu_micros": 47346, "output_level": 6, "num_output_files": 1, "total_output_size": 16805384, "num_input_records": 13126, "num_output_records": 12589, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005626465/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840638127620, "job": 18, "event": "table_file_deletion", "file_number": 35} Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840638130021, "job": 18, "event": "table_file_deletion", "file_number": 33} Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:57:18.045792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:57:18.130142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:57:18.130149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:57:18.130153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:57:18.130156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:57:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:57:18.130159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:57:18 localhost sshd[314338]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:57:18 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:18.546 2 INFO neutron.agent.securitygroups_rpc 
[None req-c9ef2d4b-d1c0-41a7-a7dd-2f93a7b3fec0 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['feaffec7-34aa-4c16-87f3-892bafdc2b78', '5909553e-06f7-4a4f-a61c-c51f18e5203a']#033[00m Feb 23 04:57:18 localhost nova_compute[280321]: 2026-02-23 09:57:18.591 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v231: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:57:19 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:19.292 2 INFO neutron.agent.securitygroups_rpc [None req-37a94c58-b8ba-4f2f-a685-f32e299cd9ba 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['5909553e-06f7-4a4f-a61c-c51f18e5203a']#033[00m Feb 23 04:57:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:57:20 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:57:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 04:57:20 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:57:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 04:57:20 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev 5153c95f-0a10-48c1-9d82-2150513d8287 (Updating node-proxy 
deployment (+3 -> 3)) Feb 23 04:57:20 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev 5153c95f-0a10-48c1-9d82-2150513d8287 (Updating node-proxy deployment (+3 -> 3)) Feb 23 04:57:20 localhost ceph-mgr[285904]: [progress INFO root] Completed event 5153c95f-0a10-48c1-9d82-2150513d8287 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 23 04:57:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 04:57:20 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 04:57:20 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:57:20 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:57:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:57:20 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 04:57:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 04:57:20 localhost nova_compute[280321]: 2026-02-23 09:57:20.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:57:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v232: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:57:21 localhost 
ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:57:21 localhost nova_compute[280321]: 2026-02-23 09:57:21.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:57:21 localhost nova_compute[280321]: 2026-02-23 09:57:21.910 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:57:21 localhost nova_compute[280321]: 2026-02-23 09:57:21.911 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:57:21 localhost nova_compute[280321]: 2026-02-23 09:57:21.911 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:57:21 localhost nova_compute[280321]: 2026-02-23 09:57:21.911 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:57:21 localhost nova_compute[280321]: 2026-02-23 09:57:21.912 280325 DEBUG 
oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:57:22 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:57:22 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1058112558' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:57:22 localhost nova_compute[280321]: 2026-02-23 09:57:22.349 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:57:22 localhost nova_compute[280321]: 2026-02-23 09:57:22.573 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:57:22 localhost nova_compute[280321]: 2026-02-23 09:57:22.576 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=11670MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:57:22 localhost nova_compute[280321]: 2026-02-23 09:57:22.576 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:57:22 localhost nova_compute[280321]: 2026-02-23 09:57:22.577 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:57:22 localhost nova_compute[280321]: 2026-02-23 09:57:22.668 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:57:22 localhost nova_compute[280321]: 2026-02-23 09:57:22.669 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:57:22 localhost nova_compute[280321]: 2026-02-23 09:57:22.706 280325 DEBUG 
oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:57:22 localhost nova_compute[280321]: 2026-02-23 09:57:22.862 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:22 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:22.886 263679 INFO neutron.agent.linux.ip_lib [None req-8493c10e-9ea0-4cae-a501-43796a79df24 - - - - - -] Device tap2d1ea4c8-80 cannot be used as it has no MAC address#033[00m Feb 23 04:57:22 localhost nova_compute[280321]: 2026-02-23 09:57:22.914 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:22 localhost kernel: device tap2d1ea4c8-80 entered promiscuous mode Feb 23 04:57:22 localhost NetworkManager[5987]: [1771840642.9229] manager: (tap2d1ea4c8-80): new Generic device (/org/freedesktop/NetworkManager/Devices/42) Feb 23 04:57:22 localhost systemd-udevd[314478]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:57:22 localhost nova_compute[280321]: 2026-02-23 09:57:22.931 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:22 localhost ovn_controller[155966]: 2026-02-23T09:57:22Z|00204|binding|INFO|Claiming lport 2d1ea4c8-8078-429c-9927-5791a7926683 for this chassis. 
Feb 23 04:57:22 localhost ovn_controller[155966]: 2026-02-23T09:57:22Z|00205|binding|INFO|2d1ea4c8-8078-429c-9927-5791a7926683: Claiming unknown Feb 23 04:57:22 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:22.956 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-07a36288-c7ac-482a-a7d6-9eab612cc697', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07a36288-c7ac-482a-a7d6-9eab612cc697', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c20cfef0bfce40e6a22d84a91482f1ac', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef6f883c-8ea0-4d4e-8a89-6a33a00b6a96, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2d1ea4c8-8078-429c-9927-5791a7926683) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:57:22 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:22.959 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 2d1ea4c8-8078-429c-9927-5791a7926683 in datapath 07a36288-c7ac-482a-a7d6-9eab612cc697 bound to our chassis#033[00m Feb 23 04:57:22 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:22.962 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 
07a36288-c7ac-482a-a7d6-9eab612cc697 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:57:22 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:22.963 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[57d58d60-2e3c-4acd-bfc3-2b65d7fb387d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:57:22 localhost journal[229268]: ethtool ioctl error on tap2d1ea4c8-80: No such device Feb 23 04:57:22 localhost ovn_controller[155966]: 2026-02-23T09:57:22Z|00206|binding|INFO|Setting lport 2d1ea4c8-8078-429c-9927-5791a7926683 ovn-installed in OVS Feb 23 04:57:22 localhost ovn_controller[155966]: 2026-02-23T09:57:22Z|00207|binding|INFO|Setting lport 2d1ea4c8-8078-429c-9927-5791a7926683 up in Southbound Feb 23 04:57:22 localhost nova_compute[280321]: 2026-02-23 09:57:22.977 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:22 localhost journal[229268]: ethtool ioctl error on tap2d1ea4c8-80: No such device Feb 23 04:57:22 localhost journal[229268]: ethtool ioctl error on tap2d1ea4c8-80: No such device Feb 23 04:57:22 localhost journal[229268]: ethtool ioctl error on tap2d1ea4c8-80: No such device Feb 23 04:57:22 localhost journal[229268]: ethtool ioctl error on tap2d1ea4c8-80: No such device Feb 23 04:57:22 localhost journal[229268]: ethtool ioctl error on tap2d1ea4c8-80: No such device Feb 23 04:57:23 localhost journal[229268]: ethtool ioctl error on tap2d1ea4c8-80: No such device Feb 23 04:57:23 localhost journal[229268]: ethtool ioctl error on tap2d1ea4c8-80: No such device Feb 23 04:57:23 localhost nova_compute[280321]: 2026-02-23 09:57:23.017 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m 
Feb 23 04:57:23 localhost nova_compute[280321]: 2026-02-23 09:57:23.044 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v233: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:57:23 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:57:23 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/356798990' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:57:23 localhost nova_compute[280321]: 2026-02-23 09:57:23.226 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:57:23 localhost nova_compute[280321]: 2026-02-23 09:57:23.232 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:57:23 localhost nova_compute[280321]: 2026-02-23 09:57:23.257 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 
'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:57:23 localhost nova_compute[280321]: 2026-02-23 09:57:23.260 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:57:23 localhost nova_compute[280321]: 2026-02-23 09:57:23.261 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:57:23 localhost nova_compute[280321]: 2026-02-23 09:57:23.592 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:23 localhost podman[314551]: Feb 23 04:57:23 localhost podman[314551]: 2026-02-23 09:57:23.845483617 +0000 UTC m=+0.093484568 container create 40debabc18ee6ec5caba76e05bc3803fd39af1727d2e70e674d0a9817ebd2b47 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07a36288-c7ac-482a-a7d6-9eab612cc697, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 23 04:57:23 localhost podman[314551]: 2026-02-23 09:57:23.796664855 +0000 UTC m=+0.044665846 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified 
Feb 23 04:57:23 localhost systemd[1]: Started libpod-conmon-40debabc18ee6ec5caba76e05bc3803fd39af1727d2e70e674d0a9817ebd2b47.scope. Feb 23 04:57:23 localhost systemd[1]: Started libcrun container. Feb 23 04:57:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75cde89064522921c7515a3a0e6cb2579c9b26c07b9f7af297067aff08dee4dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:57:23 localhost podman[314551]: 2026-02-23 09:57:23.931483296 +0000 UTC m=+0.179484267 container init 40debabc18ee6ec5caba76e05bc3803fd39af1727d2e70e674d0a9817ebd2b47 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07a36288-c7ac-482a-a7d6-9eab612cc697, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:57:23 localhost podman[314551]: 2026-02-23 09:57:23.940503992 +0000 UTC m=+0.188504953 container start 40debabc18ee6ec5caba76e05bc3803fd39af1727d2e70e674d0a9817ebd2b47 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07a36288-c7ac-482a-a7d6-9eab612cc697, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:57:23 localhost dnsmasq[314569]: started, version 2.85 cachesize 150 Feb 23 04:57:23 localhost dnsmasq[314569]: DNS service limited to local subnets Feb 23 04:57:23 localhost dnsmasq[314569]: compile time options: IPv6 GNU-getopt 
DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:57:23 localhost dnsmasq[314569]: warning: no upstream servers configured Feb 23 04:57:23 localhost dnsmasq-dhcp[314569]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:57:23 localhost dnsmasq[314569]: read /var/lib/neutron/dhcp/07a36288-c7ac-482a-a7d6-9eab612cc697/addn_hosts - 0 addresses Feb 23 04:57:23 localhost dnsmasq-dhcp[314569]: read /var/lib/neutron/dhcp/07a36288-c7ac-482a-a7d6-9eab612cc697/host Feb 23 04:57:23 localhost dnsmasq-dhcp[314569]: read /var/lib/neutron/dhcp/07a36288-c7ac-482a-a7d6-9eab612cc697/opts Feb 23 04:57:24 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:24.058 263679 INFO neutron.agent.dhcp.agent [None req-2d56e4df-fd38-4cb7-9b2b-ba3595f0f5c8 - - - - - -] DHCP configuration for ports {'a61aa874-bde8-47cd-ab73-5e7a2321040e'} is completed#033[00m Feb 23 04:57:24 localhost nova_compute[280321]: 2026-02-23 09:57:24.262 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:57:24 localhost nova_compute[280321]: 2026-02-23 09:57:24.262 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:57:24 localhost nova_compute[280321]: 2026-02-23 09:57:24.263 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:57:24 localhost nova_compute[280321]: 2026-02-23 09:57:24.284 280325 DEBUG nova.compute.manager [None 
req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:57:24 localhost nova_compute[280321]: 2026-02-23 09:57:24.284 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:57:24 localhost nova_compute[280321]: 2026-02-23 09:57:24.285 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:57:24 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:24.753 2 INFO neutron.agent.securitygroups_rpc [None req-3b70b1d6-deb1-439a-bfa9-157fec652445 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['c0f5985b-58d8-49b8-86bf-b98bcb003892']#033[00m Feb 23 04:57:24 localhost nova_compute[280321]: 2026-02-23 09:57:24.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:57:24 localhost nova_compute[280321]: 2026-02-23 09:57:24.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:57:24 localhost nova_compute[280321]: 2026-02-23 09:57:24.892 280325 DEBUG nova.compute.manager [None 
req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:57:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e128 e128: 6 total, 6 up, 6 in Feb 23 04:57:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v235: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail Feb 23 04:57:25 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:25.189 2 INFO neutron.agent.securitygroups_rpc [None req-92aad836-c088-4438-915c-a0369f05fd7b 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['c0f5985b-58d8-49b8-86bf-b98bcb003892']#033[00m Feb 23 04:57:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:57:25 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:25.803 2 INFO neutron.agent.securitygroups_rpc [None req-76b2a14f-bec0-4305-8207-ebe5d9da3f5c 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['4833eb66-b753-4947-9fd8-0b38ba04d2e6']#033[00m Feb 23 04:57:25 localhost nova_compute[280321]: 2026-02-23 09:57:25.888 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:57:25 localhost nova_compute[280321]: 2026-02-23 09:57:25.909 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:57:26 
localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:26.891 2 INFO neutron.agent.securitygroups_rpc [None req-0bf5f6c8-9298-4358-bc07-570223771cfc 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['170ab605-ad28-44f3-a2b8-68fbb6ed92e0']#033[00m Feb 23 04:57:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v236: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s Feb 23 04:57:27 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:27.735 2 INFO neutron.agent.securitygroups_rpc [None req-f7ddc684-702a-4c23-8012-3babeac19865 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['170ab605-ad28-44f3-a2b8-68fbb6ed92e0']#033[00m Feb 23 04:57:27 localhost nova_compute[280321]: 2026-02-23 09:57:27.864 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:28 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:28.076 2 INFO neutron.agent.securitygroups_rpc [None req-40990976-1223-4b33-bf88-75d4d256e3c2 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['170ab605-ad28-44f3-a2b8-68fbb6ed92e0']#033[00m Feb 23 04:57:28 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:28.309 2 INFO neutron.agent.securitygroups_rpc [None req-db0e7eff-9ed0-4b03-af25-1355357a4e27 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['170ab605-ad28-44f3-a2b8-68fbb6ed92e0']#033[00m Feb 23 04:57:28 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:28.563 2 INFO neutron.agent.securitygroups_rpc [None req-31c5f26a-a932-4879-896f-b6452f801d39 3247f4a1ec054de78018b025a6933ab5 13ab81953d004010a22a72d978d31c4d - - default default] Security group member updated 
['b31260d7-60e9-40f6-abcc-b3b02fd41e3c']#033[00m Feb 23 04:57:28 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:28.580 2 INFO neutron.agent.securitygroups_rpc [None req-6e46d629-2807-49ac-9764-efa5e97c15d9 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['170ab605-ad28-44f3-a2b8-68fbb6ed92e0']#033[00m Feb 23 04:57:28 localhost nova_compute[280321]: 2026-02-23 09:57:28.598 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:28 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:28.844 2 INFO neutron.agent.securitygroups_rpc [None req-6e38729b-cba3-4aa6-8317-eef924b8040d 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['170ab605-ad28-44f3-a2b8-68fbb6ed92e0']#033[00m Feb 23 04:57:28 localhost nova_compute[280321]: 2026-02-23 09:57:28.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:57:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v237: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s Feb 23 04:57:29 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:29.360 2 INFO neutron.agent.securitygroups_rpc [None req-08211a26-5630-4885-b1ea-4c86be58309d 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['170ab605-ad28-44f3-a2b8-68fbb6ed92e0']#033[00m Feb 23 04:57:29 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:29.447 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, 
binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:57:28Z, description=, device_id=410e2493-b221-4a83-a2ae-4a036258cf95, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c80eec0c-daad-4e5f-b0b4-37ec00a89bf0, ip_allocation=immediate, mac_address=fa:16:3e:1d:ca:ac, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:57:20Z, description=, dns_domain=, id=07a36288-c7ac-482a-a7d6-9eab612cc697, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroup264TestJSON-313045746-network, port_security_enabled=True, project_id=c20cfef0bfce40e6a22d84a91482f1ac, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=8640, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1648, status=ACTIVE, subnets=['41c07b2b-59e8-4231-aeef-d162fbe3e388'], tags=[], tenant_id=c20cfef0bfce40e6a22d84a91482f1ac, updated_at=2026-02-23T09:57:21Z, vlan_transparent=None, network_id=07a36288-c7ac-482a-a7d6-9eab612cc697, port_security_enabled=False, project_id=c20cfef0bfce40e6a22d84a91482f1ac, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1721, status=DOWN, tags=[], tenant_id=c20cfef0bfce40e6a22d84a91482f1ac, updated_at=2026-02-23T09:57:29Z on network 07a36288-c7ac-482a-a7d6-9eab612cc697#033[00m Feb 23 04:57:29 localhost dnsmasq[314569]: read /var/lib/neutron/dhcp/07a36288-c7ac-482a-a7d6-9eab612cc697/addn_hosts - 1 addresses Feb 23 04:57:29 localhost podman[314587]: 2026-02-23 09:57:29.641540153 +0000 UTC m=+0.053414984 container kill 40debabc18ee6ec5caba76e05bc3803fd39af1727d2e70e674d0a9817ebd2b47 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-07a36288-c7ac-482a-a7d6-9eab612cc697, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:57:29 localhost dnsmasq-dhcp[314569]: read /var/lib/neutron/dhcp/07a36288-c7ac-482a-a7d6-9eab612cc697/host Feb 23 04:57:29 localhost dnsmasq-dhcp[314569]: read /var/lib/neutron/dhcp/07a36288-c7ac-482a-a7d6-9eab612cc697/opts Feb 23 04:57:29 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:29.807 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:31:67 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-c762e206-cc42-4a9e-b8ad-4f8da87fd30e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c762e206-cc42-4a9e-b8ad-4f8da87fd30e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8532226521ac43ca82723a0b71168e03', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1cdd9294-518a-4cd4-8e14-e309ee77be41, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a147cb7a-5506-4d6a-9946-52357210b7c0) 
old=Port_Binding(mac=['fa:16:3e:df:31:67 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c762e206-cc42-4a9e-b8ad-4f8da87fd30e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c762e206-cc42-4a9e-b8ad-4f8da87fd30e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8532226521ac43ca82723a0b71168e03', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:57:29 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:29.810 161842 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a147cb7a-5506-4d6a-9946-52357210b7c0 in datapath c762e206-cc42-4a9e-b8ad-4f8da87fd30e updated#033[00m Feb 23 04:57:29 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:29.811 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c762e206-cc42-4a9e-b8ad-4f8da87fd30e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:57:29 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:29.812 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[51c92aff-d948-47cb-87c0-7e91d6f87d69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:57:30 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:30.205 263679 INFO neutron.agent.dhcp.agent [None req-9428af45-c2fc-4045-bb40-e14b5c6685fb - - - - - -] DHCP configuration for ports {'c80eec0c-daad-4e5f-b0b4-37ec00a89bf0'} is completed#033[00m Feb 23 04:57:30 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:30.206 2 INFO neutron.agent.securitygroups_rpc [None req-73652ffc-e47d-466c-823f-eff68fd895b7 
51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['170ab605-ad28-44f3-a2b8-68fbb6ed92e0']#033[00m Feb 23 04:57:30 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:30.207 2 INFO neutron.agent.securitygroups_rpc [None req-fa7b29ab-7d8a-4213-9062-a27a98eacca6 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['170ab605-ad28-44f3-a2b8-68fbb6ed92e0']#033[00m Feb 23 04:57:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:57:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 169 MiB data, 858 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 2.4 MiB/s wr, 24 op/s Feb 23 04:57:31 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:31.273 2 INFO neutron.agent.securitygroups_rpc [None req-9681c33b-6d83-4d64-b717-f3c8315322b0 3247f4a1ec054de78018b025a6933ab5 13ab81953d004010a22a72d978d31c4d - - default default] Security group member updated ['b31260d7-60e9-40f6-abcc-b3b02fd41e3c']#033[00m Feb 23 04:57:31 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:31.403 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:57:31 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:31.404 161842 
DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 04:57:31 localhost nova_compute[280321]: 2026-02-23 09:57:31.410 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:31 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:31.448 2 INFO neutron.agent.securitygroups_rpc [None req-e145e96f-8a3e-446a-b7a0-f3b217974e58 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['170ab605-ad28-44f3-a2b8-68fbb6ed92e0']#033[00m Feb 23 04:57:31 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:31.859 2 INFO neutron.agent.securitygroups_rpc [None req-3adfef6e-10be-4bfb-8ba9-878d58df2fe8 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['dd2fbf50-7988-4b4c-88a5-46b24a60bfee', 'b620ad4c-7f28-46c7-a322-d11687a2bc43', '4833eb66-b753-4947-9fd8-0b38ba04d2e6']#033[00m Feb 23 04:57:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:57:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:57:31 localhost openstack_network_exporter[243519]: ERROR 09:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:57:31 localhost openstack_network_exporter[243519]: Feb 23 04:57:31 localhost openstack_network_exporter[243519]: ERROR 09:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:57:31 localhost openstack_network_exporter[243519]: Feb 23 04:57:32 localhost systemd[1]: tmp-crun.jgWkhd.mount: Deactivated successfully. 
Feb 23 04:57:32 localhost podman[314609]: 2026-02-23 09:57:32.030489541 +0000 UTC m=+0.103009760 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:57:32 localhost podman[314610]: 2026-02-23 09:57:32.068327648 +0000 UTC m=+0.138409042 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vcs-type=git, 
io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2026-02-05T04:57:10Z, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, container_name=openstack_network_exporter, 
description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, vendor=Red Hat, Inc., config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 04:57:32 localhost podman[314610]: 2026-02-23 09:57:32.082708968 +0000 UTC m=+0.152790362 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, config_id=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc.) Feb 23 04:57:32 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 04:57:32 localhost podman[314609]: 2026-02-23 09:57:32.123800323 +0000 UTC m=+0.196320512 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:57:32 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:57:32 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:32.396 263679 INFO neutron.agent.linux.ip_lib [None req-d0073d6a-8fa1-45e5-bad6-86d63f380b8e - - - - - -] Device tapd0f1fced-a9 cannot be used as it has no MAC address#033[00m Feb 23 04:57:32 localhost nova_compute[280321]: 2026-02-23 09:57:32.423 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:32 localhost kernel: device tapd0f1fced-a9 entered promiscuous mode Feb 23 04:57:32 localhost NetworkManager[5987]: [1771840652.4325] manager: (tapd0f1fced-a9): new Generic device (/org/freedesktop/NetworkManager/Devices/43) Feb 23 04:57:32 localhost ovn_controller[155966]: 2026-02-23T09:57:32Z|00208|binding|INFO|Claiming lport d0f1fced-a9c1-4eee-8390-3e659cbea67e for this chassis. Feb 23 04:57:32 localhost ovn_controller[155966]: 2026-02-23T09:57:32Z|00209|binding|INFO|d0f1fced-a9c1-4eee-8390-3e659cbea67e: Claiming unknown Feb 23 04:57:32 localhost nova_compute[280321]: 2026-02-23 09:57:32.433 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:32 localhost systemd-udevd[314658]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 04:57:32 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:32.447 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-d22eb03f-011a-4956-a118-fceff94fe5e6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d22eb03f-011a-4956-a118-fceff94fe5e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aadd525d3dd402cb701922115d00291', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cd32cd9-96f1-415e-9cc6-355cd9bc5cdd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d0f1fced-a9c1-4eee-8390-3e659cbea67e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:57:32 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:32.448 161842 INFO neutron.agent.ovn.metadata.agent [-] Port d0f1fced-a9c1-4eee-8390-3e659cbea67e in datapath d22eb03f-011a-4956-a118-fceff94fe5e6 bound to our chassis#033[00m Feb 23 04:57:32 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:32.450 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Port deaaaf58-72b2-453c-acdd-9278409048b6 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 04:57:32 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:32.450 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d22eb03f-011a-4956-a118-fceff94fe5e6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:57:32 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:32.451 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[fc458558-ef52-4ecd-85ab-a9694ca5b602]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:57:32 localhost journal[229268]: ethtool ioctl error on tapd0f1fced-a9: No such device Feb 23 04:57:32 localhost ovn_controller[155966]: 2026-02-23T09:57:32Z|00210|binding|INFO|Setting lport d0f1fced-a9c1-4eee-8390-3e659cbea67e ovn-installed in OVS Feb 23 04:57:32 localhost ovn_controller[155966]: 2026-02-23T09:57:32Z|00211|binding|INFO|Setting lport d0f1fced-a9c1-4eee-8390-3e659cbea67e up in Southbound Feb 23 04:57:32 localhost journal[229268]: ethtool ioctl error on tapd0f1fced-a9: No such device Feb 23 04:57:32 localhost nova_compute[280321]: 2026-02-23 09:57:32.474 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:32 localhost journal[229268]: ethtool ioctl error on tapd0f1fced-a9: No such device Feb 23 04:57:32 localhost journal[229268]: ethtool ioctl error on tapd0f1fced-a9: No such device Feb 23 04:57:32 localhost journal[229268]: ethtool ioctl error on tapd0f1fced-a9: No such device Feb 23 04:57:32 localhost journal[229268]: ethtool ioctl error on tapd0f1fced-a9: No such device Feb 23 04:57:32 localhost journal[229268]: ethtool ioctl error on tapd0f1fced-a9: No such device Feb 23 04:57:32 localhost journal[229268]: ethtool ioctl error on tapd0f1fced-a9: No such device Feb 
23 04:57:32 localhost nova_compute[280321]: 2026-02-23 09:57:32.508 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:32 localhost nova_compute[280321]: 2026-02-23 09:57:32.532 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:32 localhost nova_compute[280321]: 2026-02-23 09:57:32.865 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v239: 177 pgs: 177 active+clean; 209 MiB data, 946 MiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 6.4 MiB/s wr, 36 op/s Feb 23 04:57:33 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:33.301 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:57:28Z, description=, device_id=410e2493-b221-4a83-a2ae-4a036258cf95, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c80eec0c-daad-4e5f-b0b4-37ec00a89bf0, ip_allocation=immediate, mac_address=fa:16:3e:1d:ca:ac, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:57:20Z, description=, dns_domain=, id=07a36288-c7ac-482a-a7d6-9eab612cc697, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroup264TestJSON-313045746-network, port_security_enabled=True, project_id=c20cfef0bfce40e6a22d84a91482f1ac, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=8640, qos_policy_id=None, revision_number=2, 
router:external=False, shared=False, standard_attr_id=1648, status=ACTIVE, subnets=['41c07b2b-59e8-4231-aeef-d162fbe3e388'], tags=[], tenant_id=c20cfef0bfce40e6a22d84a91482f1ac, updated_at=2026-02-23T09:57:21Z, vlan_transparent=None, network_id=07a36288-c7ac-482a-a7d6-9eab612cc697, port_security_enabled=False, project_id=c20cfef0bfce40e6a22d84a91482f1ac, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1721, status=DOWN, tags=[], tenant_id=c20cfef0bfce40e6a22d84a91482f1ac, updated_at=2026-02-23T09:57:29Z on network 07a36288-c7ac-482a-a7d6-9eab612cc697#033[00m Feb 23 04:57:33 localhost podman[314729]: Feb 23 04:57:33 localhost podman[314729]: 2026-02-23 09:57:33.388524715 +0000 UTC m=+0.097953006 container create a7665b157862131c705869bc1fd27e04749b7d71960f6f384c198b7bb415ec0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d22eb03f-011a-4956-a118-fceff94fe5e6, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:57:33 localhost systemd[1]: Started libpod-conmon-a7665b157862131c705869bc1fd27e04749b7d71960f6f384c198b7bb415ec0d.scope. Feb 23 04:57:33 localhost podman[314729]: 2026-02-23 09:57:33.337176924 +0000 UTC m=+0.046605225 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:57:33 localhost systemd[1]: Started libcrun container. 
Feb 23 04:57:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa94efad642d7af16ba9f62e926fa95e0060f3c44ad5b5dccb11d6a3f387ba5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:57:33 localhost podman[314729]: 2026-02-23 09:57:33.460500985 +0000 UTC m=+0.169929286 container init a7665b157862131c705869bc1fd27e04749b7d71960f6f384c198b7bb415ec0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d22eb03f-011a-4956-a118-fceff94fe5e6, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 04:57:33 localhost podman[314729]: 2026-02-23 09:57:33.469661205 +0000 UTC m=+0.179089496 container start a7665b157862131c705869bc1fd27e04749b7d71960f6f384c198b7bb415ec0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d22eb03f-011a-4956-a118-fceff94fe5e6, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:57:33 localhost dnsmasq[314768]: started, version 2.85 cachesize 150 Feb 23 04:57:33 localhost dnsmasq[314768]: DNS service limited to local subnets Feb 23 04:57:33 localhost dnsmasq[314768]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:57:33 localhost dnsmasq[314768]: warning: no upstream servers 
configured Feb 23 04:57:33 localhost dnsmasq-dhcp[314768]: DHCP, static leases only on 10.101.0.0, lease time 1d Feb 23 04:57:33 localhost dnsmasq[314768]: read /var/lib/neutron/dhcp/d22eb03f-011a-4956-a118-fceff94fe5e6/addn_hosts - 0 addresses Feb 23 04:57:33 localhost dnsmasq-dhcp[314768]: read /var/lib/neutron/dhcp/d22eb03f-011a-4956-a118-fceff94fe5e6/host Feb 23 04:57:33 localhost dnsmasq-dhcp[314768]: read /var/lib/neutron/dhcp/d22eb03f-011a-4956-a118-fceff94fe5e6/opts Feb 23 04:57:33 localhost podman[314761]: 2026-02-23 09:57:33.529449302 +0000 UTC m=+0.064228424 container kill 40debabc18ee6ec5caba76e05bc3803fd39af1727d2e70e674d0a9817ebd2b47 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07a36288-c7ac-482a-a7d6-9eab612cc697, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 23 04:57:33 localhost dnsmasq[314569]: read /var/lib/neutron/dhcp/07a36288-c7ac-482a-a7d6-9eab612cc697/addn_hosts - 1 addresses Feb 23 04:57:33 localhost dnsmasq-dhcp[314569]: read /var/lib/neutron/dhcp/07a36288-c7ac-482a-a7d6-9eab612cc697/host Feb 23 04:57:33 localhost dnsmasq-dhcp[314569]: read /var/lib/neutron/dhcp/07a36288-c7ac-482a-a7d6-9eab612cc697/opts Feb 23 04:57:33 localhost nova_compute[280321]: 2026-02-23 09:57:33.634 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:33 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:33.636 2 INFO neutron.agent.securitygroups_rpc [None req-e46f16e8-8d8a-4032-be19-52144050a32d 7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated 
['dd2fbf50-7988-4b4c-88a5-46b24a60bfee', 'b620ad4c-7f28-46c7-a322-d11687a2bc43']#033[00m Feb 23 04:57:33 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:33.695 263679 INFO neutron.agent.dhcp.agent [None req-470018fb-2635-4090-bbc6-874d24d08511 - - - - - -] DHCP configuration for ports {'ada5a88d-7046-49d6-92e5-9317f862fe0d'} is completed#033[00m Feb 23 04:57:33 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:33.843 263679 INFO neutron.agent.dhcp.agent [None req-7cdf3b68-3283-4b6d-9924-a42e49e4b2f7 - - - - - -] DHCP configuration for ports {'c80eec0c-daad-4e5f-b0b4-37ec00a89bf0'} is completed#033[00m Feb 23 04:57:33 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:33.911 2 INFO neutron.agent.securitygroups_rpc [None req-a5ef7891-4d9f-416c-9e05-7114c855bf40 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['b8e8f331-db67-4dd9-802a-05abdcea8dd4']#033[00m Feb 23 04:57:34 localhost dnsmasq[313870]: exiting on receipt of SIGTERM Feb 23 04:57:34 localhost podman[314800]: 2026-02-23 09:57:34.280246183 +0000 UTC m=+0.058609293 container kill 6839d6e5105b6e019fc7b446758235add5fe731ce32ec4739df13bd00cec4232 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df9a0c8e-e671-4b2e-aa14-898ba9b89fcb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:57:34 localhost systemd[1]: libpod-6839d6e5105b6e019fc7b446758235add5fe731ce32ec4739df13bd00cec4232.scope: Deactivated successfully. 
Feb 23 04:57:34 localhost podman[314814]: 2026-02-23 09:57:34.338003019 +0000 UTC m=+0.040382446 container died 6839d6e5105b6e019fc7b446758235add5fe731ce32ec4739df13bd00cec4232 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df9a0c8e-e671-4b2e-aa14-898ba9b89fcb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, tcib_managed=true) Feb 23 04:57:34 localhost podman[314814]: 2026-02-23 09:57:34.381852649 +0000 UTC m=+0.084232076 container remove 6839d6e5105b6e019fc7b446758235add5fe731ce32ec4739df13bd00cec4232 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df9a0c8e-e671-4b2e-aa14-898ba9b89fcb, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 23 04:57:34 localhost systemd[1]: var-lib-containers-storage-overlay-4118eb7206b18f6ccdcc973c85dc60f9024dbc1a5859e63e84e6e36b63acb583-merged.mount: Deactivated successfully. Feb 23 04:57:34 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6839d6e5105b6e019fc7b446758235add5fe731ce32ec4739df13bd00cec4232-userdata-shm.mount: Deactivated successfully. 
Feb 23 04:57:34 localhost nova_compute[280321]: 2026-02-23 09:57:34.397 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:34 localhost ovn_controller[155966]: 2026-02-23T09:57:34Z|00212|binding|INFO|Releasing lport 9bbc947d-cd82-4424-9901-5722da248b69 from this chassis (sb_readonly=0) Feb 23 04:57:34 localhost ovn_controller[155966]: 2026-02-23T09:57:34Z|00213|binding|INFO|Setting lport 9bbc947d-cd82-4424-9901-5722da248b69 down in Southbound Feb 23 04:57:34 localhost kernel: device tap9bbc947d-cd left promiscuous mode Feb 23 04:57:34 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e129 e129: 6 total, 6 up, 6 in Feb 23 04:57:34 localhost systemd[1]: libpod-conmon-6839d6e5105b6e019fc7b446758235add5fe731ce32ec4739df13bd00cec4232.scope: Deactivated successfully. Feb 23 04:57:34 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:34.406 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:ffff::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-df9a0c8e-e671-4b2e-aa14-898ba9b89fcb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df9a0c8e-e671-4b2e-aa14-898ba9b89fcb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ab81953d004010a22a72d978d31c4d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 
'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12036df7-8f39-4f64-b41c-6015f258cddb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9bbc947d-cd82-4424-9901-5722da248b69) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:57:34 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:34.409 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 9bbc947d-cd82-4424-9901-5722da248b69 in datapath df9a0c8e-e671-4b2e-aa14-898ba9b89fcb unbound from our chassis#033[00m Feb 23 04:57:34 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:34.411 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network df9a0c8e-e671-4b2e-aa14-898ba9b89fcb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:57:34 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:34.412 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[da55d864-32f6-4540-9d87-04c5f1163ade]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:57:34 localhost nova_compute[280321]: 2026-02-23 09:57:34.417 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:34 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:34.649 263679 INFO neutron.agent.dhcp.agent [None req-16b5598e-2314-4570-8923-94e7ad494228 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:57:34 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:34.651 263679 INFO neutron.agent.dhcp.agent [None req-16b5598e-2314-4570-8923-94e7ad494228 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m 
Feb 23 04:57:34 localhost systemd[1]: run-netns-qdhcp\x2ddf9a0c8e\x2de671\x2d4b2e\x2daa14\x2d898ba9b89fcb.mount: Deactivated successfully. Feb 23 04:57:34 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:34.781 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:57:34 localhost ovn_controller[155966]: 2026-02-23T09:57:34Z|00214|ovn_bfd|INFO|Enabled BFD on interface ovn-5b0126-0 Feb 23 04:57:34 localhost ovn_controller[155966]: 2026-02-23T09:57:34Z|00215|ovn_bfd|INFO|Enabled BFD on interface ovn-585d62-0 Feb 23 04:57:34 localhost ovn_controller[155966]: 2026-02-23T09:57:34Z|00216|ovn_bfd|INFO|Enabled BFD on interface ovn-b9c72d-0 Feb 23 04:57:34 localhost nova_compute[280321]: 2026-02-23 09:57:34.900 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:34 localhost nova_compute[280321]: 2026-02-23 09:57:34.904 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:34 localhost nova_compute[280321]: 2026-02-23 09:57:34.905 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:34 localhost nova_compute[280321]: 2026-02-23 09:57:34.907 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:34 localhost nova_compute[280321]: 2026-02-23 09:57:34.920 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:34 localhost nova_compute[280321]: 2026-02-23 09:57:34.943 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:34 
localhost nova_compute[280321]: 2026-02-23 09:57:34.963 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:57:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:57:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:57:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:57:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v241: 177 pgs: 177 active+clean; 209 MiB data, 946 MiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 6.4 MiB/s wr, 36 op/s Feb 23 04:57:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:57:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:57:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e130 e130: 6 total, 6 up, 6 in Feb 23 04:57:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:57:35 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:35.814 2 INFO neutron.agent.securitygroups_rpc [None req-db00dc8c-6ada-4161-9538-1913f2da78b1 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['a98db953-42d0-4f19-90d3-a50bfc8bf55e']#033[00m Feb 23 04:57:35 localhost nova_compute[280321]: 2026-02-23 09:57:35.824 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 04:57:36 localhost podman[314840]: 2026-02-23 09:57:36.006204503 +0000 UTC m=+0.080268934 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 23 04:57:36 localhost podman[314840]: 2026-02-23 09:57:36.06890136 +0000 UTC m=+0.142965801 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 04:57:36 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:57:36 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:36.291 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:57:35Z, description=, device_id=7647edfc-3361-4769-b093-9cecdc6821d1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3aac799d-48d1-4e38-ae70-278943f5c0ea, ip_allocation=immediate, mac_address=fa:16:3e:da:cd:06, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:57:23Z, description=, dns_domain=, id=d22eb03f-011a-4956-a118-fceff94fe5e6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1549621585, port_security_enabled=True, project_id=6aadd525d3dd402cb701922115d00291, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=28257, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1691, status=ACTIVE, subnets=['a50e6ca7-020b-4e8e-8486-490cf432aefb'], tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:57:29Z, vlan_transparent=None, network_id=d22eb03f-011a-4956-a118-fceff94fe5e6, port_security_enabled=False, project_id=6aadd525d3dd402cb701922115d00291, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1758, status=DOWN, tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:57:35Z on network d22eb03f-011a-4956-a118-fceff94fe5e6#033[00m Feb 23 04:57:36 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. 
Feb 23 04:57:36 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e131 e131: 6 total, 6 up, 6 in Feb 23 04:57:36 localhost nova_compute[280321]: 2026-02-23 09:57:36.473 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:36 localhost systemd[1]: tmp-crun.C599sH.mount: Deactivated successfully. Feb 23 04:57:36 localhost podman[314882]: 2026-02-23 09:57:36.503838155 +0000 UTC m=+0.072217439 container kill a7665b157862131c705869bc1fd27e04749b7d71960f6f384c198b7bb415ec0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d22eb03f-011a-4956-a118-fceff94fe5e6, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:57:36 localhost dnsmasq[314768]: read /var/lib/neutron/dhcp/d22eb03f-011a-4956-a118-fceff94fe5e6/addn_hosts - 1 addresses Feb 23 04:57:36 localhost dnsmasq-dhcp[314768]: read /var/lib/neutron/dhcp/d22eb03f-011a-4956-a118-fceff94fe5e6/host Feb 23 04:57:36 localhost dnsmasq-dhcp[314768]: read /var/lib/neutron/dhcp/d22eb03f-011a-4956-a118-fceff94fe5e6/opts Feb 23 04:57:36 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:36.723 263679 INFO neutron.agent.dhcp.agent [None req-0520036b-63ed-46a9-8ad2-120e3a794b87 - - - - - -] DHCP configuration for ports {'3aac799d-48d1-4e38-ae70-278943f5c0ea'} is completed#033[00m Feb 23 04:57:36 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:36.864 2 INFO neutron.agent.securitygroups_rpc [None req-e2d7d9c7-9cf6-4713-8a6e-85619a152752 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated 
['a98db953-42d0-4f19-90d3-a50bfc8bf55e']#033[00m Feb 23 04:57:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:37.147 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:57:35Z, description=, device_id=7647edfc-3361-4769-b093-9cecdc6821d1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3aac799d-48d1-4e38-ae70-278943f5c0ea, ip_allocation=immediate, mac_address=fa:16:3e:da:cd:06, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:57:23Z, description=, dns_domain=, id=d22eb03f-011a-4956-a118-fceff94fe5e6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1549621585, port_security_enabled=True, project_id=6aadd525d3dd402cb701922115d00291, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=28257, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1691, status=ACTIVE, subnets=['a50e6ca7-020b-4e8e-8486-490cf432aefb'], tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:57:29Z, vlan_transparent=None, network_id=d22eb03f-011a-4956-a118-fceff94fe5e6, port_security_enabled=False, project_id=6aadd525d3dd402cb701922115d00291, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1758, status=DOWN, tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:57:35Z on network d22eb03f-011a-4956-a118-fceff94fe5e6#033[00m Feb 23 04:57:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v244: 177 pgs: 177 active+clean; 145 MiB data, 890 MiB used, 41 GiB / 42 GiB avail; 83 KiB/s 
rd, 15 MiB/s wr, 116 op/s Feb 23 04:57:37 localhost dnsmasq[314768]: read /var/lib/neutron/dhcp/d22eb03f-011a-4956-a118-fceff94fe5e6/addn_hosts - 1 addresses Feb 23 04:57:37 localhost dnsmasq-dhcp[314768]: read /var/lib/neutron/dhcp/d22eb03f-011a-4956-a118-fceff94fe5e6/host Feb 23 04:57:37 localhost podman[314919]: 2026-02-23 09:57:37.347155174 +0000 UTC m=+0.044637716 container kill a7665b157862131c705869bc1fd27e04749b7d71960f6f384c198b7bb415ec0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d22eb03f-011a-4956-a118-fceff94fe5e6, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 04:57:37 localhost dnsmasq-dhcp[314768]: read /var/lib/neutron/dhcp/d22eb03f-011a-4956-a118-fceff94fe5e6/opts Feb 23 04:57:37 localhost systemd[1]: tmp-crun.H4Qh7R.mount: Deactivated successfully. 
Feb 23 04:57:37 localhost ovn_controller[155966]: 2026-02-23T09:57:37Z|00217|ovn_bfd|INFO|Disabled BFD on interface ovn-5b0126-0 Feb 23 04:57:37 localhost ovn_controller[155966]: 2026-02-23T09:57:37Z|00218|ovn_bfd|INFO|Disabled BFD on interface ovn-585d62-0 Feb 23 04:57:37 localhost ovn_controller[155966]: 2026-02-23T09:57:37Z|00219|ovn_bfd|INFO|Disabled BFD on interface ovn-b9c72d-0 Feb 23 04:57:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:37.950 263679 INFO neutron.agent.dhcp.agent [None req-769bf26a-3a19-47ba-9fe7-79d87becd273 - - - - - -] DHCP configuration for ports {'3aac799d-48d1-4e38-ae70-278943f5c0ea'} is completed#033[00m Feb 23 04:57:37 localhost nova_compute[280321]: 2026-02-23 09:57:37.985 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:37 localhost nova_compute[280321]: 2026-02-23 09:57:37.995 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:37 localhost nova_compute[280321]: 2026-02-23 09:57:37.997 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:38 localhost dnsmasq[314569]: read /var/lib/neutron/dhcp/07a36288-c7ac-482a-a7d6-9eab612cc697/addn_hosts - 0 addresses Feb 23 04:57:38 localhost podman[314957]: 2026-02-23 09:57:38.086703371 +0000 UTC m=+0.047669388 container kill 40debabc18ee6ec5caba76e05bc3803fd39af1727d2e70e674d0a9817ebd2b47 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07a36288-c7ac-482a-a7d6-9eab612cc697, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 23 04:57:38 localhost dnsmasq-dhcp[314569]: read /var/lib/neutron/dhcp/07a36288-c7ac-482a-a7d6-9eab612cc697/host Feb 23 04:57:38 localhost dnsmasq-dhcp[314569]: read /var/lib/neutron/dhcp/07a36288-c7ac-482a-a7d6-9eab612cc697/opts Feb 23 04:57:38 localhost systemd[1]: tmp-crun.aHjNeo.mount: Deactivated successfully. Feb 23 04:57:38 localhost nova_compute[280321]: 2026-02-23 09:57:38.322 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:38 localhost ovn_controller[155966]: 2026-02-23T09:57:38Z|00220|binding|INFO|Releasing lport 2d1ea4c8-8078-429c-9927-5791a7926683 from this chassis (sb_readonly=0) Feb 23 04:57:38 localhost ovn_controller[155966]: 2026-02-23T09:57:38Z|00221|binding|INFO|Setting lport 2d1ea4c8-8078-429c-9927-5791a7926683 down in Southbound Feb 23 04:57:38 localhost kernel: device tap2d1ea4c8-80 left promiscuous mode Feb 23 04:57:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:38.335 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-07a36288-c7ac-482a-a7d6-9eab612cc697', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07a36288-c7ac-482a-a7d6-9eab612cc697', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c20cfef0bfce40e6a22d84a91482f1ac', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef6f883c-8ea0-4d4e-8a89-6a33a00b6a96, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2d1ea4c8-8078-429c-9927-5791a7926683) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:57:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:38.337 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 2d1ea4c8-8078-429c-9927-5791a7926683 in datapath 07a36288-c7ac-482a-a7d6-9eab612cc697 unbound from our chassis#033[00m Feb 23 04:57:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:38.340 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 07a36288-c7ac-482a-a7d6-9eab612cc697, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:57:38 localhost nova_compute[280321]: 2026-02-23 09:57:38.568 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:38.568 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[f508c5d1-2906-40c6-acc6-6df0c42741dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:57:38 localhost nova_compute[280321]: 2026-02-23 09:57:38.636 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:39 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:39.028 2 INFO neutron.agent.securitygroups_rpc [None req-4377d041-aa51-4800-a96f-871d02773dd7 
7e2d209f03d04e6d86f4e10f7490cf37 8532226521ac43ca82723a0b71168e03 - - default default] Security group member updated ['709ad995-bfde-4096-a0b4-2ba30248a611']#033[00m Feb 23 04:57:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v245: 177 pgs: 177 active+clean; 145 MiB data, 890 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 8.0 MiB/s wr, 96 op/s Feb 23 04:57:39 localhost podman[314998]: 2026-02-23 09:57:39.363988126 +0000 UTC m=+0.029705949 container kill a7665b157862131c705869bc1fd27e04749b7d71960f6f384c198b7bb415ec0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d22eb03f-011a-4956-a118-fceff94fe5e6, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:57:39 localhost dnsmasq[314768]: read /var/lib/neutron/dhcp/d22eb03f-011a-4956-a118-fceff94fe5e6/addn_hosts - 0 addresses Feb 23 04:57:39 localhost dnsmasq-dhcp[314768]: read /var/lib/neutron/dhcp/d22eb03f-011a-4956-a118-fceff94fe5e6/host Feb 23 04:57:39 localhost dnsmasq-dhcp[314768]: read /var/lib/neutron/dhcp/d22eb03f-011a-4956-a118-fceff94fe5e6/opts Feb 23 04:57:39 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:39.386 2 INFO neutron.agent.securitygroups_rpc [None req-0a6c1cd1-12af-4dd6-b394-41e125fa511a 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['5db8e770-276e-4b00-beb9-c97310b59e62']#033[00m Feb 23 04:57:40 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:40.081 2 INFO neutron.agent.securitygroups_rpc [None req-bb9f0bb2-0d36-4f02-872c-c348f9c20cb9 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule 
updated ['5db8e770-276e-4b00-beb9-c97310b59e62']#033[00m Feb 23 04:57:40 localhost nova_compute[280321]: 2026-02-23 09:57:40.141 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:40 localhost ovn_controller[155966]: 2026-02-23T09:57:40Z|00222|binding|INFO|Releasing lport d0f1fced-a9c1-4eee-8390-3e659cbea67e from this chassis (sb_readonly=0) Feb 23 04:57:40 localhost ovn_controller[155966]: 2026-02-23T09:57:40Z|00223|binding|INFO|Setting lport d0f1fced-a9c1-4eee-8390-3e659cbea67e down in Southbound Feb 23 04:57:40 localhost kernel: device tapd0f1fced-a9 left promiscuous mode Feb 23 04:57:40 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:40.151 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-d22eb03f-011a-4956-a118-fceff94fe5e6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d22eb03f-011a-4956-a118-fceff94fe5e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aadd525d3dd402cb701922115d00291', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cd32cd9-96f1-415e-9cc6-355cd9bc5cdd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=d0f1fced-a9c1-4eee-8390-3e659cbea67e) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:57:40 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:40.152 161842 INFO neutron.agent.ovn.metadata.agent [-] Port d0f1fced-a9c1-4eee-8390-3e659cbea67e in datapath d22eb03f-011a-4956-a118-fceff94fe5e6 unbound from our chassis#033[00m Feb 23 04:57:40 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:40.155 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d22eb03f-011a-4956-a118-fceff94fe5e6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:57:40 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:40.156 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf420ed-2c60-4bf8-8a9f-ad28b77e54fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:57:40 localhost nova_compute[280321]: 2026-02-23 09:57:40.161 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:40 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:40.406 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:57:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:57:40 localhost sshd[315020]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:57:41 localhost 
neutron_sriov_agent[256355]: 2026-02-23 09:57:41.069 2 INFO neutron.agent.securitygroups_rpc [None req-0a45a65f-615e-41b1-9ed5-950f0c0558fe 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['677a40c3-5537-439d-a2b8-f8dc0d2877b8']#033[00m Feb 23 04:57:41 localhost podman[315039]: 2026-02-23 09:57:41.123653146 +0000 UTC m=+0.059990925 container kill 40debabc18ee6ec5caba76e05bc3803fd39af1727d2e70e674d0a9817ebd2b47 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07a36288-c7ac-482a-a7d6-9eab612cc697, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 23 04:57:41 localhost dnsmasq[314569]: exiting on receipt of SIGTERM Feb 23 04:57:41 localhost systemd[1]: libpod-40debabc18ee6ec5caba76e05bc3803fd39af1727d2e70e674d0a9817ebd2b47.scope: Deactivated successfully. 
Feb 23 04:57:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v246: 177 pgs: 177 active+clean; 145 MiB data, 795 MiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 7.1 MiB/s wr, 94 op/s Feb 23 04:57:41 localhost podman[315052]: 2026-02-23 09:57:41.214107601 +0000 UTC m=+0.045565664 container died 40debabc18ee6ec5caba76e05bc3803fd39af1727d2e70e674d0a9817ebd2b47 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07a36288-c7ac-482a-a7d6-9eab612cc697, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:57:41 localhost systemd[1]: tmp-crun.nyhIXc.mount: Deactivated successfully. Feb 23 04:57:41 localhost podman[315052]: 2026-02-23 09:57:41.267512143 +0000 UTC m=+0.098970186 container cleanup 40debabc18ee6ec5caba76e05bc3803fd39af1727d2e70e674d0a9817ebd2b47 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07a36288-c7ac-482a-a7d6-9eab612cc697, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:57:41 localhost systemd[1]: libpod-conmon-40debabc18ee6ec5caba76e05bc3803fd39af1727d2e70e674d0a9817ebd2b47.scope: Deactivated successfully. 
Feb 23 04:57:41 localhost podman[315054]: 2026-02-23 09:57:41.289536736 +0000 UTC m=+0.146299432 container remove 40debabc18ee6ec5caba76e05bc3803fd39af1727d2e70e674d0a9817ebd2b47 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07a36288-c7ac-482a-a7d6-9eab612cc697, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 23 04:57:41 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:41.311 263679 INFO neutron.agent.dhcp.agent [None req-7bccbcc0-ad13-4f6b-aeb2-fec2368769df - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:57:41 localhost dnsmasq[314768]: exiting on receipt of SIGTERM Feb 23 04:57:41 localhost podman[315096]: 2026-02-23 09:57:41.445545546 +0000 UTC m=+0.055309252 container kill a7665b157862131c705869bc1fd27e04749b7d71960f6f384c198b7bb415ec0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d22eb03f-011a-4956-a118-fceff94fe5e6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:57:41 localhost systemd[1]: libpod-a7665b157862131c705869bc1fd27e04749b7d71960f6f384c198b7bb415ec0d.scope: Deactivated successfully. 
Feb 23 04:57:41 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:41.484 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:57:41 localhost podman[315111]: 2026-02-23 09:57:41.495992548 +0000 UTC m=+0.036003702 container died a7665b157862131c705869bc1fd27e04749b7d71960f6f384c198b7bb415ec0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d22eb03f-011a-4956-a118-fceff94fe5e6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 23 04:57:41 localhost podman[315111]: 2026-02-23 09:57:41.533499904 +0000 UTC m=+0.073511018 container remove a7665b157862131c705869bc1fd27e04749b7d71960f6f384c198b7bb415ec0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d22eb03f-011a-4956-a118-fceff94fe5e6, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:57:41 localhost systemd[1]: libpod-conmon-a7665b157862131c705869bc1fd27e04749b7d71960f6f384c198b7bb415ec0d.scope: Deactivated successfully. 
Feb 23 04:57:41 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:41.656 2 INFO neutron.agent.securitygroups_rpc [None req-e4560e49-35cf-47a1-b71e-2f4b224de7c3 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['677a40c3-5537-439d-a2b8-f8dc0d2877b8']#033[00m Feb 23 04:57:41 localhost nova_compute[280321]: 2026-02-23 09:57:41.873 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:41 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:41.876 263679 INFO neutron.agent.dhcp.agent [None req-36d5f98a-7866-4c7b-980d-748b50e5248c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:57:42 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e132 e132: 6 total, 6 up, 6 in Feb 23 04:57:42 localhost systemd[1]: var-lib-containers-storage-overlay-5fa94efad642d7af16ba9f62e926fa95e0060f3c44ad5b5dccb11d6a3f387ba5-merged.mount: Deactivated successfully. Feb 23 04:57:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7665b157862131c705869bc1fd27e04749b7d71960f6f384c198b7bb415ec0d-userdata-shm.mount: Deactivated successfully. Feb 23 04:57:42 localhost systemd[1]: run-netns-qdhcp\x2dd22eb03f\x2d011a\x2d4956\x2da118\x2dfceff94fe5e6.mount: Deactivated successfully. Feb 23 04:57:42 localhost systemd[1]: var-lib-containers-storage-overlay-75cde89064522921c7515a3a0e6cb2579c9b26c07b9f7af297067aff08dee4dc-merged.mount: Deactivated successfully. Feb 23 04:57:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-40debabc18ee6ec5caba76e05bc3803fd39af1727d2e70e674d0a9817ebd2b47-userdata-shm.mount: Deactivated successfully. Feb 23 04:57:42 localhost systemd[1]: run-netns-qdhcp\x2d07a36288\x2dc7ac\x2d482a\x2da7d6\x2d9eab612cc697.mount: Deactivated successfully. 
Feb 23 04:57:42 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:42.338 2 INFO neutron.agent.securitygroups_rpc [None req-602c32a6-283b-418b-b086-4a732e23eda9 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['677a40c3-5537-439d-a2b8-f8dc0d2877b8']#033[00m Feb 23 04:57:42 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:42.403 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:57:42 localhost podman[241086]: time="2026-02-23T09:57:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:57:42 localhost podman[241086]: @ - - [23/Feb/2026:09:57:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 04:57:42 localhost podman[241086]: @ - - [23/Feb/2026:09:57:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17824 "" "Go-http-client/1.1" Feb 23 04:57:42 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:42.777 2 INFO neutron.agent.securitygroups_rpc [None req-dd2fa8de-dc79-4abe-8c24-c96dc8537d15 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['677a40c3-5537-439d-a2b8-f8dc0d2877b8']#033[00m Feb 23 04:57:42 localhost nova_compute[280321]: 2026-02-23 09:57:42.987 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v248: 177 pgs: 177 active+clean; 145 MiB data, 795 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 6.2 MiB/s wr, 82 op/s Feb 23 04:57:43 localhost nova_compute[280321]: 2026-02-23 09:57:43.637 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:43 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:43.673 2 INFO neutron.agent.securitygroups_rpc [None req-5c824a03-c3ca-4f5e-acd4-18b4108691ee 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['677a40c3-5537-439d-a2b8-f8dc0d2877b8']#033[00m Feb 23 04:57:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:57:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:57:44 localhost podman[315139]: 2026-02-23 09:57:44.010566644 +0000 UTC m=+0.082877654 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0) Feb 23 04:57:44 localhost podman[315139]: 2026-02-23 09:57:44.021770717 +0000 UTC m=+0.094081717 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:57:44 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:57:44 localhost podman[315138]: 2026-02-23 09:57:44.114329386 +0000 UTC m=+0.189600507 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216) Feb 23 04:57:44 localhost podman[315138]: 2026-02-23 09:57:44.121773653 +0000 UTC m=+0.197044754 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216, config_id=ovn_metadata_agent, managed_by=edpm_ansible) Feb 23 04:57:44 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:57:44 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:44.203 2 INFO neutron.agent.securitygroups_rpc [None req-9496869f-6aca-4c6c-85b6-7d3756f27d34 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['677a40c3-5537-439d-a2b8-f8dc0d2877b8']#033[00m Feb 23 04:57:44 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e133 e133: 6 total, 6 up, 6 in Feb 23 04:57:44 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:44.985 2 INFO neutron.agent.securitygroups_rpc [None req-18778c88-b1b9-4b6b-be87-89332e5bf54d 51fa3c9fb33d4d329f38b19fc5b6099c ed50c6ac234342909e3b98d8ad22ae1a - - default default] Security group rule updated ['15676482-e837-4bed-9cab-0aada6b790b9']#033[00m Feb 23 04:57:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v250: 177 pgs: 177 active+clean; 145 MiB data, 795 MiB used, 41 GiB / 42 GiB avail; 3.9 KiB/s rd, 639 B/s wr, 6 op/s Feb 23 04:57:45 localhost ceph-mon[296755]: 
mon.np0005626465@1(peon).osd e134 e134: 6 total, 6 up, 6 in Feb 23 04:57:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:57:45 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:45.719 2 INFO neutron.agent.securitygroups_rpc [None req-d9005b4c-4b89-4219-884c-0c4473a3114a 0bcd1a517bf5477491d448b5d8ebf7eb ef475d924469485f883dd5a9d719a22d - - default default] Security group rule updated ['3ef9048d-1c37-421d-bb50-73975b08bdfd']#033[00m Feb 23 04:57:47 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:47.123 2 INFO neutron.agent.securitygroups_rpc [None req-16ac4a4c-9b36-487b-bcc5-bd14fe4ab634 b34a74d70b104d598259f3881ae86305 8635084f010e445d861ab634b753fa27 - - default default] Security group member updated ['80d4c661-e254-4356-81d1-bc4c19a37e6b']#033[00m Feb 23 04:57:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v252: 177 pgs: 177 active+clean; 145 MiB data, 796 MiB used, 41 GiB / 42 GiB avail; 853 B/s rd, 1023 B/s wr, 2 op/s Feb 23 04:57:47 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e135 e135: 6 total, 6 up, 6 in Feb 23 04:57:47 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:47.528 2 INFO neutron.agent.securitygroups_rpc [None req-28134d87-6144-4ae7-b9ec-7045d33170e4 4e19ac6dec8e40fbad0c3f681ec14665 6aadd525d3dd402cb701922115d00291 - - default default] Security group member updated ['a015e445-a8f1-4c73-9375-43b03b806b24']#033[00m Feb 23 04:57:47 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:47.822 263679 INFO neutron.agent.linux.ip_lib [None req-f7c95dae-3b5f-4949-8044-44ba22b2373e - - - - - -] Device tap105a6224-50 cannot be used as it has no MAC address#033[00m Feb 23 04:57:47 localhost nova_compute[280321]: 2026-02-23 09:57:47.881 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m 
Feb 23 04:57:47 localhost kernel: device tap105a6224-50 entered promiscuous mode Feb 23 04:57:47 localhost NetworkManager[5987]: [1771840667.8903] manager: (tap105a6224-50): new Generic device (/org/freedesktop/NetworkManager/Devices/44) Feb 23 04:57:47 localhost ovn_controller[155966]: 2026-02-23T09:57:47Z|00224|binding|INFO|Claiming lport 105a6224-50e5-41ab-b7fa-43bf3e4e9e34 for this chassis. Feb 23 04:57:47 localhost ovn_controller[155966]: 2026-02-23T09:57:47Z|00225|binding|INFO|105a6224-50e5-41ab-b7fa-43bf3e4e9e34: Claiming unknown Feb 23 04:57:47 localhost nova_compute[280321]: 2026-02-23 09:57:47.893 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:47 localhost systemd-udevd[315185]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:57:47 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:47.903 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-44f3daac-e320-4509-ada7-e5bee4e63e34', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44f3daac-e320-4509-ada7-e5bee4e63e34', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8635084f010e445d861ab634b753fa27', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], 
encap=[], mirror_rules=[], datapath=dc7e9201-dc5e-4a76-b2c0-58b555faf24a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=105a6224-50e5-41ab-b7fa-43bf3e4e9e34) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:57:47 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:47.906 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 105a6224-50e5-41ab-b7fa-43bf3e4e9e34 in datapath 44f3daac-e320-4509-ada7-e5bee4e63e34 bound to our chassis#033[00m Feb 23 04:57:47 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:47.908 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 44f3daac-e320-4509-ada7-e5bee4e63e34 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:57:47 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:47.912 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[bc77cdca-ee8b-42ed-b84a-91fb8248fc2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:57:47 localhost journal[229268]: ethtool ioctl error on tap105a6224-50: No such device Feb 23 04:57:47 localhost nova_compute[280321]: 2026-02-23 09:57:47.928 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:47 localhost journal[229268]: ethtool ioctl error on tap105a6224-50: No such device Feb 23 04:57:47 localhost ovn_controller[155966]: 2026-02-23T09:57:47Z|00226|binding|INFO|Setting lport 105a6224-50e5-41ab-b7fa-43bf3e4e9e34 ovn-installed in OVS Feb 23 04:57:47 localhost ovn_controller[155966]: 2026-02-23T09:57:47Z|00227|binding|INFO|Setting lport 105a6224-50e5-41ab-b7fa-43bf3e4e9e34 up in Southbound Feb 23 04:57:47 localhost nova_compute[280321]: 2026-02-23 09:57:47.933 
280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:47 localhost journal[229268]: ethtool ioctl error on tap105a6224-50: No such device Feb 23 04:57:47 localhost journal[229268]: ethtool ioctl error on tap105a6224-50: No such device Feb 23 04:57:47 localhost journal[229268]: ethtool ioctl error on tap105a6224-50: No such device Feb 23 04:57:47 localhost journal[229268]: ethtool ioctl error on tap105a6224-50: No such device Feb 23 04:57:47 localhost journal[229268]: ethtool ioctl error on tap105a6224-50: No such device Feb 23 04:57:47 localhost journal[229268]: ethtool ioctl error on tap105a6224-50: No such device Feb 23 04:57:47 localhost nova_compute[280321]: 2026-02-23 09:57:47.965 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:47 localhost nova_compute[280321]: 2026-02-23 09:57:47.990 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:48 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:48.300 2 INFO neutron.agent.securitygroups_rpc [None req-16297440-fc3c-47fa-be29-1571a923d41c b34a74d70b104d598259f3881ae86305 8635084f010e445d861ab634b753fa27 - - default default] Security group member updated ['80d4c661-e254-4356-81d1-bc4c19a37e6b']#033[00m Feb 23 04:57:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:48.313 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:57:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:48.314 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:57:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:48.314 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:57:48 localhost nova_compute[280321]: 2026-02-23 09:57:48.640 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:48 localhost podman[315256]: Feb 23 04:57:48 localhost podman[315256]: 2026-02-23 09:57:48.806800048 +0000 UTC m=+0.086404053 container create 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, tcib_managed=true) Feb 23 04:57:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:57:48 localhost systemd[1]: Started libpod-conmon-2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe.scope. Feb 23 04:57:48 localhost systemd[1]: Started libcrun container. 
Feb 23 04:57:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d9760efb9ca272e8a8e0b660de387af53648624fe1cbfc672570f210c6b7f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:57:48 localhost podman[315256]: 2026-02-23 09:57:48.763342959 +0000 UTC m=+0.042946974 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:57:48 localhost podman[315256]: 2026-02-23 09:57:48.869534225 +0000 UTC m=+0.149138230 container init 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216) Feb 23 04:57:48 localhost podman[315256]: 2026-02-23 09:57:48.878358985 +0000 UTC m=+0.157962990 container start 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:57:48 localhost dnsmasq[315284]: started, version 2.85 cachesize 150 Feb 23 04:57:48 localhost dnsmasq[315284]: DNS service limited to local subnets Feb 23 04:57:48 localhost dnsmasq[315284]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:57:48 localhost dnsmasq[315284]: warning: no upstream servers configured Feb 23 04:57:48 localhost dnsmasq-dhcp[315284]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 23 04:57:48 localhost dnsmasq[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/addn_hosts - 0 addresses Feb 23 04:57:48 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/host Feb 23 04:57:48 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/opts Feb 23 04:57:48 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:48.940 263679 INFO neutron.agent.dhcp.agent [None req-f7c95dae-3b5f-4949-8044-44ba22b2373e - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:57:46Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=58238991-ecea-4894-9a24-08521e0ad94d, ip_allocation=immediate, mac_address=fa:16:3e:41:e8:ac, name=tempest-AllowedAddressPairIpV6TestJSON-466670975, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:57:45Z, description=, dns_domain=, id=44f3daac-e320-4509-ada7-e5bee4e63e34, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1254240118, port_security_enabled=True, project_id=8635084f010e445d861ab634b753fa27, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1960, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1816, status=ACTIVE, subnets=['accb7b62-096e-49d7-8969-d6733ec4800c'], tags=[], 
tenant_id=8635084f010e445d861ab634b753fa27, updated_at=2026-02-23T09:57:46Z, vlan_transparent=None, network_id=44f3daac-e320-4509-ada7-e5bee4e63e34, port_security_enabled=True, project_id=8635084f010e445d861ab634b753fa27, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['80d4c661-e254-4356-81d1-bc4c19a37e6b'], standard_attr_id=1839, status=DOWN, tags=[], tenant_id=8635084f010e445d861ab634b753fa27, updated_at=2026-02-23T09:57:46Z on network 44f3daac-e320-4509-ada7-e5bee4e63e34#033[00m Feb 23 04:57:48 localhost podman[315270]: 2026-02-23 09:57:48.984230671 +0000 UTC m=+0.143727884 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 23 04:57:48 localhost podman[315270]: 2026-02-23 09:57:48.994815845 +0000 UTC m=+0.154313068 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , 
managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 23 04:57:49 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:57:49 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e136 e136: 6 total, 6 up, 6 in Feb 23 04:57:49 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:49.093 263679 INFO neutron.agent.dhcp.agent [None req-e75e5873-c500-4952-9ca7-90fab74e4101 - - - - - -] DHCP configuration for ports {'b9bb7068-75bd-4781-870d-2e88ecb30767'} is completed#033[00m Feb 23 04:57:49 localhost dnsmasq[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/addn_hosts - 1 addresses Feb 23 04:57:49 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/host Feb 23 04:57:49 localhost podman[315316]: 2026-02-23 09:57:49.123674634 +0000 UTC m=+0.075707465 container kill 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS) Feb 23 04:57:49 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/opts Feb 23 04:57:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v255: 177 pgs: 177 active+clean; 145 MiB data, 796 MiB used, 41 GiB / 42 GiB avail; 1.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s Feb 23 04:57:49 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:49.316 263679 INFO neutron.agent.linux.ip_lib [None req-8def3451-995d-4e7c-a20c-ada990ae9693 - - - - - -] Device tap960f1cc8-5c cannot be used as it has no MAC address#033[00m Feb 23 04:57:49 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:49.346 263679 INFO neutron.agent.dhcp.agent [None req-f7c95dae-3b5f-4949-8044-44ba22b2373e - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:57:47Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1115ad70-c2c3-4206-a119-9652fa9e692f, ip_allocation=immediate, mac_address=fa:16:3e:d2:d7:73, name=tempest-AllowedAddressPairIpV6TestJSON-98392976, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:57:45Z, description=, dns_domain=, id=44f3daac-e320-4509-ada7-e5bee4e63e34, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1254240118, port_security_enabled=True, project_id=8635084f010e445d861ab634b753fa27, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1960, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1816, status=ACTIVE, 
subnets=['accb7b62-096e-49d7-8969-d6733ec4800c'], tags=[], tenant_id=8635084f010e445d861ab634b753fa27, updated_at=2026-02-23T09:57:46Z, vlan_transparent=None, network_id=44f3daac-e320-4509-ada7-e5bee4e63e34, port_security_enabled=True, project_id=8635084f010e445d861ab634b753fa27, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['80d4c661-e254-4356-81d1-bc4c19a37e6b'], standard_attr_id=1853, status=DOWN, tags=[], tenant_id=8635084f010e445d861ab634b753fa27, updated_at=2026-02-23T09:57:47Z on network 44f3daac-e320-4509-ada7-e5bee4e63e34#033[00m Feb 23 04:57:49 localhost nova_compute[280321]: 2026-02-23 09:57:49.401 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:49 localhost kernel: device tap960f1cc8-5c entered promiscuous mode Feb 23 04:57:49 localhost NetworkManager[5987]: [1771840669.4107] manager: (tap960f1cc8-5c): new Generic device (/org/freedesktop/NetworkManager/Devices/45) Feb 23 04:57:49 localhost nova_compute[280321]: 2026-02-23 09:57:49.411 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:49 localhost ovn_controller[155966]: 2026-02-23T09:57:49Z|00228|binding|INFO|Claiming lport 960f1cc8-5c92-445c-88f5-38ba7e3b1294 for this chassis. 
Feb 23 04:57:49 localhost ovn_controller[155966]: 2026-02-23T09:57:49Z|00229|binding|INFO|960f1cc8-5c92-445c-88f5-38ba7e3b1294: Claiming unknown Feb 23 04:57:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:49.429 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-7d73d663-9a0c-40f3-ba95-35aaf6a12f72', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d73d663-9a0c-40f3-ba95-35aaf6a12f72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '49657a6049f341f9846a611e4d0c4e67', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3aeab7cf-2cec-4945-bd07-79b343ff5dfb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=960f1cc8-5c92-445c-88f5-38ba7e3b1294) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:57:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:49.431 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 960f1cc8-5c92-445c-88f5-38ba7e3b1294 in datapath 7d73d663-9a0c-40f3-ba95-35aaf6a12f72 bound to our chassis#033[00m Feb 23 04:57:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:49.432 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Port ced2eddc-9ca6-4a23-ab10-06b13e9cffc9 IP 
addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 04:57:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:49.433 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d73d663-9a0c-40f3-ba95-35aaf6a12f72, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:57:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:49.434 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[991ed9e6-f33f-4b09-a442-ebb67a22b0c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:57:49 localhost journal[229268]: ethtool ioctl error on tap960f1cc8-5c: No such device Feb 23 04:57:49 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:49.450 263679 INFO neutron.agent.dhcp.agent [None req-bb4f61c2-3d08-4e79-8ab2-ec968f083100 - - - - - -] DHCP configuration for ports {'58238991-ecea-4894-9a24-08521e0ad94d'} is completed#033[00m Feb 23 04:57:49 localhost ovn_controller[155966]: 2026-02-23T09:57:49Z|00230|binding|INFO|Setting lport 960f1cc8-5c92-445c-88f5-38ba7e3b1294 ovn-installed in OVS Feb 23 04:57:49 localhost ovn_controller[155966]: 2026-02-23T09:57:49Z|00231|binding|INFO|Setting lport 960f1cc8-5c92-445c-88f5-38ba7e3b1294 up in Southbound Feb 23 04:57:49 localhost nova_compute[280321]: 2026-02-23 09:57:49.452 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:49 localhost journal[229268]: ethtool ioctl error on tap960f1cc8-5c: No such device Feb 23 04:57:49 localhost journal[229268]: ethtool ioctl error on tap960f1cc8-5c: No such device Feb 23 04:57:49 localhost journal[229268]: ethtool ioctl error on tap960f1cc8-5c: No such device Feb 23 04:57:49 localhost 
journal[229268]: ethtool ioctl error on tap960f1cc8-5c: No such device Feb 23 04:57:49 localhost journal[229268]: ethtool ioctl error on tap960f1cc8-5c: No such device Feb 23 04:57:49 localhost journal[229268]: ethtool ioctl error on tap960f1cc8-5c: No such device Feb 23 04:57:49 localhost journal[229268]: ethtool ioctl error on tap960f1cc8-5c: No such device Feb 23 04:57:49 localhost nova_compute[280321]: 2026-02-23 09:57:49.503 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:49 localhost nova_compute[280321]: 2026-02-23 09:57:49.539 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:49 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:49.586 2 INFO neutron.agent.securitygroups_rpc [None req-35663660-8b26-425d-b466-39f75aa64f72 b34a74d70b104d598259f3881ae86305 8635084f010e445d861ab634b753fa27 - - default default] Security group member updated ['80d4c661-e254-4356-81d1-bc4c19a37e6b']#033[00m Feb 23 04:57:49 localhost dnsmasq[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/addn_hosts - 2 addresses Feb 23 04:57:49 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/host Feb 23 04:57:49 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/opts Feb 23 04:57:49 localhost podman[315389]: 2026-02-23 09:57:49.599638804 +0000 UTC m=+0.066400031 container kill 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 23 04:57:49 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:49.868 263679 INFO neutron.agent.dhcp.agent [None req-14091b5b-fd47-4c85-a798-fbef4329f5c1 - - - - - -] DHCP configuration for ports {'1115ad70-c2c3-4206-a119-9652fa9e692f'} is completed#033[00m Feb 23 04:57:50 localhost podman[315441]: 2026-02-23 09:57:50.022683115 +0000 UTC m=+0.065519304 container kill 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 23 04:57:50 localhost dnsmasq[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/addn_hosts - 1 addresses Feb 23 04:57:50 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/host Feb 23 04:57:50 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/opts Feb 23 04:57:50 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 04:57:50 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 2303 writes, 23K keys, 2303 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.07 MB/s
Cumulative WAL: 2303 writes, 2303 syncs, 1.00 writes per sync, written: 0.04 GB, 0.07 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2303 writes, 23K keys, 2303 commit groups, 1.0 writes per commit group, ingest: 41.93 MB, 0.07 MB/s
Interval WAL: 2303 writes, 2303 syncs, 1.00 writes per sync, written: 0.04 GB, 0.07 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    218.5      0.14              0.07         9    0.016       0      0       0.0       0.0
  L6      1/0   16.03 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4    237.9    214.6      0.63              0.36         8    0.079     99K   3993       0.0       0.0
 Sum      1/0   16.03 MB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   5.4    194.7    215.3      0.78              0.43        17    0.046     99K   3993       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   5.4    195.3    216.1      0.77              0.43        16    0.048     99K   3993       0.0       0.0

** Compaction Stats [default] **
Priority Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    237.9    214.6      0.63              0.36         8    0.079     99K   3993       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    222.6      0.14              0.07         8    0.017       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.030, interval 0.030
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.16 GB write, 0.28 MB/s write, 0.15 GB read, 0.25 MB/s read, 0.8 seconds
Interval compaction: 0.16 GB write, 0.28 MB/s write, 0.15 GB read, 0.25 MB/s read, 0.8 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x564eb1551350#2 capacity: 308.00 MB usage: 15.66 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000179 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(740,14.97 MB,4.86131%) FilterBlock(17,302.23 KB,0.0958282%) IndexBlock(17,396.73 KB,0.125791%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] ** Feb 23 04:57:50 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:50.373 2 INFO neutron.agent.securitygroups_rpc [None req-2cc4babb-57fc-4ffe-921b-a7d431fda5c5 b34a74d70b104d598259f3881ae86305 8635084f010e445d861ab634b753fa27 - - default default] Security group member updated ['80d4c661-e254-4356-81d1-bc4c19a37e6b']#033[00m Feb 23 04:57:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:50.413 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:57:50Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9a9900c5-2a06-4e79-9b74-1378b2613df3, ip_allocation=immediate, mac_address=fa:16:3e:87:87:d0, name=tempest-AllowedAddressPairIpV6TestJSON-1645035071,
network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:57:45Z, description=, dns_domain=, id=44f3daac-e320-4509-ada7-e5bee4e63e34, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1254240118, port_security_enabled=True, project_id=8635084f010e445d861ab634b753fa27, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1960, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1816, status=ACTIVE, subnets=['accb7b62-096e-49d7-8969-d6733ec4800c'], tags=[], tenant_id=8635084f010e445d861ab634b753fa27, updated_at=2026-02-23T09:57:46Z, vlan_transparent=None, network_id=44f3daac-e320-4509-ada7-e5bee4e63e34, port_security_enabled=True, project_id=8635084f010e445d861ab634b753fa27, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['80d4c661-e254-4356-81d1-bc4c19a37e6b'], standard_attr_id=1874, status=DOWN, tags=[], tenant_id=8635084f010e445d861ab634b753fa27, updated_at=2026-02-23T09:57:50Z on network 44f3daac-e320-4509-ada7-e5bee4e63e34#033[00m Feb 23 04:57:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:57:50 localhost podman[315509]: Feb 23 04:57:50 localhost podman[315509]: 2026-02-23 09:57:50.577796994 +0000 UTC m=+0.069809505 container create 196f95bca4c912448f61c297b024e60aa6dbee6aa6ca2298083478ea21987abc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d73d663-9a0c-40f3-ba95-35aaf6a12f72, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:57:50 localhost systemd[1]: Started libpod-conmon-196f95bca4c912448f61c297b024e60aa6dbee6aa6ca2298083478ea21987abc.scope. Feb 23 04:57:50 localhost systemd[1]: Started libcrun container. Feb 23 04:57:50 localhost podman[315509]: 2026-02-23 09:57:50.543486196 +0000 UTC m=+0.035498717 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:57:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ade5b29bf3aecb7305aa3b8751dbeb62f3ffb237572ee8927ce1dc2c83b526/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:57:50 localhost podman[315509]: 2026-02-23 09:57:50.657584553 +0000 UTC m=+0.149597074 container init 196f95bca4c912448f61c297b024e60aa6dbee6aa6ca2298083478ea21987abc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d73d663-9a0c-40f3-ba95-35aaf6a12f72, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0) Feb 23 04:57:50 localhost podman[315522]: 2026-02-23 09:57:50.659296435 +0000 UTC m=+0.120218646 container kill 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:57:50 localhost dnsmasq[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/addn_hosts - 2 addresses Feb 23 04:57:50 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/host Feb 23 04:57:50 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/opts Feb 23 04:57:50 localhost podman[315509]: 2026-02-23 09:57:50.666163375 +0000 UTC m=+0.158175886 container start 196f95bca4c912448f61c297b024e60aa6dbee6aa6ca2298083478ea21987abc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d73d663-9a0c-40f3-ba95-35aaf6a12f72, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:57:50 localhost dnsmasq[315542]: started, version 2.85 cachesize 150 Feb 23 04:57:50 localhost dnsmasq[315542]: DNS service limited to local subnets Feb 23 04:57:50 localhost dnsmasq[315542]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:57:50 localhost dnsmasq[315542]: warning: no upstream servers configured Feb 23 04:57:50 localhost dnsmasq-dhcp[315542]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:57:50 localhost dnsmasq[315542]: read /var/lib/neutron/dhcp/7d73d663-9a0c-40f3-ba95-35aaf6a12f72/addn_hosts - 0 addresses Feb 23 04:57:50 localhost dnsmasq-dhcp[315542]: read /var/lib/neutron/dhcp/7d73d663-9a0c-40f3-ba95-35aaf6a12f72/host Feb 23 04:57:50 localhost dnsmasq-dhcp[315542]: read 
/var/lib/neutron/dhcp/7d73d663-9a0c-40f3-ba95-35aaf6a12f72/opts Feb 23 04:57:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:50.847 263679 INFO neutron.agent.dhcp.agent [None req-4a9f677b-1464-41bc-ac14-ccdc1b4f304d - - - - - -] DHCP configuration for ports {'d33aab39-7518-499a-b4cd-d8614a6c5068'} is completed#033[00m Feb 23 04:57:51 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:51.015 263679 INFO neutron.agent.dhcp.agent [None req-ce835412-c7f7-4cf7-88ca-9d6ba3e949cc - - - - - -] DHCP configuration for ports {'9a9900c5-2a06-4e79-9b74-1378b2613df3'} is completed#033[00m Feb 23 04:57:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v256: 177 pgs: 177 active+clean; 145 MiB data, 796 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 3.0 KiB/s wr, 56 op/s Feb 23 04:57:51 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:51.260 2 INFO neutron.agent.securitygroups_rpc [None req-a96e8b20-0ccf-41bb-a529-34f8cbad2842 b34a74d70b104d598259f3881ae86305 8635084f010e445d861ab634b753fa27 - - default default] Security group member updated ['80d4c661-e254-4356-81d1-bc4c19a37e6b']#033[00m Feb 23 04:57:51 localhost systemd[1]: tmp-crun.uCy5PZ.mount: Deactivated successfully. 
Feb 23 04:57:51 localhost dnsmasq[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/addn_hosts - 1 addresses Feb 23 04:57:51 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/host Feb 23 04:57:51 localhost podman[315569]: 2026-02-23 09:57:51.502242843 +0000 UTC m=+0.075357004 container kill 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS) Feb 23 04:57:51 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/opts Feb 23 04:57:52 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e137 e137: 6 total, 6 up, 6 in Feb 23 04:57:52 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:52.713 2 INFO neutron.agent.securitygroups_rpc [None req-6e03d4f3-0be2-400e-8da5-d3c25ea65d96 b34a74d70b104d598259f3881ae86305 8635084f010e445d861ab634b753fa27 - - default default] Security group member updated ['80d4c661-e254-4356-81d1-bc4c19a37e6b']#033[00m Feb 23 04:57:52 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:52.825 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:57:51Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d89f67bd-6e21-4187-a45c-2c82514ed8ff, ip_allocation=immediate, 
mac_address=fa:16:3e:e4:a8:53, name=tempest-AllowedAddressPairIpV6TestJSON-210655235, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:57:45Z, description=, dns_domain=, id=44f3daac-e320-4509-ada7-e5bee4e63e34, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1254240118, port_security_enabled=True, project_id=8635084f010e445d861ab634b753fa27, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1960, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1816, status=ACTIVE, subnets=['accb7b62-096e-49d7-8969-d6733ec4800c'], tags=[], tenant_id=8635084f010e445d861ab634b753fa27, updated_at=2026-02-23T09:57:46Z, vlan_transparent=None, network_id=44f3daac-e320-4509-ada7-e5bee4e63e34, port_security_enabled=True, project_id=8635084f010e445d861ab634b753fa27, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['80d4c661-e254-4356-81d1-bc4c19a37e6b'], standard_attr_id=1881, status=DOWN, tags=[], tenant_id=8635084f010e445d861ab634b753fa27, updated_at=2026-02-23T09:57:52Z on network 44f3daac-e320-4509-ada7-e5bee4e63e34#033[00m Feb 23 04:57:52 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:52.865 2 INFO neutron.agent.securitygroups_rpc [None req-36c16d45-719d-4f5a-b525-80c8ec71acee 4e19ac6dec8e40fbad0c3f681ec14665 6aadd525d3dd402cb701922115d00291 - - default default] Security group member updated ['a015e445-a8f1-4c73-9375-43b03b806b24']#033[00m Feb 23 04:57:53 localhost systemd[1]: tmp-crun.COtUdn.mount: Deactivated successfully. 
Feb 23 04:57:53 localhost nova_compute[280321]: 2026-02-23 09:57:53.038 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:53 localhost dnsmasq[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/addn_hosts - 2 addresses Feb 23 04:57:53 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/host Feb 23 04:57:53 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/opts Feb 23 04:57:53 localhost podman[315607]: 2026-02-23 09:57:53.038294537 +0000 UTC m=+0.068553476 container kill 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:57:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v258: 177 pgs: 177 active+clean; 145 MiB data, 796 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 2.2 KiB/s wr, 57 op/s Feb 23 04:57:53 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:53.215 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:57:52Z, description=, device_id=c9f3a506-9750-4536-9974-30a3c1ad4369, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], 
id=fe3e8375-9048-4c6d-b447-187580d40a80, ip_allocation=immediate, mac_address=fa:16:3e:03:e9:32, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:57:46Z, description=, dns_domain=, id=7d73d663-9a0c-40f3-ba95-35aaf6a12f72, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroupTestJSON-1660494759-network, port_security_enabled=True, project_id=49657a6049f341f9846a611e4d0c4e67, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=6418, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1833, status=ACTIVE, subnets=['8e52c90c-b56d-4031-b17d-fc3bd5b443df'], tags=[], tenant_id=49657a6049f341f9846a611e4d0c4e67, updated_at=2026-02-23T09:57:47Z, vlan_transparent=None, network_id=7d73d663-9a0c-40f3-ba95-35aaf6a12f72, port_security_enabled=False, project_id=49657a6049f341f9846a611e4d0c4e67, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1890, status=DOWN, tags=[], tenant_id=49657a6049f341f9846a611e4d0c4e67, updated_at=2026-02-23T09:57:52Z on network 7d73d663-9a0c-40f3-ba95-35aaf6a12f72#033[00m Feb 23 04:57:53 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:53.387 263679 INFO neutron.agent.dhcp.agent [None req-4baaa61c-b40b-440e-9684-37879ebedada - - - - - -] DHCP configuration for ports {'d89f67bd-6e21-4187-a45c-2c82514ed8ff'} is completed#033[00m Feb 23 04:57:53 localhost podman[315644]: 2026-02-23 09:57:53.464538227 +0000 UTC m=+0.054510347 container kill 196f95bca4c912448f61c297b024e60aa6dbee6aa6ca2298083478ea21987abc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d73d663-9a0c-40f3-ba95-35aaf6a12f72, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216) Feb 23 04:57:53 localhost dnsmasq[315542]: read /var/lib/neutron/dhcp/7d73d663-9a0c-40f3-ba95-35aaf6a12f72/addn_hosts - 1 addresses Feb 23 04:57:53 localhost dnsmasq-dhcp[315542]: read /var/lib/neutron/dhcp/7d73d663-9a0c-40f3-ba95-35aaf6a12f72/host Feb 23 04:57:53 localhost dnsmasq-dhcp[315542]: read /var/lib/neutron/dhcp/7d73d663-9a0c-40f3-ba95-35aaf6a12f72/opts Feb 23 04:57:53 localhost nova_compute[280321]: 2026-02-23 09:57:53.643 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:54 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:54.022 263679 INFO neutron.agent.dhcp.agent [None req-89156c96-4ac5-4115-a54c-88d9f0b66152 - - - - - -] DHCP configuration for ports {'fe3e8375-9048-4c6d-b447-187580d40a80'} is completed#033[00m Feb 23 04:57:54 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:54.356 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:57:52Z, description=, device_id=c9f3a506-9750-4536-9974-30a3c1ad4369, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fe3e8375-9048-4c6d-b447-187580d40a80, ip_allocation=immediate, mac_address=fa:16:3e:03:e9:32, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:57:46Z, description=, dns_domain=, id=7d73d663-9a0c-40f3-ba95-35aaf6a12f72, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroupTestJSON-1660494759-network, 
port_security_enabled=True, project_id=49657a6049f341f9846a611e4d0c4e67, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=6418, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1833, status=ACTIVE, subnets=['8e52c90c-b56d-4031-b17d-fc3bd5b443df'], tags=[], tenant_id=49657a6049f341f9846a611e4d0c4e67, updated_at=2026-02-23T09:57:47Z, vlan_transparent=None, network_id=7d73d663-9a0c-40f3-ba95-35aaf6a12f72, port_security_enabled=False, project_id=49657a6049f341f9846a611e4d0c4e67, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1890, status=DOWN, tags=[], tenant_id=49657a6049f341f9846a611e4d0c4e67, updated_at=2026-02-23T09:57:52Z on network 7d73d663-9a0c-40f3-ba95-35aaf6a12f72#033[00m Feb 23 04:57:54 localhost dnsmasq[315542]: read /var/lib/neutron/dhcp/7d73d663-9a0c-40f3-ba95-35aaf6a12f72/addn_hosts - 1 addresses Feb 23 04:57:54 localhost dnsmasq-dhcp[315542]: read /var/lib/neutron/dhcp/7d73d663-9a0c-40f3-ba95-35aaf6a12f72/host Feb 23 04:57:54 localhost podman[315684]: 2026-02-23 09:57:54.587863725 +0000 UTC m=+0.060152129 container kill 196f95bca4c912448f61c297b024e60aa6dbee6aa6ca2298083478ea21987abc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d73d663-9a0c-40f3-ba95-35aaf6a12f72, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true) Feb 23 04:57:54 localhost dnsmasq-dhcp[315542]: read /var/lib/neutron/dhcp/7d73d663-9a0c-40f3-ba95-35aaf6a12f72/opts Feb 23 04:57:54 localhost sshd[315698]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:57:54 localhost 
neutron_dhcp_agent[263675]: 2026-02-23 09:57:54.883 263679 INFO neutron.agent.dhcp.agent [None req-f48ce9d8-c787-471c-9b4e-518dec8d4662 - - - - - -] DHCP configuration for ports {'fe3e8375-9048-4c6d-b447-187580d40a80'} is completed#033[00m Feb 23 04:57:55 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:55.155 2 INFO neutron.agent.securitygroups_rpc [None req-518b7324-91f0-437a-87bd-ce34c1a3be1e e22ee96829d64023b04af5ccfdd0ab53 a3622447b13c4164ad418e851634e3b3 - - default default] Security group member updated ['b06f7d0b-a9fc-4c26-994a-bc68e12c2cf6']#033[00m Feb 23 04:57:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v259: 177 pgs: 177 active+clean; 145 MiB data, 796 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 44 op/s Feb 23 04:57:55 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:55.220 2 INFO neutron.agent.securitygroups_rpc [None req-1ba87428-a084-4aaf-9c26-da4479389d22 b34a74d70b104d598259f3881ae86305 8635084f010e445d861ab634b753fa27 - - default default] Security group member updated ['80d4c661-e254-4356-81d1-bc4c19a37e6b']#033[00m Feb 23 04:57:55 localhost systemd[1]: virtsecretd.service: Deactivated successfully. 
Feb 23 04:57:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:57:55 localhost ovn_controller[155966]: 2026-02-23T09:57:55Z|00232|ovn_bfd|INFO|Enabled BFD on interface ovn-5b0126-0 Feb 23 04:57:55 localhost ovn_controller[155966]: 2026-02-23T09:57:55Z|00233|ovn_bfd|INFO|Enabled BFD on interface ovn-585d62-0 Feb 23 04:57:55 localhost ovn_controller[155966]: 2026-02-23T09:57:55Z|00234|ovn_bfd|INFO|Enabled BFD on interface ovn-b9c72d-0 Feb 23 04:57:55 localhost nova_compute[280321]: 2026-02-23 09:57:55.445 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:55 localhost nova_compute[280321]: 2026-02-23 09:57:55.508 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:55 localhost nova_compute[280321]: 2026-02-23 09:57:55.527 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:55 localhost nova_compute[280321]: 2026-02-23 09:57:55.559 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:55 localhost systemd[1]: tmp-crun.IQroZf.mount: Deactivated successfully. 
Feb 23 04:57:55 localhost dnsmasq[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/addn_hosts - 1 addresses Feb 23 04:57:55 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/host Feb 23 04:57:55 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/opts Feb 23 04:57:55 localhost podman[315727]: 2026-02-23 09:57:55.572654309 +0000 UTC m=+0.146456277 container kill 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0) Feb 23 04:57:56 localhost nova_compute[280321]: 2026-02-23 09:57:56.380 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:56 localhost nova_compute[280321]: 2026-02-23 09:57:56.527 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:56 localhost nova_compute[280321]: 2026-02-23 09:57:56.584 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:57 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:57.023 2 INFO neutron.agent.securitygroups_rpc [None req-758250a6-9620-43e4-8e85-56ce0e87f261 b34a74d70b104d598259f3881ae86305 8635084f010e445d861ab634b753fa27 - - default default] Security group member updated ['80d4c661-e254-4356-81d1-bc4c19a37e6b']#033[00m Feb 23 
04:57:57 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:57.140 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:57:56Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ddc7145e-384d-486b-a72c-bf0703372f2c, ip_allocation=immediate, mac_address=fa:16:3e:53:73:bc, name=tempest-AllowedAddressPairIpV6TestJSON-1238894461, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:57:45Z, description=, dns_domain=, id=44f3daac-e320-4509-ada7-e5bee4e63e34, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1254240118, port_security_enabled=True, project_id=8635084f010e445d861ab634b753fa27, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1960, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1816, status=ACTIVE, subnets=['accb7b62-096e-49d7-8969-d6733ec4800c'], tags=[], tenant_id=8635084f010e445d861ab634b753fa27, updated_at=2026-02-23T09:57:46Z, vlan_transparent=None, network_id=44f3daac-e320-4509-ada7-e5bee4e63e34, port_security_enabled=True, project_id=8635084f010e445d861ab634b753fa27, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['80d4c661-e254-4356-81d1-bc4c19a37e6b'], standard_attr_id=1907, status=DOWN, tags=[], tenant_id=8635084f010e445d861ab634b753fa27, updated_at=2026-02-23T09:57:56Z on network 44f3daac-e320-4509-ada7-e5bee4e63e34#033[00m Feb 23 04:57:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v260: 177 pgs: 177 active+clean; 145 MiB data, 800 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s 
rd, 1.7 KiB/s wr, 43 op/s Feb 23 04:57:57 localhost dnsmasq[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/addn_hosts - 2 addresses Feb 23 04:57:57 localhost podman[315768]: 2026-02-23 09:57:57.302923451 +0000 UTC m=+0.042104869 container kill 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0) Feb 23 04:57:57 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/host Feb 23 04:57:57 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/opts Feb 23 04:57:57 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:57.545 263679 INFO neutron.agent.dhcp.agent [None req-22d3e59a-6271-435b-b606-b270a9a05a51 - - - - - -] DHCP configuration for ports {'ddc7145e-384d-486b-a72c-bf0703372f2c'} is completed#033[00m Feb 23 04:57:58 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e138 e138: 6 total, 6 up, 6 in Feb 23 04:57:58 localhost nova_compute[280321]: 2026-02-23 09:57:58.068 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:58 localhost ovn_controller[155966]: 2026-02-23T09:57:58Z|00235|ovn_bfd|INFO|Disabled BFD on interface ovn-5b0126-0 Feb 23 04:57:58 localhost ovn_controller[155966]: 2026-02-23T09:57:58Z|00236|ovn_bfd|INFO|Disabled BFD on interface ovn-585d62-0 Feb 23 04:57:58 localhost ovn_controller[155966]: 2026-02-23T09:57:58Z|00237|ovn_bfd|INFO|Disabled BFD on 
interface ovn-b9c72d-0 Feb 23 04:57:58 localhost nova_compute[280321]: 2026-02-23 09:57:58.114 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:58 localhost nova_compute[280321]: 2026-02-23 09:57:58.123 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:58 localhost nova_compute[280321]: 2026-02-23 09:57:58.139 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:58 localhost dnsmasq[315542]: read /var/lib/neutron/dhcp/7d73d663-9a0c-40f3-ba95-35aaf6a12f72/addn_hosts - 0 addresses Feb 23 04:57:58 localhost dnsmasq-dhcp[315542]: read /var/lib/neutron/dhcp/7d73d663-9a0c-40f3-ba95-35aaf6a12f72/host Feb 23 04:57:58 localhost dnsmasq-dhcp[315542]: read /var/lib/neutron/dhcp/7d73d663-9a0c-40f3-ba95-35aaf6a12f72/opts Feb 23 04:57:58 localhost podman[315807]: 2026-02-23 09:57:58.222272934 +0000 UTC m=+0.061525011 container kill 196f95bca4c912448f61c297b024e60aa6dbee6aa6ca2298083478ea21987abc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d73d663-9a0c-40f3-ba95-35aaf6a12f72, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0) Feb 23 04:57:58 localhost ovn_controller[155966]: 2026-02-23T09:57:58Z|00238|binding|INFO|Releasing lport 960f1cc8-5c92-445c-88f5-38ba7e3b1294 from this chassis (sb_readonly=0) Feb 23 04:57:58 localhost ovn_controller[155966]: 2026-02-23T09:57:58Z|00239|binding|INFO|Setting lport 
960f1cc8-5c92-445c-88f5-38ba7e3b1294 down in Southbound Feb 23 04:57:58 localhost nova_compute[280321]: 2026-02-23 09:57:58.440 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:58 localhost kernel: device tap960f1cc8-5c left promiscuous mode Feb 23 04:57:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:58.448 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-7d73d663-9a0c-40f3-ba95-35aaf6a12f72', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d73d663-9a0c-40f3-ba95-35aaf6a12f72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '49657a6049f341f9846a611e4d0c4e67', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3aeab7cf-2cec-4945-bd07-79b343ff5dfb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=960f1cc8-5c92-445c-88f5-38ba7e3b1294) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:57:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:58.451 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 
960f1cc8-5c92-445c-88f5-38ba7e3b1294 in datapath 7d73d663-9a0c-40f3-ba95-35aaf6a12f72 unbound from our chassis#033[00m Feb 23 04:57:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:58.453 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d73d663-9a0c-40f3-ba95-35aaf6a12f72, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:57:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:57:58.455 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[d928e3f0-bcb9-41ac-b4d7-e72975f74ae0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:57:58 localhost nova_compute[280321]: 2026-02-23 09:57:58.474 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:58 localhost nova_compute[280321]: 2026-02-23 09:57:58.646 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:57:58 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:58.663 2 INFO neutron.agent.securitygroups_rpc [None req-90d7272c-2a16-4c66-834f-ae9e72b06d5d b34a74d70b104d598259f3881ae86305 8635084f010e445d861ab634b753fa27 - - default default] Security group member updated ['80d4c661-e254-4356-81d1-bc4c19a37e6b']#033[00m Feb 23 04:57:58 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:58.723 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:57:57Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8b3ce308-6036-4011-9f81-9e888291b3a2, 
ip_allocation=immediate, mac_address=fa:16:3e:54:49:a3, name=tempest-AllowedAddressPairIpV6TestJSON-799962976, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:57:45Z, description=, dns_domain=, id=44f3daac-e320-4509-ada7-e5bee4e63e34, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1254240118, port_security_enabled=True, project_id=8635084f010e445d861ab634b753fa27, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1960, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1816, status=ACTIVE, subnets=['accb7b62-096e-49d7-8969-d6733ec4800c'], tags=[], tenant_id=8635084f010e445d861ab634b753fa27, updated_at=2026-02-23T09:57:46Z, vlan_transparent=None, network_id=44f3daac-e320-4509-ada7-e5bee4e63e34, port_security_enabled=True, project_id=8635084f010e445d861ab634b753fa27, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['80d4c661-e254-4356-81d1-bc4c19a37e6b'], standard_attr_id=1915, status=DOWN, tags=[], tenant_id=8635084f010e445d861ab634b753fa27, updated_at=2026-02-23T09:57:57Z on network 44f3daac-e320-4509-ada7-e5bee4e63e34#033[00m Feb 23 04:57:58 localhost neutron_sriov_agent[256355]: 2026-02-23 09:57:58.724 2 INFO neutron.agent.securitygroups_rpc [None req-92d71163-849e-49a8-9e78-04255fc35661 e22ee96829d64023b04af5ccfdd0ab53 a3622447b13c4164ad418e851634e3b3 - - default default] Security group member updated ['b06f7d0b-a9fc-4c26-994a-bc68e12c2cf6']#033[00m Feb 23 04:57:58 localhost dnsmasq[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/addn_hosts - 3 addresses Feb 23 04:57:58 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/host Feb 23 04:57:58 localhost dnsmasq-dhcp[315284]: read 
/var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/opts Feb 23 04:57:58 localhost podman[315846]: 2026-02-23 09:57:58.924747017 +0000 UTC m=+0.061659866 container kill 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 04:57:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v262: 177 pgs: 177 active+clean; 145 MiB data, 800 MiB used, 41 GiB / 42 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 3 op/s Feb 23 04:57:59 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:57:59.208 263679 INFO neutron.agent.dhcp.agent [None req-08b039ed-193c-44ed-9524-80398b0b9b5a - - - - - -] DHCP configuration for ports {'8b3ce308-6036-4011-9f81-9e888291b3a2'} is completed#033[00m Feb 23 04:58:00 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:00.200 263679 INFO neutron.agent.linux.ip_lib [None req-986ce3c6-0edd-4ccb-9753-fd3f469f4a67 - - - - - -] Device tap14306afc-e1 cannot be used as it has no MAC address#033[00m Feb 23 04:58:00 localhost nova_compute[280321]: 2026-02-23 09:58:00.225 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:00 localhost kernel: device tap14306afc-e1 entered promiscuous mode Feb 23 04:58:00 localhost NetworkManager[5987]: [1771840680.2342] manager: (tap14306afc-e1): new Generic device (/org/freedesktop/NetworkManager/Devices/46) Feb 23 04:58:00 localhost ovn_controller[155966]: 2026-02-23T09:58:00Z|00240|binding|INFO|Claiming lport 
14306afc-e15b-42d0-b398-ee5dd9ebaeb0 for this chassis. Feb 23 04:58:00 localhost ovn_controller[155966]: 2026-02-23T09:58:00Z|00241|binding|INFO|14306afc-e15b-42d0-b398-ee5dd9ebaeb0: Claiming unknown Feb 23 04:58:00 localhost nova_compute[280321]: 2026-02-23 09:58:00.236 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:00 localhost systemd-udevd[315877]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:58:00 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:00.252 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-e09301e8-3acf-46e4-a651-d747e131d2ae', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e09301e8-3acf-46e4-a651-d747e131d2ae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aadd525d3dd402cb701922115d00291', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5171e39-61d3-464d-9996-dd01dabd2303, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=14306afc-e15b-42d0-b398-ee5dd9ebaeb0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:58:00 localhost 
ovn_metadata_agent[161837]: 2026-02-23 09:58:00.257 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 14306afc-e15b-42d0-b398-ee5dd9ebaeb0 in datapath e09301e8-3acf-46e4-a651-d747e131d2ae bound to our chassis#033[00m Feb 23 04:58:00 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:00.260 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e09301e8-3acf-46e4-a651-d747e131d2ae or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:58:00 localhost journal[229268]: ethtool ioctl error on tap14306afc-e1: No such device Feb 23 04:58:00 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:00.261 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[e15c3ec2-c09f-4f49-a146-5bc25b4e859b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:58:00 localhost journal[229268]: ethtool ioctl error on tap14306afc-e1: No such device Feb 23 04:58:00 localhost ovn_controller[155966]: 2026-02-23T09:58:00Z|00242|binding|INFO|Setting lport 14306afc-e15b-42d0-b398-ee5dd9ebaeb0 ovn-installed in OVS Feb 23 04:58:00 localhost ovn_controller[155966]: 2026-02-23T09:58:00Z|00243|binding|INFO|Setting lport 14306afc-e15b-42d0-b398-ee5dd9ebaeb0 up in Southbound Feb 23 04:58:00 localhost nova_compute[280321]: 2026-02-23 09:58:00.270 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:00 localhost journal[229268]: ethtool ioctl error on tap14306afc-e1: No such device Feb 23 04:58:00 localhost journal[229268]: ethtool ioctl error on tap14306afc-e1: No such device Feb 23 04:58:00 localhost journal[229268]: ethtool ioctl error on tap14306afc-e1: No such device Feb 23 04:58:00 localhost journal[229268]: ethtool ioctl error on tap14306afc-e1: No such device Feb 23 04:58:00 
localhost journal[229268]: ethtool ioctl error on tap14306afc-e1: No such device Feb 23 04:58:00 localhost journal[229268]: ethtool ioctl error on tap14306afc-e1: No such device Feb 23 04:58:00 localhost nova_compute[280321]: 2026-02-23 09:58:00.316 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:00 localhost nova_compute[280321]: 2026-02-23 09:58:00.344 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:58:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v263: 177 pgs: 177 active+clean; 145 MiB data, 800 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 1.5 KiB/s wr, 35 op/s Feb 23 04:58:01 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:01.244 2 INFO neutron.agent.securitygroups_rpc [None req-28ec64b1-e1b5-4dd6-940d-987b4dc3aa1e b34a74d70b104d598259f3881ae86305 8635084f010e445d861ab634b753fa27 - - default default] Security group member updated ['80d4c661-e254-4356-81d1-bc4c19a37e6b']#033[00m Feb 23 04:58:01 localhost podman[315948]: Feb 23 04:58:01 localhost podman[315948]: 2026-02-23 09:58:01.325200976 +0000 UTC m=+0.137369040 container create e9faed713120137b247d929ee3ad3da2394b38cf047df2ada2ec7e2b1a1fe346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e09301e8-3acf-46e4-a651-d747e131d2ae, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, 
io.buildah.version=1.43.0) Feb 23 04:58:01 localhost systemd[1]: Started libpod-conmon-e9faed713120137b247d929ee3ad3da2394b38cf047df2ada2ec7e2b1a1fe346.scope. Feb 23 04:58:01 localhost podman[315948]: 2026-02-23 09:58:01.28019029 +0000 UTC m=+0.092358404 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:58:01 localhost systemd[1]: tmp-crun.Xg8R6A.mount: Deactivated successfully. Feb 23 04:58:01 localhost systemd[1]: Started libcrun container. Feb 23 04:58:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2edc4cfad855da037dbfc9f6a35a0a5ad46961bff067b5efd637be9f4e8e704c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:58:01 localhost podman[315948]: 2026-02-23 09:58:01.446920127 +0000 UTC m=+0.259088181 container init e9faed713120137b247d929ee3ad3da2394b38cf047df2ada2ec7e2b1a1fe346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e09301e8-3acf-46e4-a651-d747e131d2ae, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Feb 23 04:58:01 localhost podman[315948]: 2026-02-23 09:58:01.456617543 +0000 UTC m=+0.268785617 container start e9faed713120137b247d929ee3ad3da2394b38cf047df2ada2ec7e2b1a1fe346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e09301e8-3acf-46e4-a651-d747e131d2ae, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS) Feb 23 04:58:01 localhost dnsmasq[315995]: started, version 2.85 cachesize 150 Feb 23 04:58:01 localhost dnsmasq[315995]: DNS service limited to local subnets Feb 23 04:58:01 localhost dnsmasq[315995]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:58:01 localhost dnsmasq[315995]: warning: no upstream servers configured Feb 23 04:58:01 localhost dnsmasq-dhcp[315995]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:58:01 localhost dnsmasq[315995]: read /var/lib/neutron/dhcp/e09301e8-3acf-46e4-a651-d747e131d2ae/addn_hosts - 0 addresses Feb 23 04:58:01 localhost dnsmasq-dhcp[315995]: read /var/lib/neutron/dhcp/e09301e8-3acf-46e4-a651-d747e131d2ae/host Feb 23 04:58:01 localhost dnsmasq-dhcp[315995]: read /var/lib/neutron/dhcp/e09301e8-3acf-46e4-a651-d747e131d2ae/opts Feb 23 04:58:01 localhost dnsmasq[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/addn_hosts - 2 addresses Feb 23 04:58:01 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/host Feb 23 04:58:01 localhost podman[315982]: 2026-02-23 09:58:01.514173453 +0000 UTC m=+0.086490355 container kill 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 23 04:58:01 localhost dnsmasq-dhcp[315284]: read 
/var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/opts Feb 23 04:58:01 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:01.598 263679 INFO neutron.agent.dhcp.agent [None req-2c9fb997-28eb-400f-a74b-4c50d2b6baae - - - - - -] DHCP configuration for ports {'0e314a45-0d3b-441b-8a9f-fddf2f3b57a1'} is completed#033[00m Feb 23 04:58:01 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:01.948 2 INFO neutron.agent.securitygroups_rpc [None req-1398d805-3ae6-40de-8906-b5cfcdc73dab b34a74d70b104d598259f3881ae86305 8635084f010e445d861ab634b753fa27 - - default default] Security group member updated ['80d4c661-e254-4356-81d1-bc4c19a37e6b']#033[00m Feb 23 04:58:01 localhost openstack_network_exporter[243519]: ERROR 09:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:58:01 localhost openstack_network_exporter[243519]: Feb 23 04:58:01 localhost openstack_network_exporter[243519]: ERROR 09:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:58:01 localhost openstack_network_exporter[243519]: Feb 23 04:58:02 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e139 e139: 6 total, 6 up, 6 in Feb 23 04:58:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:58:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:58:02 localhost dnsmasq[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/addn_hosts - 1 addresses Feb 23 04:58:02 localhost podman[316025]: 2026-02-23 09:58:02.240190436 +0000 UTC m=+0.072515068 container kill 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 23 04:58:02 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/host Feb 23 04:58:02 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/opts Feb 23 04:58:02 localhost podman[316032]: 2026-02-23 09:58:02.300400826 +0000 UTC m=+0.117894605 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.7, name=ubi9/ubi-minimal, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, vcs-type=git, build-date=2026-02-05T04:57:10Z, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base 
Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z) Feb 23 04:58:02 localhost podman[316026]: 2026-02-23 09:58:02.325294037 +0000 UTC m=+0.146569961 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:58:02 localhost podman[316032]: 2026-02-23 09:58:02.337404278 +0000 UTC m=+0.154898077 
container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.7, name=ubi9/ubi-minimal, architecture=x86_64, io.buildah.version=1.33.7) Feb 23 04:58:02 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 04:58:02 localhost podman[316026]: 2026-02-23 09:58:02.368029853 +0000 UTC m=+0.189305767 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 04:58:02 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:58:02 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:02.795 2 INFO neutron.agent.securitygroups_rpc [None req-7618cb81-6bfd-47c7-b5cf-c5e505798fed b34a74d70b104d598259f3881ae86305 8635084f010e445d861ab634b753fa27 - - default default] Security group member updated ['80d4c661-e254-4356-81d1-bc4c19a37e6b']#033[00m Feb 23 04:58:02 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:02.946 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:58:02Z, description=, device_id=54f5a813-ab0b-47f0-a6d1-e9a0141939d3, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d75bf4e2-af4a-404f-a124-ece359f55974, ip_allocation=immediate, mac_address=fa:16:3e:fe:33:d8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:57:57Z, description=, dns_domain=, id=e09301e8-3acf-46e4-a651-d747e131d2ae, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-2015726860, port_security_enabled=True, project_id=6aadd525d3dd402cb701922115d00291, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=28639, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1910, status=ACTIVE, subnets=['1bc0ced5-edae-4c12-b5a4-0f79e7badd61'], tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:57:59Z, vlan_transparent=None, network_id=e09301e8-3acf-46e4-a651-d747e131d2ae, port_security_enabled=False, project_id=6aadd525d3dd402cb701922115d00291, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1942, status=DOWN, tags=[], 
tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:02Z on network e09301e8-3acf-46e4-a651-d747e131d2ae#033[00m Feb 23 04:58:03 localhost nova_compute[280321]: 2026-02-23 09:58:03.106 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:03 localhost dnsmasq[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/addn_hosts - 0 addresses Feb 23 04:58:03 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/host Feb 23 04:58:03 localhost dnsmasq-dhcp[315284]: read /var/lib/neutron/dhcp/44f3daac-e320-4509-ada7-e5bee4e63e34/opts Feb 23 04:58:03 localhost podman[316102]: 2026-02-23 09:58:03.116799793 +0000 UTC m=+0.109906481 container kill 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:58:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v265: 177 pgs: 177 active+clean; 145 MiB data, 800 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 2.4 KiB/s wr, 65 op/s Feb 23 04:58:03 localhost dnsmasq[315995]: read /var/lib/neutron/dhcp/e09301e8-3acf-46e4-a651-d747e131d2ae/addn_hosts - 1 addresses Feb 23 04:58:03 localhost dnsmasq-dhcp[315995]: read /var/lib/neutron/dhcp/e09301e8-3acf-46e4-a651-d747e131d2ae/host Feb 23 04:58:03 localhost podman[316134]: 2026-02-23 09:58:03.230339883 +0000 UTC m=+0.058336304 container kill e9faed713120137b247d929ee3ad3da2394b38cf047df2ada2ec7e2b1a1fe346 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e09301e8-3acf-46e4-a651-d747e131d2ae, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:58:03 localhost dnsmasq-dhcp[315995]: read /var/lib/neutron/dhcp/e09301e8-3acf-46e4-a651-d747e131d2ae/opts Feb 23 04:58:03 localhost dnsmasq[315542]: exiting on receipt of SIGTERM Feb 23 04:58:03 localhost podman[316175]: 2026-02-23 09:58:03.454542466 +0000 UTC m=+0.069242806 container kill 196f95bca4c912448f61c297b024e60aa6dbee6aa6ca2298083478ea21987abc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d73d663-9a0c-40f3-ba95-35aaf6a12f72, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 23 04:58:03 localhost systemd[1]: libpod-196f95bca4c912448f61c297b024e60aa6dbee6aa6ca2298083478ea21987abc.scope: Deactivated successfully. 
Feb 23 04:58:03 localhost podman[316189]: 2026-02-23 09:58:03.540106642 +0000 UTC m=+0.061942054 container died 196f95bca4c912448f61c297b024e60aa6dbee6aa6ca2298083478ea21987abc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d73d663-9a0c-40f3-ba95-35aaf6a12f72, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:58:03 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-196f95bca4c912448f61c297b024e60aa6dbee6aa6ca2298083478ea21987abc-userdata-shm.mount: Deactivated successfully. Feb 23 04:58:03 localhost podman[316189]: 2026-02-23 09:58:03.570148141 +0000 UTC m=+0.091983503 container cleanup 196f95bca4c912448f61c297b024e60aa6dbee6aa6ca2298083478ea21987abc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d73d663-9a0c-40f3-ba95-35aaf6a12f72, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 23 04:58:03 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:03.573 263679 INFO neutron.agent.dhcp.agent [None req-ae091390-bf2a-4999-b63d-de2ad8e09928 - - - - - -] DHCP configuration for ports {'d75bf4e2-af4a-404f-a124-ece359f55974'} is completed#033[00m Feb 23 04:58:03 localhost systemd[1]: libpod-conmon-196f95bca4c912448f61c297b024e60aa6dbee6aa6ca2298083478ea21987abc.scope: Deactivated successfully. 
Feb 23 04:58:03 localhost podman[316191]: 2026-02-23 09:58:03.615667292 +0000 UTC m=+0.135075400 container remove 196f95bca4c912448f61c297b024e60aa6dbee6aa6ca2298083478ea21987abc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d73d663-9a0c-40f3-ba95-35aaf6a12f72, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:58:03 localhost nova_compute[280321]: 2026-02-23 09:58:03.649 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:03 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:03.859 263679 INFO neutron.agent.dhcp.agent [None req-cf15dc1e-8947-46a0-b333-1a3d7c6a6fbb - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:58:03 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:03.899 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:58:04 localhost dnsmasq[315284]: exiting on receipt of SIGTERM Feb 23 04:58:04 localhost podman[316234]: 2026-02-23 09:58:04.377138169 +0000 UTC m=+0.055997923 container kill 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 04:58:04 localhost systemd[1]: libpod-2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe.scope: Deactivated successfully. Feb 23 04:58:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:04.423 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:58:04 localhost podman[316248]: 2026-02-23 09:58:04.449496841 +0000 UTC m=+0.048172334 container died 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:58:04 localhost systemd[1]: var-lib-containers-storage-overlay-d6ade5b29bf3aecb7305aa3b8751dbeb62f3ffb237572ee8927ce1dc2c83b526-merged.mount: Deactivated successfully. Feb 23 04:58:04 localhost systemd[1]: run-netns-qdhcp\x2d7d73d663\x2d9a0c\x2d40f3\x2dba95\x2d35aaf6a12f72.mount: Deactivated successfully. Feb 23 04:58:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe-userdata-shm.mount: Deactivated successfully. Feb 23 04:58:04 localhost systemd[1]: var-lib-containers-storage-overlay-14d9760efb9ca272e8a8e0b660de387af53648624fe1cbfc672570f210c6b7f3-merged.mount: Deactivated successfully. 
Feb 23 04:58:04 localhost podman[316248]: 2026-02-23 09:58:04.552341464 +0000 UTC m=+0.151016967 container remove 2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-44f3daac-e320-4509-ada7-e5bee4e63e34, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:58:04 localhost systemd[1]: libpod-conmon-2c8f7c245d7f0d4bd223bff9e5170a89a38f1bb4e98efdba315badfbbac4defe.scope: Deactivated successfully. Feb 23 04:58:04 localhost ovn_controller[155966]: 2026-02-23T09:58:04Z|00244|binding|INFO|Releasing lport 105a6224-50e5-41ab-b7fa-43bf3e4e9e34 from this chassis (sb_readonly=0) Feb 23 04:58:04 localhost kernel: device tap105a6224-50 left promiscuous mode Feb 23 04:58:04 localhost nova_compute[280321]: 2026-02-23 09:58:04.597 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:04 localhost ovn_controller[155966]: 2026-02-23T09:58:04Z|00245|binding|INFO|Setting lport 105a6224-50e5-41ab-b7fa-43bf3e4e9e34 down in Southbound Feb 23 04:58:04 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:04.606 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 
'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-44f3daac-e320-4509-ada7-e5bee4e63e34', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44f3daac-e320-4509-ada7-e5bee4e63e34', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8635084f010e445d861ab634b753fa27', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7e9201-dc5e-4a76-b2c0-58b555faf24a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=105a6224-50e5-41ab-b7fa-43bf3e4e9e34) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:58:04 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:04.608 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 105a6224-50e5-41ab-b7fa-43bf3e4e9e34 in datapath 44f3daac-e320-4509-ada7-e5bee4e63e34 unbound from our chassis#033[00m Feb 23 04:58:04 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:04.610 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 44f3daac-e320-4509-ada7-e5bee4e63e34 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:58:04 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:04.611 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[b79b50ea-dd9b-4f13-b7c8-d7aa5abb4f5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:58:04 localhost nova_compute[280321]: 2026-02-23 09:58:04.621 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:04.629 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:58:02Z, description=, device_id=54f5a813-ab0b-47f0-a6d1-e9a0141939d3, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d75bf4e2-af4a-404f-a124-ece359f55974, ip_allocation=immediate, mac_address=fa:16:3e:fe:33:d8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:57:57Z, description=, dns_domain=, id=e09301e8-3acf-46e4-a651-d747e131d2ae, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-2015726860, port_security_enabled=True, project_id=6aadd525d3dd402cb701922115d00291, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=28639, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1910, status=ACTIVE, subnets=['1bc0ced5-edae-4c12-b5a4-0f79e7badd61'], tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:57:59Z, vlan_transparent=None, network_id=e09301e8-3acf-46e4-a651-d747e131d2ae, port_security_enabled=False, project_id=6aadd525d3dd402cb701922115d00291, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1942, status=DOWN, tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:02Z on network e09301e8-3acf-46e4-a651-d747e131d2ae#033[00m Feb 23 04:58:04 localhost nova_compute[280321]: 2026-02-23 09:58:04.819 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:04.849 263679 INFO neutron.agent.dhcp.agent [None req-99a13084-4f6e-408b-8827-07df29fe8956 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:58:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:04.850 263679 INFO neutron.agent.dhcp.agent [None req-99a13084-4f6e-408b-8827-07df29fe8956 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:58:04 localhost dnsmasq[315995]: read /var/lib/neutron/dhcp/e09301e8-3acf-46e4-a651-d747e131d2ae/addn_hosts - 1 addresses Feb 23 04:58:04 localhost dnsmasq-dhcp[315995]: read /var/lib/neutron/dhcp/e09301e8-3acf-46e4-a651-d747e131d2ae/host Feb 23 04:58:04 localhost podman[316294]: 2026-02-23 09:58:04.87083101 +0000 UTC m=+0.070444854 container kill e9faed713120137b247d929ee3ad3da2394b38cf047df2ada2ec7e2b1a1fe346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e09301e8-3acf-46e4-a651-d747e131d2ae, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0) Feb 23 04:58:04 localhost dnsmasq-dhcp[315995]: read /var/lib/neutron/dhcp/e09301e8-3acf-46e4-a651-d747e131d2ae/opts Feb 23 04:58:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_09:58:05 Feb 23 04:58:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 04:58:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap Feb 23 04:58:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['images', 'volumes', 'vms', 'manila_metadata', 'backups', '.mgr', 
'manila_data'] Feb 23 04:58:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes Feb 23 04:58:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:58:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:58:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:58:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:58:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:58:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v266: 177 pgs: 177 active+clean; 145 MiB data, 800 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 2.2 KiB/s wr, 64 op/s Feb 23 04:58:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:58:05 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:05.205 263679 INFO neutron.agent.dhcp.agent [None req-b7ab0820-f93b-4a86-a8bc-d420abbace95 - - - - - -] DHCP configuration for ports {'d75bf4e2-af4a-404f-a124-ece359f55974'} is completed#033[00m Feb 23 04:58:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 04:58:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:58:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 23 04:58:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:58:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32) Feb 23 04:58:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:58:05 
localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021774090359203424 quantized to 32 (current 32) Feb 23 04:58:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:58:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 23 04:58:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:58:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:58:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:58:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:58:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:58:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Feb 23 04:58:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 04:58:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:58:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 04:58:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:58:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:58:05 localhost ceph-mgr[285904]: [rbd_support INFO root] 
load_schedules: images, start_after= Feb 23 04:58:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:58:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:58:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:58:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:58:05 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:05.334 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:58:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:58:05 localhost systemd[1]: run-netns-qdhcp\x2d44f3daac\x2de320\x2d4509\x2dada7\x2de5bee4e63e34.mount: Deactivated successfully. Feb 23 04:58:05 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:05.845 263679 INFO neutron.agent.linux.ip_lib [None req-3acb1c18-f471-449d-a90b-e7e5f2989ced - - - - - -] Device tap7a9dcd21-c8 cannot be used as it has no MAC address#033[00m Feb 23 04:58:05 localhost nova_compute[280321]: 2026-02-23 09:58:05.911 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:05 localhost kernel: device tap7a9dcd21-c8 entered promiscuous mode Feb 23 04:58:05 localhost ovn_controller[155966]: 2026-02-23T09:58:05Z|00246|binding|INFO|Claiming lport 7a9dcd21-c8b2-478c-b896-4f8a422644c3 for this chassis. 
Feb 23 04:58:05 localhost ovn_controller[155966]: 2026-02-23T09:58:05Z|00247|binding|INFO|7a9dcd21-c8b2-478c-b896-4f8a422644c3: Claiming unknown Feb 23 04:58:05 localhost NetworkManager[5987]: [1771840685.9200] manager: (tap7a9dcd21-c8): new Generic device (/org/freedesktop/NetworkManager/Devices/47) Feb 23 04:58:05 localhost nova_compute[280321]: 2026-02-23 09:58:05.921 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:05 localhost systemd-udevd[316324]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:58:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:05.930 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-21e03ffd-eb4a-4654-a33c-972e5093a567', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21e03ffd-eb4a-4654-a33c-972e5093a567', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '182b0ebb06754cfab10ebabcdf7056ed', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b6da24e-fe64-45df-8028-adca1dd688d3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7a9dcd21-c8b2-478c-b896-4f8a422644c3) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:58:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:05.932 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 7a9dcd21-c8b2-478c-b896-4f8a422644c3 in datapath 21e03ffd-eb4a-4654-a33c-972e5093a567 bound to our chassis#033[00m Feb 23 04:58:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:05.935 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 21e03ffd-eb4a-4654-a33c-972e5093a567 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:58:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:05.937 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d0dc67-f370-4b2b-be74-ec9b066e4703]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:58:05 localhost journal[229268]: ethtool ioctl error on tap7a9dcd21-c8: No such device Feb 23 04:58:05 localhost ovn_controller[155966]: 2026-02-23T09:58:05Z|00248|binding|INFO|Setting lport 7a9dcd21-c8b2-478c-b896-4f8a422644c3 ovn-installed in OVS Feb 23 04:58:05 localhost ovn_controller[155966]: 2026-02-23T09:58:05Z|00249|binding|INFO|Setting lport 7a9dcd21-c8b2-478c-b896-4f8a422644c3 up in Southbound Feb 23 04:58:05 localhost nova_compute[280321]: 2026-02-23 09:58:05.960 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:05 localhost journal[229268]: ethtool ioctl error on tap7a9dcd21-c8: No such device Feb 23 04:58:05 localhost journal[229268]: ethtool ioctl error on tap7a9dcd21-c8: No such device Feb 23 04:58:05 localhost journal[229268]: ethtool ioctl error on tap7a9dcd21-c8: No such device Feb 23 04:58:05 localhost journal[229268]: ethtool ioctl error on tap7a9dcd21-c8: No such device Feb 23 
04:58:05 localhost journal[229268]: ethtool ioctl error on tap7a9dcd21-c8: No such device Feb 23 04:58:05 localhost journal[229268]: ethtool ioctl error on tap7a9dcd21-c8: No such device Feb 23 04:58:05 localhost journal[229268]: ethtool ioctl error on tap7a9dcd21-c8: No such device Feb 23 04:58:06 localhost nova_compute[280321]: 2026-02-23 09:58:06.003 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:06 localhost nova_compute[280321]: 2026-02-23 09:58:06.040 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:06 localhost sshd[316365]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:58:06 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:06.391 2 INFO neutron.agent.securitygroups_rpc [None req-f1c412d9-695e-47a0-9d6e-ba30fd3bc526 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:06 localhost dnsmasq[315995]: read /var/lib/neutron/dhcp/e09301e8-3acf-46e4-a651-d747e131d2ae/addn_hosts - 0 addresses Feb 23 04:58:06 localhost dnsmasq-dhcp[315995]: read /var/lib/neutron/dhcp/e09301e8-3acf-46e4-a651-d747e131d2ae/host Feb 23 04:58:06 localhost podman[316386]: 2026-02-23 09:58:06.50191877 +0000 UTC m=+0.070921439 container kill e9faed713120137b247d929ee3ad3da2394b38cf047df2ada2ec7e2b1a1fe346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e09301e8-3acf-46e4-a651-d747e131d2ae, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:58:06 localhost dnsmasq-dhcp[315995]: read /var/lib/neutron/dhcp/e09301e8-3acf-46e4-a651-d747e131d2ae/opts Feb 23 04:58:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:58:06 localhost systemd[1]: tmp-crun.mJZNRX.mount: Deactivated successfully. Feb 23 04:58:06 localhost podman[316402]: 2026-02-23 09:58:06.628797028 +0000 UTC m=+0.100630886 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true) Feb 23 04:58:06 localhost 
podman[316402]: 2026-02-23 09:58:06.66255851 +0000 UTC m=+0.134392288 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:58:06 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:58:06 localhost ovn_controller[155966]: 2026-02-23T09:58:06Z|00250|binding|INFO|Releasing lport 14306afc-e15b-42d0-b398-ee5dd9ebaeb0 from this chassis (sb_readonly=0) Feb 23 04:58:06 localhost ovn_controller[155966]: 2026-02-23T09:58:06Z|00251|binding|INFO|Setting lport 14306afc-e15b-42d0-b398-ee5dd9ebaeb0 down in Southbound Feb 23 04:58:06 localhost kernel: device tap14306afc-e1 left promiscuous mode Feb 23 04:58:06 localhost nova_compute[280321]: 2026-02-23 09:58:06.686 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:06 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:06.694 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-e09301e8-3acf-46e4-a651-d747e131d2ae', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e09301e8-3acf-46e4-a651-d747e131d2ae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aadd525d3dd402cb701922115d00291', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5171e39-61d3-464d-9996-dd01dabd2303, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=14306afc-e15b-42d0-b398-ee5dd9ebaeb0) 
old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:58:06 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:06.696 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 14306afc-e15b-42d0-b398-ee5dd9ebaeb0 in datapath e09301e8-3acf-46e4-a651-d747e131d2ae unbound from our chassis#033[00m Feb 23 04:58:06 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:06.700 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e09301e8-3acf-46e4-a651-d747e131d2ae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:58:06 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:06.701 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[665b3390-86b3-467d-b7d5-d3ffa08f303d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:58:06 localhost nova_compute[280321]: 2026-02-23 09:58:06.703 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:07 localhost podman[316462]: Feb 23 04:58:07 localhost podman[316462]: 2026-02-23 09:58:07.014684205 +0000 UTC m=+0.098050799 container create dee6bb02881f590f5956c018701ef23b2e896da0bc6e79327695536bbae0fb06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-21e03ffd-eb4a-4654-a33c-972e5093a567, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true) Feb 23 04:58:07 localhost systemd[1]: Started 
libpod-conmon-dee6bb02881f590f5956c018701ef23b2e896da0bc6e79327695536bbae0fb06.scope. Feb 23 04:58:07 localhost podman[316462]: 2026-02-23 09:58:06.969421221 +0000 UTC m=+0.052787865 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:58:07 localhost systemd[1]: Started libcrun container. Feb 23 04:58:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e3791132124969ec874d3332874b33759df763aa332be98eee0022e889f133/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:58:07 localhost podman[316462]: 2026-02-23 09:58:07.091245296 +0000 UTC m=+0.174611880 container init dee6bb02881f590f5956c018701ef23b2e896da0bc6e79327695536bbae0fb06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-21e03ffd-eb4a-4654-a33c-972e5093a567, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216) Feb 23 04:58:07 localhost podman[316462]: 2026-02-23 09:58:07.099307621 +0000 UTC m=+0.182674215 container start dee6bb02881f590f5956c018701ef23b2e896da0bc6e79327695536bbae0fb06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-21e03ffd-eb4a-4654-a33c-972e5093a567, tcib_managed=true, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:58:07 localhost dnsmasq[316480]: started, version 2.85 cachesize 150 Feb 23 04:58:07 
localhost dnsmasq[316480]: DNS service limited to local subnets Feb 23 04:58:07 localhost dnsmasq[316480]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:58:07 localhost dnsmasq[316480]: warning: no upstream servers configured Feb 23 04:58:07 localhost dnsmasq-dhcp[316480]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:58:07 localhost dnsmasq[316480]: read /var/lib/neutron/dhcp/21e03ffd-eb4a-4654-a33c-972e5093a567/addn_hosts - 0 addresses Feb 23 04:58:07 localhost dnsmasq-dhcp[316480]: read /var/lib/neutron/dhcp/21e03ffd-eb4a-4654-a33c-972e5093a567/host Feb 23 04:58:07 localhost dnsmasq-dhcp[316480]: read /var/lib/neutron/dhcp/21e03ffd-eb4a-4654-a33c-972e5093a567/opts Feb 23 04:58:07 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:07.157 2 INFO neutron.agent.securitygroups_rpc [None req-a9451f2f-78ed-41db-9d83-c983c36607eb 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v267: 177 pgs: 177 active+clean; 145 MiB data, 800 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 2.0 KiB/s wr, 57 op/s Feb 23 04:58:07 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:07.269 263679 INFO neutron.agent.dhcp.agent [None req-66500fd8-3286-4617-9bb1-b523ee2ec294 - - - - - -] DHCP configuration for ports {'797bfb16-7137-4fd1-843e-1a0a1fd86748'} is completed#033[00m Feb 23 04:58:07 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:07.279 2 INFO neutron.agent.securitygroups_rpc [None req-c82f4f13-a6c6-4dfc-aae6-5892f71ca6d5 8ff2abb777c74a6dbae4721d46f0d17a 182b0ebb06754cfab10ebabcdf7056ed - - default default] Security group member updated ['c029b069-aec5-44a4-9af0-e58cbf64895c']#033[00m Feb 23 04:58:07 localhost 
neutron_dhcp_agent[263675]: 2026-02-23 09:58:07.309 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:58:06Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f64fbb66-beda-4f8a-a361-4fcba16f5df1, ip_allocation=immediate, mac_address=fa:16:3e:6f:08:de, name=tempest-TagsExtTest-1845797451, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:58:02Z, description=, dns_domain=, id=21e03ffd-eb4a-4654-a33c-972e5093a567, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TagsExtTest-test-network-1282037559, port_security_enabled=True, project_id=182b0ebb06754cfab10ebabcdf7056ed, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12413, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1947, status=ACTIVE, subnets=['145fc8ea-0efa-4c87-95b3-bd4ac6f6cc41'], tags=[], tenant_id=182b0ebb06754cfab10ebabcdf7056ed, updated_at=2026-02-23T09:58:04Z, vlan_transparent=None, network_id=21e03ffd-eb4a-4654-a33c-972e5093a567, port_security_enabled=True, project_id=182b0ebb06754cfab10ebabcdf7056ed, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['c029b069-aec5-44a4-9af0-e58cbf64895c'], standard_attr_id=1973, status=DOWN, tags=[], tenant_id=182b0ebb06754cfab10ebabcdf7056ed, updated_at=2026-02-23T09:58:07Z on network 21e03ffd-eb4a-4654-a33c-972e5093a567#033[00m Feb 23 04:58:07 localhost dnsmasq[316480]: read /var/lib/neutron/dhcp/21e03ffd-eb4a-4654-a33c-972e5093a567/addn_hosts - 1 addresses Feb 23 04:58:07 localhost podman[316499]: 2026-02-23 09:58:07.558275131 +0000 UTC m=+0.074359563 
container kill dee6bb02881f590f5956c018701ef23b2e896da0bc6e79327695536bbae0fb06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-21e03ffd-eb4a-4654-a33c-972e5093a567, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 23 04:58:07 localhost dnsmasq-dhcp[316480]: read /var/lib/neutron/dhcp/21e03ffd-eb4a-4654-a33c-972e5093a567/host Feb 23 04:58:07 localhost dnsmasq-dhcp[316480]: read /var/lib/neutron/dhcp/21e03ffd-eb4a-4654-a33c-972e5093a567/opts Feb 23 04:58:07 localhost dnsmasq[315995]: exiting on receipt of SIGTERM Feb 23 04:58:07 localhost podman[316529]: 2026-02-23 09:58:07.723839212 +0000 UTC m=+0.072870168 container kill e9faed713120137b247d929ee3ad3da2394b38cf047df2ada2ec7e2b1a1fe346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e09301e8-3acf-46e4-a651-d747e131d2ae, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:58:07 localhost systemd[1]: libpod-e9faed713120137b247d929ee3ad3da2394b38cf047df2ada2ec7e2b1a1fe346.scope: Deactivated successfully. 
Feb 23 04:58:07 localhost podman[316546]: 2026-02-23 09:58:07.803717914 +0000 UTC m=+0.062542062 container died e9faed713120137b247d929ee3ad3da2394b38cf047df2ada2ec7e2b1a1fe346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e09301e8-3acf-46e4-a651-d747e131d2ae, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216) Feb 23 04:58:07 localhost podman[316546]: 2026-02-23 09:58:07.839424106 +0000 UTC m=+0.098248204 container cleanup e9faed713120137b247d929ee3ad3da2394b38cf047df2ada2ec7e2b1a1fe346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e09301e8-3acf-46e4-a651-d747e131d2ae, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0) Feb 23 04:58:07 localhost systemd[1]: libpod-conmon-e9faed713120137b247d929ee3ad3da2394b38cf047df2ada2ec7e2b1a1fe346.scope: Deactivated successfully. 
Feb 23 04:58:07 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:07.846 263679 INFO neutron.agent.dhcp.agent [None req-6aaf9c96-0f54-4c06-a17a-2c3a9793ffc3 - - - - - -] DHCP configuration for ports {'f64fbb66-beda-4f8a-a361-4fcba16f5df1'} is completed#033[00m Feb 23 04:58:07 localhost podman[316548]: 2026-02-23 09:58:07.903686241 +0000 UTC m=+0.153032800 container remove e9faed713120137b247d929ee3ad3da2394b38cf047df2ada2ec7e2b1a1fe346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e09301e8-3acf-46e4-a651-d747e131d2ae, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:58:07 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:07.937 263679 INFO neutron.agent.dhcp.agent [None req-2f3f3838-d4ef-4f1f-80d7-474056f290e7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:58:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:08.080 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:58:08 localhost nova_compute[280321]: 2026-02-23 09:58:08.111 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:08 localhost nova_compute[280321]: 2026-02-23 09:58:08.342 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:08 localhost systemd[1]: var-lib-containers-storage-overlay-2edc4cfad855da037dbfc9f6a35a0a5ad46961bff067b5efd637be9f4e8e704c-merged.mount: Deactivated successfully. 
Feb 23 04:58:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e9faed713120137b247d929ee3ad3da2394b38cf047df2ada2ec7e2b1a1fe346-userdata-shm.mount: Deactivated successfully. Feb 23 04:58:08 localhost systemd[1]: run-netns-qdhcp\x2de09301e8\x2d3acf\x2d46e4\x2da651\x2dd747e131d2ae.mount: Deactivated successfully. Feb 23 04:58:08 localhost nova_compute[280321]: 2026-02-23 09:58:08.691 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v268: 177 pgs: 177 active+clean; 145 MiB data, 800 MiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 1.8 KiB/s wr, 51 op/s Feb 23 04:58:09 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:09.760 2 INFO neutron.agent.securitygroups_rpc [None req-32f4623b-5152-4fb4-8665-550d3831cd54 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e140 e140: 6 total, 6 up, 6 in Feb 23 04:58:10 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:10.350 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:58:10 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:10.352 161842 DEBUG 
neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 04:58:10 localhost nova_compute[280321]: 2026-02-23 09:58:10.352 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:58:10 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:10.985 2 INFO neutron.agent.securitygroups_rpc [None req-19ae9284-14e3-4a20-834d-20dede799690 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v270: 177 pgs: 177 active+clean; 145 MiB data, 800 MiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 673 B/s wr, 2 op/s Feb 23 04:58:12 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e141 e141: 6 total, 6 up, 6 in Feb 23 04:58:12 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:58:12 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/249112054' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:58:12 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:58:12 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/249112054' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:58:12 localhost podman[241086]: time="2026-02-23T09:58:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:58:12 localhost podman[241086]: @ - - [23/Feb/2026:09:58:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1" Feb 23 04:58:12 localhost podman[241086]: @ - - [23/Feb/2026:09:58:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18279 "" "Go-http-client/1.1" Feb 23 04:58:13 localhost nova_compute[280321]: 2026-02-23 09:58:13.115 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v272: 177 pgs: 177 active+clean; 145 MiB data, 800 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 2.1 KiB/s wr, 24 op/s Feb 23 04:58:13 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:13.354 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:58:13 localhost nova_compute[280321]: 2026-02-23 09:58:13.740 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:13 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:13.819 2 INFO neutron.agent.securitygroups_rpc [None req-eb522331-80d1-4b00-bd36-6fd8378962f5 982b83c89c37422a910f5359ef7b6ea5 
5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:58:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:58:15 localhost podman[316572]: 2026-02-23 09:58:15.022694418 +0000 UTC m=+0.087834056 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 23 04:58:15 localhost podman[316571]: 2026-02-23 09:58:15.075235235 +0000 UTC m=+0.141896069 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 23 04:58:15 localhost podman[316571]: 2026-02-23 09:58:15.083817397 +0000 UTC m=+0.150478241 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Feb 23 04:58:15 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:58:15 localhost podman[316572]: 2026-02-23 09:58:15.141940153 +0000 UTC m=+0.207079801 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 23 04:58:15 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:58:15 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:15.157 2 INFO neutron.agent.securitygroups_rpc [None req-4d44920c-4a63-4197-972a-c30d277ee529 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:15 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:15.162 2 INFO neutron.agent.securitygroups_rpc [None req-54919a2c-3a58-4a60-9867-0e5c23ba956a 8ff2abb777c74a6dbae4721d46f0d17a 182b0ebb06754cfab10ebabcdf7056ed - - default default] Security group member updated ['c029b069-aec5-44a4-9af0-e58cbf64895c']#033[00m Feb 23 04:58:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 145 MiB data, 800 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 2.1 KiB/s wr, 24 op/s Feb 23 04:58:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:58:15 localhost dnsmasq[316480]: read /var/lib/neutron/dhcp/21e03ffd-eb4a-4654-a33c-972e5093a567/addn_hosts - 0 addresses Feb 23 04:58:15 localhost 
podman[316623]: 2026-02-23 09:58:15.452036102 +0000 UTC m=+0.063094669 container kill dee6bb02881f590f5956c018701ef23b2e896da0bc6e79327695536bbae0fb06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-21e03ffd-eb4a-4654-a33c-972e5093a567, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:58:15 localhost dnsmasq-dhcp[316480]: read /var/lib/neutron/dhcp/21e03ffd-eb4a-4654-a33c-972e5093a567/host Feb 23 04:58:15 localhost dnsmasq-dhcp[316480]: read /var/lib/neutron/dhcp/21e03ffd-eb4a-4654-a33c-972e5093a567/opts Feb 23 04:58:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:58:15 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3497166256' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:58:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:58:15 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3497166256' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:58:15 localhost dnsmasq[316480]: exiting on receipt of SIGTERM Feb 23 04:58:15 localhost podman[316660]: 2026-02-23 09:58:15.89457964 +0000 UTC m=+0.070390052 container kill dee6bb02881f590f5956c018701ef23b2e896da0bc6e79327695536bbae0fb06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-21e03ffd-eb4a-4654-a33c-972e5093a567, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:58:15 localhost systemd[1]: libpod-dee6bb02881f590f5956c018701ef23b2e896da0bc6e79327695536bbae0fb06.scope: Deactivated successfully. 
Feb 23 04:58:15 localhost podman[316674]: 2026-02-23 09:58:15.973302897 +0000 UTC m=+0.060110098 container died dee6bb02881f590f5956c018701ef23b2e896da0bc6e79327695536bbae0fb06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-21e03ffd-eb4a-4654-a33c-972e5093a567, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:58:16 localhost podman[316674]: 2026-02-23 09:58:16.003935693 +0000 UTC m=+0.090742844 container cleanup dee6bb02881f590f5956c018701ef23b2e896da0bc6e79327695536bbae0fb06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-21e03ffd-eb4a-4654-a33c-972e5093a567, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:58:16 localhost systemd[1]: libpod-conmon-dee6bb02881f590f5956c018701ef23b2e896da0bc6e79327695536bbae0fb06.scope: Deactivated successfully. Feb 23 04:58:16 localhost systemd[1]: tmp-crun.hGQ1jr.mount: Deactivated successfully. Feb 23 04:58:16 localhost systemd[1]: var-lib-containers-storage-overlay-07e3791132124969ec874d3332874b33759df763aa332be98eee0022e889f133-merged.mount: Deactivated successfully. Feb 23 04:58:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dee6bb02881f590f5956c018701ef23b2e896da0bc6e79327695536bbae0fb06-userdata-shm.mount: Deactivated successfully. 
Feb 23 04:58:16 localhost podman[316676]: 2026-02-23 09:58:16.052933351 +0000 UTC m=+0.132225823 container remove dee6bb02881f590f5956c018701ef23b2e896da0bc6e79327695536bbae0fb06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-21e03ffd-eb4a-4654-a33c-972e5093a567, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:58:16 localhost ovn_controller[155966]: 2026-02-23T09:58:16Z|00252|binding|INFO|Releasing lport 7a9dcd21-c8b2-478c-b896-4f8a422644c3 from this chassis (sb_readonly=0) Feb 23 04:58:16 localhost ovn_controller[155966]: 2026-02-23T09:58:16Z|00253|binding|INFO|Setting lport 7a9dcd21-c8b2-478c-b896-4f8a422644c3 down in Southbound Feb 23 04:58:16 localhost kernel: device tap7a9dcd21-c8 left promiscuous mode Feb 23 04:58:16 localhost nova_compute[280321]: 2026-02-23 09:58:16.068 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:16 localhost nova_compute[280321]: 2026-02-23 09:58:16.090 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:16.092 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-21e03ffd-eb4a-4654-a33c-972e5093a567', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21e03ffd-eb4a-4654-a33c-972e5093a567', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '182b0ebb06754cfab10ebabcdf7056ed', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b6da24e-fe64-45df-8028-adca1dd688d3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7a9dcd21-c8b2-478c-b896-4f8a422644c3) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:58:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:16.095 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 7a9dcd21-c8b2-478c-b896-4f8a422644c3 in datapath 21e03ffd-eb4a-4654-a33c-972e5093a567 unbound from our chassis#033[00m Feb 23 04:58:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:16.098 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 21e03ffd-eb4a-4654-a33c-972e5093a567, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:58:16 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:16.099 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[62a3dda5-b5f4-4c2d-a72b-fb79b9959606]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:58:16 localhost systemd[1]: run-netns-qdhcp\x2d21e03ffd\x2deb4a\x2d4654\x2da33c\x2d972e5093a567.mount: Deactivated successfully. 
Feb 23 04:58:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:16.123 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:58:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v274: 177 pgs: 177 active+clean; 145 MiB data, 800 MiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 3.9 KiB/s wr, 88 op/s Feb 23 04:58:17 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:17.201 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:58:17 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:17.717 2 INFO neutron.agent.securitygroups_rpc [None req-d1c62e78-6e68-4888-a717-f17406167923 1a9e25d9a0c746578e1b6c457935b6c2 983d362fe1064ddd8f80d65a731f1168 - - default default] Security group member updated ['011ab8d8-354c-4fb1-b0db-21af2eca313e']#033[00m Feb 23 04:58:17 localhost nova_compute[280321]: 2026-02-23 09:58:17.882 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:18 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:18.041 2 INFO neutron.agent.securitygroups_rpc [None req-45386671-0a66-4623-b814-6d3841258b3c 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:18 localhost nova_compute[280321]: 2026-02-23 09:58:18.116 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:18 localhost nova_compute[280321]: 2026-02-23 09:58:18.742 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:18 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:18.766 2 INFO neutron.agent.securitygroups_rpc 
[None req-ad743b42-9d5f-46bc-bf0d-b2f432d91b64 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:18 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:18.886 263679 INFO neutron.agent.linux.ip_lib [None req-cfb42411-4e5b-4aa5-b1b7-c7be48ba811a - - - - - -] Device tap123a8639-7f cannot be used as it has no MAC address#033[00m Feb 23 04:58:18 localhost nova_compute[280321]: 2026-02-23 09:58:18.938 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:18 localhost kernel: device tap123a8639-7f entered promiscuous mode Feb 23 04:58:18 localhost ovn_controller[155966]: 2026-02-23T09:58:18Z|00254|binding|INFO|Claiming lport 123a8639-7f06-4037-9659-fabf434618b0 for this chassis. Feb 23 04:58:18 localhost ovn_controller[155966]: 2026-02-23T09:58:18Z|00255|binding|INFO|123a8639-7f06-4037-9659-fabf434618b0: Claiming unknown Feb 23 04:58:18 localhost nova_compute[280321]: 2026-02-23 09:58:18.947 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:18 localhost NetworkManager[5987]: [1771840698.9515] manager: (tap123a8639-7f): new Generic device (/org/freedesktop/NetworkManager/Devices/48) Feb 23 04:58:18 localhost systemd-udevd[316715]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 04:58:18 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:18.969 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-e07cba29-ef48-459f-977e-be9d032d0685', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e07cba29-ef48-459f-977e-be9d032d0685', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aadd525d3dd402cb701922115d00291', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5fad930-a931-4346-849c-27ecbcd006f8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=123a8639-7f06-4037-9659-fabf434618b0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:58:18 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:18.971 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 123a8639-7f06-4037-9659-fabf434618b0 in datapath e07cba29-ef48-459f-977e-be9d032d0685 bound to our chassis#033[00m Feb 23 04:58:18 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:18.973 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e07cba29-ef48-459f-977e-be9d032d0685 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:58:18 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:18.974 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[e85ade78-0a9f-4bb6-8a5e-79afd2a1aa90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:58:18 localhost journal[229268]: ethtool ioctl error on tap123a8639-7f: No such device Feb 23 04:58:18 localhost ovn_controller[155966]: 2026-02-23T09:58:18Z|00256|binding|INFO|Setting lport 123a8639-7f06-4037-9659-fabf434618b0 ovn-installed in OVS Feb 23 04:58:18 localhost ovn_controller[155966]: 2026-02-23T09:58:18Z|00257|binding|INFO|Setting lport 123a8639-7f06-4037-9659-fabf434618b0 up in Southbound Feb 23 04:58:18 localhost journal[229268]: ethtool ioctl error on tap123a8639-7f: No such device Feb 23 04:58:18 localhost nova_compute[280321]: 2026-02-23 09:58:18.994 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:19 localhost journal[229268]: ethtool ioctl error on tap123a8639-7f: No such device Feb 23 04:58:19 localhost journal[229268]: ethtool ioctl error on tap123a8639-7f: No such device Feb 23 04:58:19 localhost journal[229268]: ethtool ioctl error on tap123a8639-7f: No such device Feb 23 04:58:19 localhost journal[229268]: ethtool ioctl error on tap123a8639-7f: No such device Feb 23 04:58:19 localhost journal[229268]: ethtool ioctl error on tap123a8639-7f: No such device Feb 23 04:58:19 localhost journal[229268]: ethtool ioctl error on tap123a8639-7f: No such device Feb 23 04:58:19 localhost nova_compute[280321]: 2026-02-23 09:58:19.030 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:19 localhost nova_compute[280321]: 2026-02-23 09:58:19.070 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v275: 177 pgs: 177 active+clean; 145 MiB data, 800 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 3.5 KiB/s wr, 79 op/s Feb 23 04:58:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:58:20 localhost podman[316779]: 2026-02-23 09:58:20.013763466 +0000 UTC m=+0.089911446 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:58:20 localhost podman[316779]: 2026-02-23 09:58:20.028805936 +0000 UTC m=+0.104953936 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:58:20 localhost podman[316790]: Feb 23 04:58:20 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:58:20 localhost podman[316790]: 2026-02-23 09:58:20.056883763 +0000 UTC m=+0.115221109 container create d84a0c40e5163b1c58d2ba2b5a0b86375542eb49f09e5ec8208382496ffd8c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e07cba29-ef48-459f-977e-be9d032d0685, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:58:20 localhost systemd[1]: Started libpod-conmon-d84a0c40e5163b1c58d2ba2b5a0b86375542eb49f09e5ec8208382496ffd8c7d.scope. Feb 23 04:58:20 localhost podman[316790]: 2026-02-23 09:58:20.009100634 +0000 UTC m=+0.067438010 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:58:20 localhost systemd[1]: Started libcrun container. 
Feb 23 04:58:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/030492da671dc9cc1fde1c4533f4155797c77677ef07aa499492b8c4a0767628/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:58:20 localhost podman[316790]: 2026-02-23 09:58:20.131156961 +0000 UTC m=+0.189494307 container init d84a0c40e5163b1c58d2ba2b5a0b86375542eb49f09e5ec8208382496ffd8c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e07cba29-ef48-459f-977e-be9d032d0685, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 23 04:58:20 localhost podman[316790]: 2026-02-23 09:58:20.140041922 +0000 UTC m=+0.198379278 container start d84a0c40e5163b1c58d2ba2b5a0b86375542eb49f09e5ec8208382496ffd8c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e07cba29-ef48-459f-977e-be9d032d0685, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260216) Feb 23 04:58:20 localhost dnsmasq[316826]: started, version 2.85 cachesize 150 Feb 23 04:58:20 localhost dnsmasq[316826]: DNS service limited to local subnets Feb 23 04:58:20 localhost dnsmasq[316826]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:58:20 localhost dnsmasq[316826]: warning: no upstream servers 
configured Feb 23 04:58:20 localhost dnsmasq-dhcp[316826]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:58:20 localhost dnsmasq[316826]: read /var/lib/neutron/dhcp/e07cba29-ef48-459f-977e-be9d032d0685/addn_hosts - 0 addresses Feb 23 04:58:20 localhost dnsmasq-dhcp[316826]: read /var/lib/neutron/dhcp/e07cba29-ef48-459f-977e-be9d032d0685/host Feb 23 04:58:20 localhost dnsmasq-dhcp[316826]: read /var/lib/neutron/dhcp/e07cba29-ef48-459f-977e-be9d032d0685/opts Feb 23 04:58:20 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:20.329 263679 INFO neutron.agent.dhcp.agent [None req-85fb2c31-0c0f-42bc-86d2-6e1eca52ae96 - - - - - -] DHCP configuration for ports {'a4fba49a-3488-4148-abeb-5c1af4591d88'} is completed#033[00m Feb 23 04:58:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:58:20 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:20.749 2 INFO neutron.agent.securitygroups_rpc [None req-e6124d39-875a-4fc5-8c30-4d7caf025748 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v276: 177 pgs: 177 active+clean; 145 MiB data, 800 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 2.5 KiB/s wr, 68 op/s Feb 23 04:58:21 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:58:21 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:58:21 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 04:58:21 localhost 
ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:58:21 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 04:58:21 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev fb2ef042-c8e2-4bce-acbf-921fe34844c4 (Updating node-proxy deployment (+3 -> 3)) Feb 23 04:58:21 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev fb2ef042-c8e2-4bce-acbf-921fe34844c4 (Updating node-proxy deployment (+3 -> 3)) Feb 23 04:58:21 localhost ceph-mgr[285904]: [progress INFO root] Completed event fb2ef042-c8e2-4bce-acbf-921fe34844c4 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 23 04:58:21 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 04:58:21 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 04:58:21 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:21.578 2 INFO neutron.agent.securitygroups_rpc [None req-362b8c0d-6857-4758-a913-b5b9ee733cd1 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:21 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:21.618 2 INFO neutron.agent.securitygroups_rpc [None req-12654231-88d0-4565-b196-1dda145060e4 1a9e25d9a0c746578e1b6c457935b6c2 983d362fe1064ddd8f80d65a731f1168 - - default default] Security group member updated ['011ab8d8-354c-4fb1-b0db-21af2eca313e']#033[00m Feb 23 04:58:21 localhost neutron_dhcp_agent[263675]: 2026-02-23 
09:58:21.640 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:58:21Z, description=, device_id=042758c8-61c7-4a93-beb5-a65e42c62fb4, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=04af5d5f-80d2-4842-bd74-c36c9c0860fa, ip_allocation=immediate, mac_address=fa:16:3e:cf:5d:5c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:58:15Z, description=, dns_domain=, id=e07cba29-ef48-459f-977e-be9d032d0685, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-64592326, port_security_enabled=True, project_id=6aadd525d3dd402cb701922115d00291, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=941, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2005, status=ACTIVE, subnets=['4cc688ca-b697-4c50-8b17-656cb5f3ed04'], tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:17Z, vlan_transparent=None, network_id=e07cba29-ef48-459f-977e-be9d032d0685, port_security_enabled=False, project_id=6aadd525d3dd402cb701922115d00291, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2051, status=DOWN, tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:21Z on network e07cba29-ef48-459f-977e-be9d032d0685#033[00m Feb 23 04:58:21 localhost dnsmasq[316826]: read /var/lib/neutron/dhcp/e07cba29-ef48-459f-977e-be9d032d0685/addn_hosts - 1 addresses Feb 23 04:58:21 localhost dnsmasq-dhcp[316826]: read /var/lib/neutron/dhcp/e07cba29-ef48-459f-977e-be9d032d0685/host Feb 23 04:58:21 localhost 
dnsmasq-dhcp[316826]: read /var/lib/neutron/dhcp/e07cba29-ef48-459f-977e-be9d032d0685/opts Feb 23 04:58:21 localhost podman[316929]: 2026-02-23 09:58:21.896518539 +0000 UTC m=+0.067457091 container kill d84a0c40e5163b1c58d2ba2b5a0b86375542eb49f09e5ec8208382496ffd8c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e07cba29-ef48-459f-977e-be9d032d0685, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 23 04:58:22 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e142 e142: 6 total, 6 up, 6 in Feb 23 04:58:22 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:22.243 263679 INFO neutron.agent.dhcp.agent [None req-b29211c1-787d-4850-bcc1-45ff83825b6b - - - - - -] DHCP configuration for ports {'04af5d5f-80d2-4842-bd74-c36c9c0860fa'} is completed#033[00m Feb 23 04:58:22 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:58:22 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:58:22 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:22.851 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:58:21Z, description=, device_id=042758c8-61c7-4a93-beb5-a65e42c62fb4, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=04af5d5f-80d2-4842-bd74-c36c9c0860fa, ip_allocation=immediate, 
mac_address=fa:16:3e:cf:5d:5c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:58:15Z, description=, dns_domain=, id=e07cba29-ef48-459f-977e-be9d032d0685, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-64592326, port_security_enabled=True, project_id=6aadd525d3dd402cb701922115d00291, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=941, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2005, status=ACTIVE, subnets=['4cc688ca-b697-4c50-8b17-656cb5f3ed04'], tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:17Z, vlan_transparent=None, network_id=e07cba29-ef48-459f-977e-be9d032d0685, port_security_enabled=False, project_id=6aadd525d3dd402cb701922115d00291, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2051, status=DOWN, tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:21Z on network e07cba29-ef48-459f-977e-be9d032d0685#033[00m Feb 23 04:58:22 localhost nova_compute[280321]: 2026-02-23 09:58:22.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:58:23 localhost dnsmasq[316826]: read /var/lib/neutron/dhcp/e07cba29-ef48-459f-977e-be9d032d0685/addn_hosts - 1 addresses Feb 23 04:58:23 localhost podman[316964]: 2026-02-23 09:58:23.08267571 +0000 UTC m=+0.063210762 container kill d84a0c40e5163b1c58d2ba2b5a0b86375542eb49f09e5ec8208382496ffd8c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e07cba29-ef48-459f-977e-be9d032d0685, io.buildah.version=1.43.0, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:58:23 localhost dnsmasq-dhcp[316826]: read /var/lib/neutron/dhcp/e07cba29-ef48-459f-977e-be9d032d0685/host Feb 23 04:58:23 localhost dnsmasq-dhcp[316826]: read /var/lib/neutron/dhcp/e07cba29-ef48-459f-977e-be9d032d0685/opts Feb 23 04:58:23 localhost nova_compute[280321]: 2026-02-23 09:58:23.151 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v278: 177 pgs: 177 active+clean; 145 MiB data, 800 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 1.4 KiB/s wr, 51 op/s Feb 23 04:58:23 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:23.408 263679 INFO neutron.agent.dhcp.agent [None req-5f3895e3-155a-4684-bc94-228f84859c5a - - - - - -] DHCP configuration for ports {'04af5d5f-80d2-4842-bd74-c36c9c0860fa'} is completed#033[00m Feb 23 04:58:23 localhost nova_compute[280321]: 2026-02-23 09:58:23.746 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:23 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:23.829 2 INFO neutron.agent.securitygroups_rpc [None req-843b8c0e-9c58-433a-b8a5-d142a0ae4b56 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:23 localhost nova_compute[280321]: 2026-02-23 09:58:23.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:58:23 localhost nova_compute[280321]: 2026-02-23 09:58:23.893 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:58:23 localhost nova_compute[280321]: 2026-02-23 09:58:23.893 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:58:23 localhost nova_compute[280321]: 2026-02-23 09:58:23.909 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:58:23 localhost nova_compute[280321]: 2026-02-23 09:58:23.910 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:58:23 localhost nova_compute[280321]: 2026-02-23 09:58:23.910 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:58:23 localhost nova_compute[280321]: 2026-02-23 09:58:23.929 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:58:23 localhost nova_compute[280321]: 2026-02-23 09:58:23.930 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:58:23 localhost nova_compute[280321]: 2026-02-23 09:58:23.930 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:58:23 localhost nova_compute[280321]: 2026-02-23 09:58:23.930 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:58:23 localhost nova_compute[280321]: 2026-02-23 09:58:23.932 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:58:24 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:58:24 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2779787032' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:58:24 localhost nova_compute[280321]: 2026-02-23 09:58:24.373 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:58:24 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:24.458 2 INFO neutron.agent.securitygroups_rpc [None req-f2783f1b-db17-4abb-b550-2a7eaeb7f1e9 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:24 localhost nova_compute[280321]: 2026-02-23 09:58:24.587 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:58:24 localhost nova_compute[280321]: 2026-02-23 09:58:24.590 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=11673MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:58:24 localhost nova_compute[280321]: 2026-02-23 09:58:24.590 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:58:24 localhost nova_compute[280321]: 2026-02-23 09:58:24.591 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:58:24 localhost nova_compute[280321]: 2026-02-23 09:58:24.686 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:58:24 localhost nova_compute[280321]: 2026-02-23 09:58:24.686 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:58:24 localhost nova_compute[280321]: 2026-02-23 09:58:24.713 280325 DEBUG 
oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:58:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:58:25 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3065101984' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:58:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v279: 177 pgs: 177 active+clean; 145 MiB data, 800 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 1.4 KiB/s wr, 51 op/s Feb 23 04:58:25 localhost nova_compute[280321]: 2026-02-23 09:58:25.202 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:58:25 localhost nova_compute[280321]: 2026-02-23 09:58:25.209 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:58:25 localhost nova_compute[280321]: 2026-02-23 09:58:25.246 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 
'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:58:25 localhost nova_compute[280321]: 2026-02-23 09:58:25.249 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:58:25 localhost nova_compute[280321]: 2026-02-23 09:58:25.249 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:58:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:58:25 localhost dnsmasq[316826]: read /var/lib/neutron/dhcp/e07cba29-ef48-459f-977e-be9d032d0685/addn_hosts - 0 addresses Feb 23 04:58:25 localhost dnsmasq-dhcp[316826]: read /var/lib/neutron/dhcp/e07cba29-ef48-459f-977e-be9d032d0685/host Feb 23 04:58:25 localhost podman[317043]: 2026-02-23 09:58:25.462288664 +0000 UTC m=+0.052805583 container kill d84a0c40e5163b1c58d2ba2b5a0b86375542eb49f09e5ec8208382496ffd8c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e07cba29-ef48-459f-977e-be9d032d0685, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0) Feb 23 04:58:25 localhost dnsmasq-dhcp[316826]: read /var/lib/neutron/dhcp/e07cba29-ef48-459f-977e-be9d032d0685/opts Feb 23 04:58:25 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 04:58:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 04:58:25 localhost ovn_controller[155966]: 2026-02-23T09:58:25Z|00258|binding|INFO|Releasing lport 123a8639-7f06-4037-9659-fabf434618b0 from this chassis (sb_readonly=0) Feb 23 04:58:25 localhost nova_compute[280321]: 2026-02-23 09:58:25.634 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:25 localhost kernel: device tap123a8639-7f left promiscuous mode Feb 23 04:58:25 localhost ovn_controller[155966]: 2026-02-23T09:58:25Z|00259|binding|INFO|Setting lport 123a8639-7f06-4037-9659-fabf434618b0 down in Southbound Feb 23 04:58:25 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:25.644 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-e07cba29-ef48-459f-977e-be9d032d0685', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e07cba29-ef48-459f-977e-be9d032d0685', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aadd525d3dd402cb701922115d00291', 
'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5fad930-a931-4346-849c-27ecbcd006f8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=123a8639-7f06-4037-9659-fabf434618b0) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:58:25 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:25.646 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 123a8639-7f06-4037-9659-fabf434618b0 in datapath e07cba29-ef48-459f-977e-be9d032d0685 unbound from our chassis#033[00m Feb 23 04:58:25 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:25.648 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e07cba29-ef48-459f-977e-be9d032d0685, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:58:25 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:25.651 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[ceceb6ae-2443-47b5-af1d-603a6a1689cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:58:25 localhost nova_compute[280321]: 2026-02-23 09:58:25.655 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:26 localhost nova_compute[280321]: 2026-02-23 09:58:26.231 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 
04:58:26 localhost nova_compute[280321]: 2026-02-23 09:58:26.232 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:58:26 localhost nova_compute[280321]: 2026-02-23 09:58:26.233 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:58:26 localhost nova_compute[280321]: 2026-02-23 09:58:26.233 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:58:26 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:26.248 2 INFO neutron.agent.securitygroups_rpc [None req-e81a47f4-2a39-4882-ae5e-f110dbf1c96f 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:26 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:58:26 localhost dnsmasq[316826]: exiting on receipt of SIGTERM Feb 23 04:58:26 localhost systemd[1]: libpod-d84a0c40e5163b1c58d2ba2b5a0b86375542eb49f09e5ec8208382496ffd8c7d.scope: Deactivated successfully. 
Feb 23 04:58:26 localhost podman[317082]: 2026-02-23 09:58:26.640523924 +0000 UTC m=+0.052347160 container kill d84a0c40e5163b1c58d2ba2b5a0b86375542eb49f09e5ec8208382496ffd8c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e07cba29-ef48-459f-977e-be9d032d0685, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:58:26 localhost podman[317097]: 2026-02-23 09:58:26.705482217 +0000 UTC m=+0.047211982 container died d84a0c40e5163b1c58d2ba2b5a0b86375542eb49f09e5ec8208382496ffd8c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e07cba29-ef48-459f-977e-be9d032d0685, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:58:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d84a0c40e5163b1c58d2ba2b5a0b86375542eb49f09e5ec8208382496ffd8c7d-userdata-shm.mount: Deactivated successfully. 
Feb 23 04:58:26 localhost podman[317097]: 2026-02-23 09:58:26.743451197 +0000 UTC m=+0.085180932 container cleanup d84a0c40e5163b1c58d2ba2b5a0b86375542eb49f09e5ec8208382496ffd8c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e07cba29-ef48-459f-977e-be9d032d0685, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:58:26 localhost systemd[1]: libpod-conmon-d84a0c40e5163b1c58d2ba2b5a0b86375542eb49f09e5ec8208382496ffd8c7d.scope: Deactivated successfully. Feb 23 04:58:26 localhost podman[317098]: 2026-02-23 09:58:26.803641235 +0000 UTC m=+0.138961544 container remove d84a0c40e5163b1c58d2ba2b5a0b86375542eb49f09e5ec8208382496ffd8c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e07cba29-ef48-459f-977e-be9d032d0685, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 04:58:26 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:26.848 263679 INFO neutron.agent.dhcp.agent [None req-7de9ab38-b41d-4876-ba2e-2d332793b05b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:58:26 localhost nova_compute[280321]: 2026-02-23 09:58:26.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:58:27 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:27.005 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:58:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v280: 177 pgs: 177 active+clean; 145 MiB data, 801 MiB used, 41 GiB / 42 GiB avail Feb 23 04:58:27 localhost nova_compute[280321]: 2026-02-23 09:58:27.250 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:27 localhost systemd[1]: var-lib-containers-storage-overlay-030492da671dc9cc1fde1c4533f4155797c77677ef07aa499492b8c4a0767628-merged.mount: Deactivated successfully. Feb 23 04:58:27 localhost systemd[1]: run-netns-qdhcp\x2de07cba29\x2def48\x2d459f\x2d977e\x2dbe9d032d0685.mount: Deactivated successfully. Feb 23 04:58:27 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:27.801 2 INFO neutron.agent.securitygroups_rpc [None req-99ddfa8e-0299-435b-a099-c8da64e3d700 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:28 localhost nova_compute[280321]: 2026-02-23 09:58:28.154 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:28 localhost nova_compute[280321]: 2026-02-23 09:58:28.778 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:28 localhost nova_compute[280321]: 2026-02-23 09:58:28.893 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:58:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v281: 177 pgs: 177 active+clean; 145 MiB data, 801 MiB used, 41 GiB / 42 GiB avail Feb 23 04:58:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:58:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v282: 177 pgs: 177 active+clean; 145 MiB data, 801 MiB used, 41 GiB / 42 GiB avail Feb 23 04:58:31 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:31.410 263679 INFO neutron.agent.linux.ip_lib [None req-4e27b308-5c98-4e92-b3c8-7440bf70d297 - - - - - -] Device tap9197be64-3b cannot be used as it has no MAC address#033[00m Feb 23 04:58:31 localhost nova_compute[280321]: 2026-02-23 09:58:31.475 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:31 localhost kernel: device tap9197be64-3b entered promiscuous mode Feb 23 04:58:31 localhost nova_compute[280321]: 2026-02-23 09:58:31.484 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:31 localhost NetworkManager[5987]: [1771840711.4869] manager: (tap9197be64-3b): new Generic device (/org/freedesktop/NetworkManager/Devices/49) Feb 23 04:58:31 localhost ovn_controller[155966]: 2026-02-23T09:58:31Z|00260|binding|INFO|Claiming lport 9197be64-3b5f-41f9-81e9-85047230fea1 for this chassis. 
Feb 23 04:58:31 localhost ovn_controller[155966]: 2026-02-23T09:58:31Z|00261|binding|INFO|9197be64-3b5f-41f9-81e9-85047230fea1: Claiming unknown Feb 23 04:58:31 localhost nova_compute[280321]: 2026-02-23 09:58:31.491 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:31 localhost systemd-udevd[317135]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:58:31 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:31.505 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-80a960fb-ec23-4087-bdee-274ab1c84980', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80a960fb-ec23-4087-bdee-274ab1c84980', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aadd525d3dd402cb701922115d00291', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=154b8549-91e1-4bc5-9cc4-2596ffb4a636, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9197be64-3b5f-41f9-81e9-85047230fea1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:58:31 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:31.507 161842 INFO 
neutron.agent.ovn.metadata.agent [-] Port 9197be64-3b5f-41f9-81e9-85047230fea1 in datapath 80a960fb-ec23-4087-bdee-274ab1c84980 bound to our chassis#033[00m Feb 23 04:58:31 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:31.512 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Port 5c5131a9-87e9-4d64-be26-a3a00c721e28 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 04:58:31 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:31.513 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 80a960fb-ec23-4087-bdee-274ab1c84980, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:58:31 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:31.514 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[df49b104-2b82-4d81-98cb-9c64d90f85ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:58:31 localhost ovn_controller[155966]: 2026-02-23T09:58:31Z|00262|binding|INFO|Setting lport 9197be64-3b5f-41f9-81e9-85047230fea1 ovn-installed in OVS Feb 23 04:58:31 localhost ovn_controller[155966]: 2026-02-23T09:58:31Z|00263|binding|INFO|Setting lport 9197be64-3b5f-41f9-81e9-85047230fea1 up in Southbound Feb 23 04:58:31 localhost journal[229268]: ethtool ioctl error on tap9197be64-3b: No such device Feb 23 04:58:31 localhost nova_compute[280321]: 2026-02-23 09:58:31.522 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:31 localhost journal[229268]: ethtool ioctl error on tap9197be64-3b: No such device Feb 23 04:58:31 localhost journal[229268]: ethtool ioctl error on tap9197be64-3b: No such device Feb 23 04:58:31 localhost journal[229268]: ethtool ioctl error on 
tap9197be64-3b: No such device Feb 23 04:58:31 localhost journal[229268]: ethtool ioctl error on tap9197be64-3b: No such device Feb 23 04:58:31 localhost journal[229268]: ethtool ioctl error on tap9197be64-3b: No such device Feb 23 04:58:31 localhost journal[229268]: ethtool ioctl error on tap9197be64-3b: No such device Feb 23 04:58:31 localhost journal[229268]: ethtool ioctl error on tap9197be64-3b: No such device Feb 23 04:58:31 localhost nova_compute[280321]: 2026-02-23 09:58:31.567 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:31 localhost nova_compute[280321]: 2026-02-23 09:58:31.602 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:31 localhost openstack_network_exporter[243519]: ERROR 09:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:58:31 localhost openstack_network_exporter[243519]: Feb 23 04:58:31 localhost openstack_network_exporter[243519]: ERROR 09:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:58:31 localhost openstack_network_exporter[243519]: Feb 23 04:58:32 localhost sshd[317184]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:58:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:58:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:58:32 localhost podman[317201]: 2026-02-23 09:58:32.534891625 +0000 UTC m=+0.098262361 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:58:32 localhost podman[317202]: 2026-02-23 09:58:32.581144888 +0000 UTC m=+0.138893113 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, version=9.7, io.openshift.tags=minimal 
rhel9, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that 
uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 23 04:58:32 localhost podman[317201]: 2026-02-23 09:58:32.599285532 +0000 UTC m=+0.162656268 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, 
config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 04:58:32 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:58:32 localhost podman[317202]: 2026-02-23 09:58:32.621214101 +0000 UTC m=+0.178962296 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, vendor=Red Hat, Inc., io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container) Feb 23 04:58:32 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 04:58:32 localhost podman[317222]: Feb 23 04:58:32 localhost podman[317222]: 2026-02-23 09:58:32.669927788 +0000 UTC m=+0.186267798 container create 68474fcc22b3142f4f2fdf886d0a93ccdcb02f8fb13b0bb915dd0a2e69ee3aa0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80a960fb-ec23-4087-bdee-274ab1c84980, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:58:32 localhost systemd[1]: Started libpod-conmon-68474fcc22b3142f4f2fdf886d0a93ccdcb02f8fb13b0bb915dd0a2e69ee3aa0.scope. Feb 23 04:58:32 localhost podman[317222]: 2026-02-23 09:58:32.62182198 +0000 UTC m=+0.138162030 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:58:32 localhost systemd[1]: Started libcrun container. 
Feb 23 04:58:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1227918a190c51b73b8c57e62b2d5ba3ba00001b571ffada60453f410146d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:58:32 localhost podman[317222]: 2026-02-23 09:58:32.751631183 +0000 UTC m=+0.267971203 container init 68474fcc22b3142f4f2fdf886d0a93ccdcb02f8fb13b0bb915dd0a2e69ee3aa0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80a960fb-ec23-4087-bdee-274ab1c84980, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:58:32 localhost podman[317222]: 2026-02-23 09:58:32.760338499 +0000 UTC m=+0.276678519 container start 68474fcc22b3142f4f2fdf886d0a93ccdcb02f8fb13b0bb915dd0a2e69ee3aa0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80a960fb-ec23-4087-bdee-274ab1c84980, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 23 04:58:32 localhost dnsmasq[317268]: started, version 2.85 cachesize 150 Feb 23 04:58:32 localhost dnsmasq[317268]: DNS service limited to local subnets Feb 23 04:58:32 localhost dnsmasq[317268]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:58:32 localhost dnsmasq[317268]: warning: no upstream servers 
configured Feb 23 04:58:32 localhost dnsmasq-dhcp[317268]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:58:32 localhost dnsmasq[317268]: read /var/lib/neutron/dhcp/80a960fb-ec23-4087-bdee-274ab1c84980/addn_hosts - 0 addresses Feb 23 04:58:32 localhost dnsmasq-dhcp[317268]: read /var/lib/neutron/dhcp/80a960fb-ec23-4087-bdee-274ab1c84980/host Feb 23 04:58:32 localhost dnsmasq-dhcp[317268]: read /var/lib/neutron/dhcp/80a960fb-ec23-4087-bdee-274ab1c84980/opts Feb 23 04:58:32 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:32.834 263679 INFO neutron.agent.dhcp.agent [None req-5abd97b1-40cd-46e6-aee7-ab979d1d9ca0 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:58:30Z, description=, device_id=cfa27e09-aa47-4679-8e70-2bef8c0fc3b1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5e20b962-e1a4-4fe4-b029-5835f59be57f, ip_allocation=immediate, mac_address=fa:16:3e:6a:35:a3, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:58:27Z, description=, dns_domain=, id=80a960fb-ec23-4087-bdee-274ab1c84980, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1849626336, port_security_enabled=True, project_id=6aadd525d3dd402cb701922115d00291, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17381, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2102, status=ACTIVE, subnets=['4b14d563-6ad3-4272-aed2-9246b85f9ff1'], tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:29Z, vlan_transparent=None, network_id=80a960fb-ec23-4087-bdee-274ab1c84980, port_security_enabled=False, 
project_id=6aadd525d3dd402cb701922115d00291, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2133, status=DOWN, tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:30Z on network 80a960fb-ec23-4087-bdee-274ab1c84980#033[00m Feb 23 04:58:32 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:32.920 263679 INFO neutron.agent.dhcp.agent [None req-48643f07-c56e-4b7e-bbb8-85551de5356c - - - - - -] DHCP configuration for ports {'44b8c89d-d790-4bf6-b40f-731914e4fd1c'} is completed#033[00m Feb 23 04:58:33 localhost dnsmasq[317268]: read /var/lib/neutron/dhcp/80a960fb-ec23-4087-bdee-274ab1c84980/addn_hosts - 1 addresses Feb 23 04:58:33 localhost dnsmasq-dhcp[317268]: read /var/lib/neutron/dhcp/80a960fb-ec23-4087-bdee-274ab1c84980/host Feb 23 04:58:33 localhost dnsmasq-dhcp[317268]: read /var/lib/neutron/dhcp/80a960fb-ec23-4087-bdee-274ab1c84980/opts Feb 23 04:58:33 localhost podman[317287]: 2026-02-23 09:58:33.058564116 +0000 UTC m=+0.056005751 container kill 68474fcc22b3142f4f2fdf886d0a93ccdcb02f8fb13b0bb915dd0a2e69ee3aa0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80a960fb-ec23-4087-bdee-274ab1c84980, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 23 04:58:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v283: 177 pgs: 177 active+clean; 145 MiB data, 801 MiB used, 41 GiB / 42 GiB avail Feb 23 04:58:33 localhost nova_compute[280321]: 2026-02-23 09:58:33.201 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:33 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:33.213 263679 INFO neutron.agent.dhcp.agent [None req-00d9fa43-4c4d-4d1a-8217-8c98196fe109 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:58:30Z, description=, device_id=cfa27e09-aa47-4679-8e70-2bef8c0fc3b1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5e20b962-e1a4-4fe4-b029-5835f59be57f, ip_allocation=immediate, mac_address=fa:16:3e:6a:35:a3, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:58:27Z, description=, dns_domain=, id=80a960fb-ec23-4087-bdee-274ab1c84980, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1849626336, port_security_enabled=True, project_id=6aadd525d3dd402cb701922115d00291, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17381, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2102, status=ACTIVE, subnets=['4b14d563-6ad3-4272-aed2-9246b85f9ff1'], tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:29Z, vlan_transparent=None, network_id=80a960fb-ec23-4087-bdee-274ab1c84980, port_security_enabled=False, project_id=6aadd525d3dd402cb701922115d00291, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2133, status=DOWN, tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:30Z on network 80a960fb-ec23-4087-bdee-274ab1c84980#033[00m Feb 23 04:58:33 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:33.269 263679 INFO 
neutron.agent.dhcp.agent [None req-395e49dc-0bef-4fde-8cba-5fcee8368fbb - - - - - -] DHCP configuration for ports {'5e20b962-e1a4-4fe4-b029-5835f59be57f'} is completed#033[00m Feb 23 04:58:33 localhost nova_compute[280321]: 2026-02-23 09:58:33.320 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:33 localhost dnsmasq[317268]: read /var/lib/neutron/dhcp/80a960fb-ec23-4087-bdee-274ab1c84980/addn_hosts - 1 addresses Feb 23 04:58:33 localhost podman[317326]: 2026-02-23 09:58:33.441404358 +0000 UTC m=+0.056562370 container kill 68474fcc22b3142f4f2fdf886d0a93ccdcb02f8fb13b0bb915dd0a2e69ee3aa0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80a960fb-ec23-4087-bdee-274ab1c84980, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:58:33 localhost dnsmasq-dhcp[317268]: read /var/lib/neutron/dhcp/80a960fb-ec23-4087-bdee-274ab1c84980/host Feb 23 04:58:33 localhost dnsmasq-dhcp[317268]: read /var/lib/neutron/dhcp/80a960fb-ec23-4087-bdee-274ab1c84980/opts Feb 23 04:58:33 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:33.750 263679 INFO neutron.agent.dhcp.agent [None req-fc576b21-9711-4d80-852a-80d8263fd0f1 - - - - - -] DHCP configuration for ports {'5e20b962-e1a4-4fe4-b029-5835f59be57f'} is completed#033[00m Feb 23 04:58:33 localhost nova_compute[280321]: 2026-02-23 09:58:33.780 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 04:58:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:58:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:58:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:58:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:58:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:58:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v284: 177 pgs: 177 active+clean; 145 MiB data, 801 MiB used, 41 GiB / 42 GiB avail Feb 23 04:58:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:58:35 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3937103108' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:58:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:58:35 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3937103108' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:58:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:58:35 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:35.591 263679 INFO neutron.agent.linux.ip_lib [None req-55f03508-06b8-486e-bf74-af7220e954d8 - - - - - -] Device tap7914a983-4f cannot be used as it has no MAC address#033[00m Feb 23 04:58:35 localhost nova_compute[280321]: 2026-02-23 09:58:35.614 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:35 localhost kernel: device tap7914a983-4f entered promiscuous mode Feb 23 04:58:35 localhost ovn_controller[155966]: 2026-02-23T09:58:35Z|00264|binding|INFO|Claiming lport 7914a983-4f90-42ca-8761-93ccdd663c72 for this chassis. Feb 23 04:58:35 localhost ovn_controller[155966]: 2026-02-23T09:58:35Z|00265|binding|INFO|7914a983-4f90-42ca-8761-93ccdd663c72: Claiming unknown Feb 23 04:58:35 localhost nova_compute[280321]: 2026-02-23 09:58:35.623 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:35 localhost NetworkManager[5987]: [1771840715.6246] manager: (tap7914a983-4f): new Generic device (/org/freedesktop/NetworkManager/Devices/50) Feb 23 04:58:35 localhost systemd-udevd[317357]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 04:58:35 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:35.635 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-ed02a1ed-b3d8-45fd-8e6b-eb2674618b77', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed02a1ed-b3d8-45fd-8e6b-eb2674618b77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aadd525d3dd402cb701922115d00291', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fedece12-b962-4214-a4fc-e37055c54b85, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7914a983-4f90-42ca-8761-93ccdd663c72) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:58:35 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:35.637 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 7914a983-4f90-42ca-8761-93ccdd663c72 in datapath ed02a1ed-b3d8-45fd-8e6b-eb2674618b77 bound to our chassis#033[00m Feb 23 04:58:35 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:35.639 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ed02a1ed-b3d8-45fd-8e6b-eb2674618b77 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:58:35 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:35.640 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[8474ef33-f760-463b-bf15-68293649cb36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:58:35 localhost journal[229268]: ethtool ioctl error on tap7914a983-4f: No such device Feb 23 04:58:35 localhost ovn_controller[155966]: 2026-02-23T09:58:35Z|00266|binding|INFO|Setting lport 7914a983-4f90-42ca-8761-93ccdd663c72 ovn-installed in OVS Feb 23 04:58:35 localhost ovn_controller[155966]: 2026-02-23T09:58:35Z|00267|binding|INFO|Setting lport 7914a983-4f90-42ca-8761-93ccdd663c72 up in Southbound Feb 23 04:58:35 localhost journal[229268]: ethtool ioctl error on tap7914a983-4f: No such device Feb 23 04:58:35 localhost nova_compute[280321]: 2026-02-23 09:58:35.661 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:35 localhost journal[229268]: ethtool ioctl error on tap7914a983-4f: No such device Feb 23 04:58:35 localhost journal[229268]: ethtool ioctl error on tap7914a983-4f: No such device Feb 23 04:58:35 localhost journal[229268]: ethtool ioctl error on tap7914a983-4f: No such device Feb 23 04:58:35 localhost journal[229268]: ethtool ioctl error on tap7914a983-4f: No such device Feb 23 04:58:35 localhost journal[229268]: ethtool ioctl error on tap7914a983-4f: No such device Feb 23 04:58:35 localhost journal[229268]: ethtool ioctl error on tap7914a983-4f: No such device Feb 23 04:58:35 localhost nova_compute[280321]: 2026-02-23 09:58:35.707 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:35 localhost nova_compute[280321]: 2026-02-23 09:58:35.744 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:36 localhost podman[317428]: Feb 23 04:58:36 localhost podman[317428]: 2026-02-23 09:58:36.675615719 +0000 UTC m=+0.080739297 container create 63aed8f57253d84d0ba89ad7bd8292616afb3ca9fb78f375a7e27983a39f8406 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed02a1ed-b3d8-45fd-8e6b-eb2674618b77, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216) Feb 23 04:58:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:58:36 localhost systemd[1]: Started libpod-conmon-63aed8f57253d84d0ba89ad7bd8292616afb3ca9fb78f375a7e27983a39f8406.scope. Feb 23 04:58:36 localhost systemd[1]: tmp-crun.u9xPAC.mount: Deactivated successfully. Feb 23 04:58:36 localhost podman[317428]: 2026-02-23 09:58:36.637586148 +0000 UTC m=+0.042709716 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:58:36 localhost systemd[1]: Started libcrun container. 
Feb 23 04:58:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab08652ff6aea153a8b386deaf41ac6e313c42768c87992cdd10eb125a5ba381/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:58:36 localhost podman[317428]: 2026-02-23 09:58:36.761593674 +0000 UTC m=+0.166717252 container init 63aed8f57253d84d0ba89ad7bd8292616afb3ca9fb78f375a7e27983a39f8406 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed02a1ed-b3d8-45fd-8e6b-eb2674618b77, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:58:36 localhost dnsmasq[317457]: started, version 2.85 cachesize 150 Feb 23 04:58:36 localhost dnsmasq[317457]: DNS service limited to local subnets Feb 23 04:58:36 localhost dnsmasq[317457]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:58:36 localhost dnsmasq[317457]: warning: no upstream servers configured Feb 23 04:58:36 localhost dnsmasq-dhcp[317457]: DHCP, static leases only on 10.101.0.0, lease time 1d Feb 23 04:58:36 localhost dnsmasq[317457]: read /var/lib/neutron/dhcp/ed02a1ed-b3d8-45fd-8e6b-eb2674618b77/addn_hosts - 0 addresses Feb 23 04:58:36 localhost dnsmasq-dhcp[317457]: read /var/lib/neutron/dhcp/ed02a1ed-b3d8-45fd-8e6b-eb2674618b77/host Feb 23 04:58:36 localhost dnsmasq-dhcp[317457]: read /var/lib/neutron/dhcp/ed02a1ed-b3d8-45fd-8e6b-eb2674618b77/opts Feb 23 04:58:36 localhost podman[317441]: 2026-02-23 09:58:36.806158795 +0000 UTC m=+0.093113875 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 23 04:58:36 localhost podman[317428]: 2026-02-23 09:58:36.821078941 +0000 UTC m=+0.226202529 container start 63aed8f57253d84d0ba89ad7bd8292616afb3ca9fb78f375a7e27983a39f8406 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed02a1ed-b3d8-45fd-8e6b-eb2674618b77, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, 
maintainer=OpenStack Kubernetes Operator team) Feb 23 04:58:36 localhost podman[317441]: 2026-02-23 09:58:36.873323786 +0000 UTC m=+0.160278876 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:58:36 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:58:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:37.006 263679 INFO neutron.agent.dhcp.agent [None req-00797c98-2dda-4c81-b81f-b3e46a67b489 - - - - - -] DHCP configuration for ports {'9afdc0ae-55f4-4c96-a5c9-0c76292bcab6'} is completed#033[00m Feb 23 04:58:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:37.150 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:58:36Z, description=, device_id=cfa27e09-aa47-4679-8e70-2bef8c0fc3b1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=df9f49b2-0560-44fc-a589-cf64dcdf0223, ip_allocation=immediate, mac_address=fa:16:3e:14:e7:5c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:58:33Z, description=, dns_domain=, id=ed02a1ed-b3d8-45fd-8e6b-eb2674618b77, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-374357904, port_security_enabled=True, project_id=6aadd525d3dd402cb701922115d00291, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50326, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2143, status=ACTIVE, subnets=['0e13544e-1fa2-480c-94fc-f32a7c47da38'], tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:34Z, vlan_transparent=None, network_id=ed02a1ed-b3d8-45fd-8e6b-eb2674618b77, port_security_enabled=False, project_id=6aadd525d3dd402cb701922115d00291, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2180, status=DOWN, tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:36Z on network 
ed02a1ed-b3d8-45fd-8e6b-eb2674618b77#033[00m Feb 23 04:58:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v285: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s Feb 23 04:58:37 localhost dnsmasq[317457]: read /var/lib/neutron/dhcp/ed02a1ed-b3d8-45fd-8e6b-eb2674618b77/addn_hosts - 1 addresses Feb 23 04:58:37 localhost dnsmasq-dhcp[317457]: read /var/lib/neutron/dhcp/ed02a1ed-b3d8-45fd-8e6b-eb2674618b77/host Feb 23 04:58:37 localhost podman[317489]: 2026-02-23 09:58:37.359535273 +0000 UTC m=+0.059855859 container kill 63aed8f57253d84d0ba89ad7bd8292616afb3ca9fb78f375a7e27983a39f8406 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed02a1ed-b3d8-45fd-8e6b-eb2674618b77, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 23 04:58:37 localhost dnsmasq-dhcp[317457]: read /var/lib/neutron/dhcp/ed02a1ed-b3d8-45fd-8e6b-eb2674618b77/opts Feb 23 04:58:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:37.985 263679 INFO neutron.agent.dhcp.agent [None req-8c12c5d8-88af-42f6-a2a8-963e767fed61 - - - - - -] DHCP configuration for ports {'df9f49b2-0560-44fc-a589-cf64dcdf0223'} is completed#033[00m Feb 23 04:58:38 localhost nova_compute[280321]: 2026-02-23 09:58:38.236 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:38 localhost nova_compute[280321]: 2026-02-23 09:58:38.782 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:39 localhost 
neutron_dhcp_agent[263675]: 2026-02-23 09:58:39.180 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:58:36Z, description=, device_id=cfa27e09-aa47-4679-8e70-2bef8c0fc3b1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=df9f49b2-0560-44fc-a589-cf64dcdf0223, ip_allocation=immediate, mac_address=fa:16:3e:14:e7:5c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:58:33Z, description=, dns_domain=, id=ed02a1ed-b3d8-45fd-8e6b-eb2674618b77, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-374357904, port_security_enabled=True, project_id=6aadd525d3dd402cb701922115d00291, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50326, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2143, status=ACTIVE, subnets=['0e13544e-1fa2-480c-94fc-f32a7c47da38'], tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:34Z, vlan_transparent=None, network_id=ed02a1ed-b3d8-45fd-8e6b-eb2674618b77, port_security_enabled=False, project_id=6aadd525d3dd402cb701922115d00291, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2180, status=DOWN, tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:36Z on network ed02a1ed-b3d8-45fd-8e6b-eb2674618b77#033[00m Feb 23 04:58:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v286: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s Feb 23 04:58:39 localhost dnsmasq[317457]: read 
/var/lib/neutron/dhcp/ed02a1ed-b3d8-45fd-8e6b-eb2674618b77/addn_hosts - 1 addresses Feb 23 04:58:39 localhost dnsmasq-dhcp[317457]: read /var/lib/neutron/dhcp/ed02a1ed-b3d8-45fd-8e6b-eb2674618b77/host Feb 23 04:58:39 localhost dnsmasq-dhcp[317457]: read /var/lib/neutron/dhcp/ed02a1ed-b3d8-45fd-8e6b-eb2674618b77/opts Feb 23 04:58:39 localhost podman[317526]: 2026-02-23 09:58:39.392550924 +0000 UTC m=+0.057325412 container kill 63aed8f57253d84d0ba89ad7bd8292616afb3ca9fb78f375a7e27983a39f8406 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed02a1ed-b3d8-45fd-8e6b-eb2674618b77, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 23 04:58:39 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:39.643 263679 INFO neutron.agent.dhcp.agent [None req-ffe34025-7c15-461d-b254-caafa6df2175 - - - - - -] DHCP configuration for ports {'df9f49b2-0560-44fc-a589-cf64dcdf0223'} is completed#033[00m Feb 23 04:58:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:58:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v287: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Feb 23 04:58:42 localhost podman[241086]: time="2026-02-23T09:58:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:58:42 localhost podman[241086]: @ - - [23/Feb/2026:09:58:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157718 "" 
"Go-http-client/1.1" Feb 23 04:58:42 localhost podman[241086]: @ - - [23/Feb/2026:09:58:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18756 "" "Go-http-client/1.1" Feb 23 04:58:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v288: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Feb 23 04:58:43 localhost nova_compute[280321]: 2026-02-23 09:58:43.238 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:43 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:43.388 263679 INFO neutron.agent.linux.ip_lib [None req-27b79ba6-54e5-4a1d-9a5b-7f0067a33a60 - - - - - -] Device tapc1b269e3-03 cannot be used as it has no MAC address#033[00m Feb 23 04:58:43 localhost nova_compute[280321]: 2026-02-23 09:58:43.410 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:43 localhost kernel: device tapc1b269e3-03 entered promiscuous mode Feb 23 04:58:43 localhost NetworkManager[5987]: [1771840723.4188] manager: (tapc1b269e3-03): new Generic device (/org/freedesktop/NetworkManager/Devices/51) Feb 23 04:58:43 localhost ovn_controller[155966]: 2026-02-23T09:58:43Z|00268|binding|INFO|Claiming lport c1b269e3-03be-4876-9aa2-da9bb7904c86 for this chassis. Feb 23 04:58:43 localhost nova_compute[280321]: 2026-02-23 09:58:43.419 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:43 localhost ovn_controller[155966]: 2026-02-23T09:58:43Z|00269|binding|INFO|c1b269e3-03be-4876-9aa2-da9bb7904c86: Claiming unknown Feb 23 04:58:43 localhost systemd-udevd[317556]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 04:58:43 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:43.433 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.102.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-eee30041-9c40-4b49-9eb7-b81e205f1280', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eee30041-9c40-4b49-9eb7-b81e205f1280', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aadd525d3dd402cb701922115d00291', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e647e34f-c73b-449c-b79d-6ed8b86a97ab, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c1b269e3-03be-4876-9aa2-da9bb7904c86) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:58:43 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:43.435 161842 INFO neutron.agent.ovn.metadata.agent [-] Port c1b269e3-03be-4876-9aa2-da9bb7904c86 in datapath eee30041-9c40-4b49-9eb7-b81e205f1280 bound to our chassis#033[00m Feb 23 04:58:43 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:43.439 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Port 7c6d35c8-d21f-4757-a16d-5af461294b90 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 04:58:43 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:43.439 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eee30041-9c40-4b49-9eb7-b81e205f1280, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:58:43 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:43.441 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[a282efba-6fb1-4a75-a8d0-44cc6555e07d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:58:43 localhost ovn_controller[155966]: 2026-02-23T09:58:43Z|00270|binding|INFO|Setting lport c1b269e3-03be-4876-9aa2-da9bb7904c86 ovn-installed in OVS Feb 23 04:58:43 localhost journal[229268]: ethtool ioctl error on tapc1b269e3-03: No such device Feb 23 04:58:43 localhost nova_compute[280321]: 2026-02-23 09:58:43.451 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:43 localhost ovn_controller[155966]: 2026-02-23T09:58:43Z|00271|binding|INFO|Setting lport c1b269e3-03be-4876-9aa2-da9bb7904c86 up in Southbound Feb 23 04:58:43 localhost nova_compute[280321]: 2026-02-23 09:58:43.455 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:43 localhost journal[229268]: ethtool ioctl error on tapc1b269e3-03: No such device Feb 23 04:58:43 localhost journal[229268]: ethtool ioctl error on tapc1b269e3-03: No such device Feb 23 04:58:43 localhost journal[229268]: ethtool ioctl error on tapc1b269e3-03: No such device Feb 23 04:58:43 localhost journal[229268]: ethtool ioctl error on tapc1b269e3-03: No such device Feb 23 04:58:43 localhost journal[229268]: ethtool ioctl error on tapc1b269e3-03: No 
such device Feb 23 04:58:43 localhost journal[229268]: ethtool ioctl error on tapc1b269e3-03: No such device Feb 23 04:58:43 localhost journal[229268]: ethtool ioctl error on tapc1b269e3-03: No such device Feb 23 04:58:43 localhost nova_compute[280321]: 2026-02-23 09:58:43.490 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:43 localhost nova_compute[280321]: 2026-02-23 09:58:43.521 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:43 localhost nova_compute[280321]: 2026-02-23 09:58:43.819 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:44 localhost podman[317627]: Feb 23 04:58:44 localhost podman[317627]: 2026-02-23 09:58:44.450232258 +0000 UTC m=+0.094160056 container create 1722bc2b9cdc3a13c058d5eafbed670dd9a00abd8fd10c1f8a35336d284d214a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eee30041-9c40-4b49-9eb7-b81e205f1280, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:58:44 localhost systemd[1]: Started libpod-conmon-1722bc2b9cdc3a13c058d5eafbed670dd9a00abd8fd10c1f8a35336d284d214a.scope. Feb 23 04:58:44 localhost podman[317627]: 2026-02-23 09:58:44.404611185 +0000 UTC m=+0.048539013 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:58:44 localhost systemd[1]: Started libcrun container. 
Feb 23 04:58:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a1fe9abb7349173bb3ee241f9af5d445bb8b757cca342ebe7fbbef16d59b7f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:58:44 localhost podman[317627]: 2026-02-23 09:58:44.526692593 +0000 UTC m=+0.170620391 container init 1722bc2b9cdc3a13c058d5eafbed670dd9a00abd8fd10c1f8a35336d284d214a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eee30041-9c40-4b49-9eb7-b81e205f1280, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:58:44 localhost podman[317627]: 2026-02-23 09:58:44.535710128 +0000 UTC m=+0.179637926 container start 1722bc2b9cdc3a13c058d5eafbed670dd9a00abd8fd10c1f8a35336d284d214a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eee30041-9c40-4b49-9eb7-b81e205f1280, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 23 04:58:44 localhost dnsmasq[317646]: started, version 2.85 cachesize 150 Feb 23 04:58:44 localhost dnsmasq[317646]: DNS service limited to local subnets Feb 23 04:58:44 localhost dnsmasq[317646]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:58:44 localhost dnsmasq[317646]: warning: no upstream servers 
configured Feb 23 04:58:44 localhost dnsmasq-dhcp[317646]: DHCP, static leases only on 10.102.0.0, lease time 1d Feb 23 04:58:44 localhost dnsmasq[317646]: read /var/lib/neutron/dhcp/eee30041-9c40-4b49-9eb7-b81e205f1280/addn_hosts - 0 addresses Feb 23 04:58:44 localhost dnsmasq-dhcp[317646]: read /var/lib/neutron/dhcp/eee30041-9c40-4b49-9eb7-b81e205f1280/host Feb 23 04:58:44 localhost dnsmasq-dhcp[317646]: read /var/lib/neutron/dhcp/eee30041-9c40-4b49-9eb7-b81e205f1280/opts Feb 23 04:58:44 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:44.605 263679 INFO neutron.agent.dhcp.agent [None req-b4f70fbb-3bf7-4fc1-b522-33a14d493510 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:58:42Z, description=, device_id=cfa27e09-aa47-4679-8e70-2bef8c0fc3b1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9e8349de-93b1-46a1-8047-4646bb06589e, ip_allocation=immediate, mac_address=fa:16:3e:ca:df:5d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:58:39Z, description=, dns_domain=, id=eee30041-9c40-4b49-9eb7-b81e205f1280, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-2105358584, port_security_enabled=True, project_id=6aadd525d3dd402cb701922115d00291, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=11221, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2193, status=ACTIVE, subnets=['85f91c02-bafd-42c6-be4e-f48b39ab0be7'], tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:41Z, vlan_transparent=None, network_id=eee30041-9c40-4b49-9eb7-b81e205f1280, port_security_enabled=False, 
project_id=6aadd525d3dd402cb701922115d00291, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2222, status=DOWN, tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:42Z on network eee30041-9c40-4b49-9eb7-b81e205f1280#033[00m Feb 23 04:58:44 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:44.724 263679 INFO neutron.agent.dhcp.agent [None req-fa1768e1-8cfa-4be9-9d99-9d12c09081be - - - - - -] DHCP configuration for ports {'36653785-7cf4-4673-b919-ad83903ea3ef'} is completed#033[00m Feb 23 04:58:44 localhost dnsmasq[317646]: read /var/lib/neutron/dhcp/eee30041-9c40-4b49-9eb7-b81e205f1280/addn_hosts - 1 addresses Feb 23 04:58:44 localhost dnsmasq-dhcp[317646]: read /var/lib/neutron/dhcp/eee30041-9c40-4b49-9eb7-b81e205f1280/host Feb 23 04:58:44 localhost dnsmasq-dhcp[317646]: read /var/lib/neutron/dhcp/eee30041-9c40-4b49-9eb7-b81e205f1280/opts Feb 23 04:58:44 localhost podman[317663]: 2026-02-23 09:58:44.847335654 +0000 UTC m=+0.063408828 container kill 1722bc2b9cdc3a13c058d5eafbed670dd9a00abd8fd10c1f8a35336d284d214a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eee30041-9c40-4b49-9eb7-b81e205f1280, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0) Feb 23 04:58:45 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:45.067 263679 INFO neutron.agent.dhcp.agent [None req-2f48f514-4c2a-41e8-9fa9-a0b5942d218a - - - - - -] DHCP configuration for ports {'9e8349de-93b1-46a1-8047-4646bb06589e'} is completed#033[00m Feb 23 04:58:45 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:45.079 263679 INFO 
neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:58:42Z, description=, device_id=cfa27e09-aa47-4679-8e70-2bef8c0fc3b1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9e8349de-93b1-46a1-8047-4646bb06589e, ip_allocation=immediate, mac_address=fa:16:3e:ca:df:5d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:58:39Z, description=, dns_domain=, id=eee30041-9c40-4b49-9eb7-b81e205f1280, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-2105358584, port_security_enabled=True, project_id=6aadd525d3dd402cb701922115d00291, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=11221, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2193, status=ACTIVE, subnets=['85f91c02-bafd-42c6-be4e-f48b39ab0be7'], tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:41Z, vlan_transparent=None, network_id=eee30041-9c40-4b49-9eb7-b81e205f1280, port_security_enabled=False, project_id=6aadd525d3dd402cb701922115d00291, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2222, status=DOWN, tags=[], tenant_id=6aadd525d3dd402cb701922115d00291, updated_at=2026-02-23T09:58:42Z on network eee30041-9c40-4b49-9eb7-b81e205f1280#033[00m Feb 23 04:58:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v289: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Feb 23 04:58:45 localhost dnsmasq[317646]: read 
/var/lib/neutron/dhcp/eee30041-9c40-4b49-9eb7-b81e205f1280/addn_hosts - 1 addresses Feb 23 04:58:45 localhost dnsmasq-dhcp[317646]: read /var/lib/neutron/dhcp/eee30041-9c40-4b49-9eb7-b81e205f1280/host Feb 23 04:58:45 localhost dnsmasq-dhcp[317646]: read /var/lib/neutron/dhcp/eee30041-9c40-4b49-9eb7-b81e205f1280/opts Feb 23 04:58:45 localhost podman[317700]: 2026-02-23 09:58:45.304729661 +0000 UTC m=+0.061599022 container kill 1722bc2b9cdc3a13c058d5eafbed670dd9a00abd8fd10c1f8a35336d284d214a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eee30041-9c40-4b49-9eb7-b81e205f1280, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:58:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:58:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 04:58:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:58:45 localhost podman[317717]: 2026-02-23 09:58:45.520638004 +0000 UTC m=+0.091122433 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, 
org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:58:45 localhost podman[317717]: 2026-02-23 09:58:45.556940852 +0000 UTC m=+0.127425281 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 23 04:58:45 localhost podman[317719]: 2026-02-23 09:58:45.567761623 +0000 UTC m=+0.135064435 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 04:58:45 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:58:45 localhost podman[317719]: 2026-02-23 09:58:45.605346231 +0000 UTC m=+0.172649093 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 04:58:45 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:58:45.620 263679 INFO neutron.agent.dhcp.agent [None req-8635cdf7-185c-4ac2-a691-b043bc0cc731 - - - - - -] DHCP configuration for ports {'9e8349de-93b1-46a1-8047-4646bb06589e'} is completed#033[00m Feb 23 04:58:45 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:58:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v290: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Feb 23 04:58:48 localhost nova_compute[280321]: 2026-02-23 09:58:48.277 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:48.314 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:58:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:48.314 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:58:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:48.314 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:58:48 localhost nova_compute[280321]: 2026-02-23 09:58:48.822 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v291: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 9.5 KiB/s rd, 255 B/s wr, 12 op/s Feb 23 04:58:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:58:50 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1412528347' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:58:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:58:50 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1412528347' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:58:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:58:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 04:58:51 localhost podman[317756]: 2026-02-23 09:58:51.009498475 +0000 UTC m=+0.082232172 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 04:58:51 localhost podman[317756]: 2026-02-23 09:58:51.019220722 +0000 UTC m=+0.091954409 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:58:51 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:58:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v292: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 1.4 MiB/s rd, 767 B/s wr, 27 op/s Feb 23 04:58:51 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:51.391 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:79:d0 10.100.0.2 2001:db8::f816:3eff:fe4b:79d0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe4b:79d0/64', 'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8f6a957-d664-4eea-abeb-3145dfe24d16, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f9f417b1-97ff-4889-a004-ebc80afbc25a) old=Port_Binding(mac=['fa:16:3e:4b:79:d0 2001:db8::f816:3eff:fe4b:79d0'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe4b:79d0/64', 
'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:58:51 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:51.393 161842 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f9f417b1-97ff-4889-a004-ebc80afbc25a in datapath c73ae202-1b92-44f9-b55c-a6eba0a348b0 updated#033[00m Feb 23 04:58:51 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:51.397 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c73ae202-1b92-44f9-b55c-a6eba0a348b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:58:51 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:51.398 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[7620d22b-97b7-4b18-a8c2-b58fe1105922]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:58:52 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:52.187 2 INFO neutron.agent.securitygroups_rpc [None req-a57c617c-c4ee-4d55-b7f0-53311658d2fd 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:52 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:52.464 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, 
old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:58:52 localhost nova_compute[280321]: 2026-02-23 09:58:52.463 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:52 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:52.467 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 04:58:53 localhost sshd[317781]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:58:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v293: 177 pgs: 177 active+clean; 145 MiB data, 805 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 767 B/s wr, 28 op/s Feb 23 04:58:53 localhost nova_compute[280321]: 2026-02-23 09:58:53.277 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:53 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:53.932 2 INFO neutron.agent.securitygroups_rpc [None req-bc2e33ea-0cb8-4312-bf00-811550151f9a 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:53 localhost nova_compute[280321]: 2026-02-23 09:58:53.968 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 
04:58:54 localhost nova_compute[280321]: 2026-02-23 09:58:54.012 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:54 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:54.469 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:58:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v294: 177 pgs: 177 active+clean; 145 MiB data, 805 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 767 B/s wr, 28 op/s Feb 23 04:58:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 04:58:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 09:58:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 
Feb 23 04:58:56 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:58:56 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3383097945' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:58:56 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:58:56 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3383097945' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:58:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:57.183 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:79:d0 2001:db8::f816:3eff:fe4b:79d0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe4b:79d0/64', 'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8f6a957-d664-4eea-abeb-3145dfe24d16, chassis=[], tunnel_key=1, gateway_chassis=[], 
requested_chassis=[], logical_port=f9f417b1-97ff-4889-a004-ebc80afbc25a) old=Port_Binding(mac=['fa:16:3e:4b:79:d0 10.100.0.2 2001:db8::f816:3eff:fe4b:79d0'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe4b:79d0/64', 'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:58:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:57.185 161842 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f9f417b1-97ff-4889-a004-ebc80afbc25a in datapath c73ae202-1b92-44f9-b55c-a6eba0a348b0 updated#033[00m Feb 23 04:58:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:57.190 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c73ae202-1b92-44f9-b55c-a6eba0a348b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:58:57 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:57.191 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4623ad-d7d0-48ec-aafd-05eb5f285b08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:58:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v295: 177 pgs: 177 active+clean; 201 MiB data, 834 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 1.9 MiB/s wr, 99 op/s Feb 23 04:58:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:58.207 161842 DEBUG ovsdbapp.backend.ovs_idl.event 
[-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:79:d0 10.100.0.2 2001:db8::f816:3eff:fe4b:79d0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe4b:79d0/64', 'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8f6a957-d664-4eea-abeb-3145dfe24d16, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f9f417b1-97ff-4889-a004-ebc80afbc25a) old=Port_Binding(mac=['fa:16:3e:4b:79:d0 2001:db8::f816:3eff:fe4b:79d0'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe4b:79d0/64', 'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:58:58 localhost 
ovn_metadata_agent[161837]: 2026-02-23 09:58:58.209 161842 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f9f417b1-97ff-4889-a004-ebc80afbc25a in datapath c73ae202-1b92-44f9-b55c-a6eba0a348b0 updated#033[00m Feb 23 04:58:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:58.212 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c73ae202-1b92-44f9-b55c-a6eba0a348b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:58:58 localhost ovn_metadata_agent[161837]: 2026-02-23 09:58:58.213 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[70c5dcf3-87ce-4a0f-adc1-c0645f786561]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:58:58 localhost nova_compute[280321]: 2026-02-23 09:58:58.317 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:58 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:58:58 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2980215869' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:58:58 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:58:58 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2980215869' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:58:58 localhost neutron_sriov_agent[256355]: 2026-02-23 09:58:58.834 2 INFO neutron.agent.securitygroups_rpc [None req-d199110d-6878-4918-921e-08b8a8f76fc7 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:58:58 localhost nova_compute[280321]: 2026-02-23 09:58:58.971 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:58:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v296: 177 pgs: 177 active+clean; 201 MiB data, 834 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 1.9 MiB/s wr, 99 op/s Feb 23 04:59:00 localhost nova_compute[280321]: 2026-02-23 09:59:00.048 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:00 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:00.148 2 INFO neutron.agent.securitygroups_rpc [None req-378c2f4f-1435-4dd5-9061-db29c2eb645b 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:59:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 04:59:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 8457 writes, 33K keys, 8457 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative 
WAL: 8457 writes, 2122 syncs, 3.99 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3393 writes, 11K keys, 3393 commit groups, 1.0 writes per commit group, ingest: 9.98 MB, 0.02 MB/s#012Interval WAL: 3393 writes, 1474 syncs, 2.30 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 04:59:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v297: 177 pgs: 177 active+clean; 218 MiB data, 863 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 159 op/s Feb 23 04:59:01 localhost dnsmasq[317646]: read /var/lib/neutron/dhcp/eee30041-9c40-4b49-9eb7-b81e205f1280/addn_hosts - 0 addresses Feb 23 04:59:01 localhost dnsmasq-dhcp[317646]: read /var/lib/neutron/dhcp/eee30041-9c40-4b49-9eb7-b81e205f1280/host Feb 23 04:59:01 localhost dnsmasq-dhcp[317646]: read /var/lib/neutron/dhcp/eee30041-9c40-4b49-9eb7-b81e205f1280/opts Feb 23 04:59:01 localhost podman[317800]: 2026-02-23 09:59:01.21976855 +0000 UTC m=+0.062106968 container kill 1722bc2b9cdc3a13c058d5eafbed670dd9a00abd8fd10c1f8a35336d284d214a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eee30041-9c40-4b49-9eb7-b81e205f1280, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0) Feb 23 04:59:01 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e143 e143: 6 total, 6 up, 6 in Feb 23 04:59:01 localhost ovn_controller[155966]: 2026-02-23T09:59:01Z|00272|binding|INFO|Releasing lport c1b269e3-03be-4876-9aa2-da9bb7904c86 from this chassis (sb_readonly=0) Feb 23 04:59:01 localhost kernel: device tapc1b269e3-03 left promiscuous mode Feb 23 04:59:01 
localhost ovn_controller[155966]: 2026-02-23T09:59:01Z|00273|binding|INFO|Setting lport c1b269e3-03be-4876-9aa2-da9bb7904c86 down in Southbound Feb 23 04:59:01 localhost nova_compute[280321]: 2026-02-23 09:59:01.431 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:01 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:01.441 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.102.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-eee30041-9c40-4b49-9eb7-b81e205f1280', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eee30041-9c40-4b49-9eb7-b81e205f1280', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aadd525d3dd402cb701922115d00291', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e647e34f-c73b-449c-b79d-6ed8b86a97ab, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c1b269e3-03be-4876-9aa2-da9bb7904c86) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:01 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:01.443 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 
c1b269e3-03be-4876-9aa2-da9bb7904c86 in datapath eee30041-9c40-4b49-9eb7-b81e205f1280 unbound from our chassis#033[00m Feb 23 04:59:01 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:01.448 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eee30041-9c40-4b49-9eb7-b81e205f1280, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:59:01 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:01.449 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[f0fce3db-417e-4764-882f-54c24d6f19c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:59:01 localhost nova_compute[280321]: 2026-02-23 09:59:01.453 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:01 localhost openstack_network_exporter[243519]: ERROR 09:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:59:01 localhost openstack_network_exporter[243519]: Feb 23 04:59:01 localhost openstack_network_exporter[243519]: ERROR 09:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:59:01 localhost openstack_network_exporter[243519]: Feb 23 04:59:01 localhost dnsmasq[317646]: exiting on receipt of SIGTERM Feb 23 04:59:01 localhost podman[317841]: 2026-02-23 09:59:01.992571208 +0000 UTC m=+0.063240251 container kill 1722bc2b9cdc3a13c058d5eafbed670dd9a00abd8fd10c1f8a35336d284d214a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eee30041-9c40-4b49-9eb7-b81e205f1280, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS) Feb 23 04:59:01 localhost systemd[1]: libpod-1722bc2b9cdc3a13c058d5eafbed670dd9a00abd8fd10c1f8a35336d284d214a.scope: Deactivated successfully. Feb 23 04:59:02 localhost podman[317852]: 2026-02-23 09:59:02.059807921 +0000 UTC m=+0.053400111 container died 1722bc2b9cdc3a13c058d5eafbed670dd9a00abd8fd10c1f8a35336d284d214a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eee30041-9c40-4b49-9eb7-b81e205f1280, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:59:02 localhost systemd[1]: tmp-crun.1p0dK5.mount: Deactivated successfully. Feb 23 04:59:02 localhost podman[317852]: 2026-02-23 09:59:02.105117956 +0000 UTC m=+0.098710086 container cleanup 1722bc2b9cdc3a13c058d5eafbed670dd9a00abd8fd10c1f8a35336d284d214a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eee30041-9c40-4b49-9eb7-b81e205f1280, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:59:02 localhost systemd[1]: libpod-conmon-1722bc2b9cdc3a13c058d5eafbed670dd9a00abd8fd10c1f8a35336d284d214a.scope: Deactivated successfully. 
Feb 23 04:59:02 localhost podman[317854]: 2026-02-23 09:59:02.21236095 +0000 UTC m=+0.198091680 container remove 1722bc2b9cdc3a13c058d5eafbed670dd9a00abd8fd10c1f8a35336d284d214a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eee30041-9c40-4b49-9eb7-b81e205f1280, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS) Feb 23 04:59:02 localhost systemd[1]: var-lib-containers-storage-overlay-d7a1fe9abb7349173bb3ee241f9af5d445bb8b757cca342ebe7fbbef16d59b7f-merged.mount: Deactivated successfully. Feb 23 04:59:02 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1722bc2b9cdc3a13c058d5eafbed670dd9a00abd8fd10c1f8a35336d284d214a-userdata-shm.mount: Deactivated successfully. Feb 23 04:59:02 localhost systemd[1]: run-netns-qdhcp\x2deee30041\x2d9c40\x2d4b49\x2d9eb7\x2db81e205f1280.mount: Deactivated successfully. 
Feb 23 04:59:02 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:02.242 263679 INFO neutron.agent.dhcp.agent [None req-28b8618f-4f0f-4c78-8af3-5ea61116d289 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:59:02 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:02.312 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:59:02 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e144 e144: 6 total, 6 up, 6 in Feb 23 04:59:02 localhost nova_compute[280321]: 2026-02-23 09:59:02.611 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:02 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:02.899 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:79:d0 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8f6a957-d664-4eea-abeb-3145dfe24d16, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], 
logical_port=f9f417b1-97ff-4889-a004-ebc80afbc25a) old=Port_Binding(mac=['fa:16:3e:4b:79:d0 10.100.0.2 2001:db8::f816:3eff:fe4b:79d0'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe4b:79d0/64', 'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:02 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:02.901 161842 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f9f417b1-97ff-4889-a004-ebc80afbc25a in datapath c73ae202-1b92-44f9-b55c-a6eba0a348b0 updated#033[00m Feb 23 04:59:02 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:02.905 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c73ae202-1b92-44f9-b55c-a6eba0a348b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:59:02 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:02.906 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[a53cc461-0df1-474b-b194-0127eaf6e13e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:59:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:59:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 04:59:03 localhost systemd[1]: tmp-crun.3FajD0.mount: Deactivated successfully. Feb 23 04:59:03 localhost podman[317885]: 2026-02-23 09:59:03.023995465 +0000 UTC m=+0.093408774 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.7, distribution-scope=public, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Feb 23 04:59:03 localhost podman[317884]: 2026-02-23 09:59:03.069876206 +0000 UTC m=+0.143247996 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 04:59:03 localhost podman[317885]: 2026-02-23 09:59:03.091323671 +0000 UTC m=+0.160736970 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, name=ubi9/ubi-minimal, description=The Universal Base Image 
Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 23 04:59:03 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:59:03 localhost podman[317884]: 2026-02-23 09:59:03.107164024 +0000 UTC m=+0.180535794 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 
'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 04:59:03 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 04:59:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v300: 177 pgs: 177 active+clean; 192 MiB data, 873 MiB used, 41 GiB / 42 GiB avail; 147 KiB/s rd, 5.3 MiB/s wr, 216 op/s Feb 23 04:59:03 localhost nova_compute[280321]: 2026-02-23 09:59:03.318 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:03 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e145 e145: 6 total, 6 up, 6 in Feb 23 04:59:03 localhost dnsmasq[317457]: read /var/lib/neutron/dhcp/ed02a1ed-b3d8-45fd-8e6b-eb2674618b77/addn_hosts - 0 addresses Feb 23 04:59:03 localhost podman[317943]: 2026-02-23 09:59:03.880301903 +0000 UTC m=+0.073212507 container kill 63aed8f57253d84d0ba89ad7bd8292616afb3ca9fb78f375a7e27983a39f8406 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed02a1ed-b3d8-45fd-8e6b-eb2674618b77, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2) Feb 23 04:59:03 localhost dnsmasq-dhcp[317457]: read /var/lib/neutron/dhcp/ed02a1ed-b3d8-45fd-8e6b-eb2674618b77/host Feb 23 04:59:03 localhost dnsmasq-dhcp[317457]: read 
/var/lib/neutron/dhcp/ed02a1ed-b3d8-45fd-8e6b-eb2674618b77/opts Feb 23 04:59:04 localhost nova_compute[280321]: 2026-02-23 09:59:04.029 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:04 localhost ovn_controller[155966]: 2026-02-23T09:59:04Z|00274|binding|INFO|Releasing lport 7914a983-4f90-42ca-8761-93ccdd663c72 from this chassis (sb_readonly=0) Feb 23 04:59:04 localhost kernel: device tap7914a983-4f left promiscuous mode Feb 23 04:59:04 localhost ovn_controller[155966]: 2026-02-23T09:59:04Z|00275|binding|INFO|Setting lport 7914a983-4f90-42ca-8761-93ccdd663c72 down in Southbound Feb 23 04:59:04 localhost nova_compute[280321]: 2026-02-23 09:59:04.078 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:04 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:04.093 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-ed02a1ed-b3d8-45fd-8e6b-eb2674618b77', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed02a1ed-b3d8-45fd-8e6b-eb2674618b77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aadd525d3dd402cb701922115d00291', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 
'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fedece12-b962-4214-a4fc-e37055c54b85, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7914a983-4f90-42ca-8761-93ccdd663c72) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:04 localhost nova_compute[280321]: 2026-02-23 09:59:04.094 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:04 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:04.095 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 7914a983-4f90-42ca-8761-93ccdd663c72 in datapath ed02a1ed-b3d8-45fd-8e6b-eb2674618b77 unbound from our chassis#033[00m Feb 23 04:59:04 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:04.099 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed02a1ed-b3d8-45fd-8e6b-eb2674618b77, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:59:04 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:04.100 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[d6acadd1-0a41-4df5-9d57-0631b2f10a01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:59:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e146 e146: 6 total, 6 up, 6 in Feb 23 04:59:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 04:59:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s#012Cumulative WAL: 12K writes, 3638 syncs, 3.37 
writes per sync, written: 0.04 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6331 writes, 21K keys, 6331 commit groups, 1.0 writes per commit group, ingest: 19.81 MB, 0.03 MB/s#012Interval WAL: 6331 writes, 2705 syncs, 2.34 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 23 04:59:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_09:59:05 Feb 23 04:59:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 04:59:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap Feb 23 04:59:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['volumes', 'manila_data', 'manila_metadata', '.mgr', 'images', 'backups', 'vms'] Feb 23 04:59:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes Feb 23 04:59:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:59:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:59:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:59:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:59:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 04:59:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:59:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v303: 177 pgs: 177 active+clean; 192 MiB data, 873 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 511 B/s wr, 39 op/s Feb 23 04:59:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 04:59:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:59:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 04:59:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 04:59:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:59:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 04:59:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:59:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 23 04:59:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:59:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32) Feb 23 04:59:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:59:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.001484200516471245 of space, bias 1.0, pg target 0.2963453697887586 quantized to 32 (current 32) Feb 23 04:59:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:59:05 localhost 
ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 23 04:59:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:59:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 3.6168981481481486e-05 quantized to 32 (current 32) Feb 23 04:59:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:59:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 04:59:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 04:59:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Feb 23 04:59:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 04:59:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:59:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:59:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 04:59:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 04:59:05 localhost dnsmasq[317457]: exiting on receipt of SIGTERM Feb 23 04:59:05 localhost podman[317982]: 2026-02-23 09:59:05.435691017 +0000 UTC m=+0.059360923 container kill 63aed8f57253d84d0ba89ad7bd8292616afb3ca9fb78f375a7e27983a39f8406 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-ed02a1ed-b3d8-45fd-8e6b-eb2674618b77, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:59:05 localhost systemd[1]: libpod-63aed8f57253d84d0ba89ad7bd8292616afb3ca9fb78f375a7e27983a39f8406.scope: Deactivated successfully. Feb 23 04:59:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:59:05 localhost podman[317995]: 2026-02-23 09:59:05.508405118 +0000 UTC m=+0.057060643 container died 63aed8f57253d84d0ba89ad7bd8292616afb3ca9fb78f375a7e27983a39f8406 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed02a1ed-b3d8-45fd-8e6b-eb2674618b77, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 23 04:59:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:05.511 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:79:d0 10.100.0.2 2001:db8::f816:3eff:fe4b:79d0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe4b:79d0/64', 
'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8f6a957-d664-4eea-abeb-3145dfe24d16, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f9f417b1-97ff-4889-a004-ebc80afbc25a) old=Port_Binding(mac=['fa:16:3e:4b:79:d0 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:05.513 161842 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f9f417b1-97ff-4889-a004-ebc80afbc25a in datapath c73ae202-1b92-44f9-b55c-a6eba0a348b0 updated#033[00m Feb 23 04:59:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:05.517 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c73ae202-1b92-44f9-b55c-a6eba0a348b0, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:59:05 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:05.518 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[35670463-e2cf-4688-a564-86617e6dcdc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:59:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63aed8f57253d84d0ba89ad7bd8292616afb3ca9fb78f375a7e27983a39f8406-userdata-shm.mount: Deactivated successfully. Feb 23 04:59:05 localhost podman[317995]: 2026-02-23 09:59:05.535498146 +0000 UTC m=+0.084153601 container cleanup 63aed8f57253d84d0ba89ad7bd8292616afb3ca9fb78f375a7e27983a39f8406 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed02a1ed-b3d8-45fd-8e6b-eb2674618b77, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 23 04:59:05 localhost systemd[1]: libpod-conmon-63aed8f57253d84d0ba89ad7bd8292616afb3ca9fb78f375a7e27983a39f8406.scope: Deactivated successfully. 
Feb 23 04:59:05 localhost podman[317996]: 2026-02-23 09:59:05.583301756 +0000 UTC m=+0.126050581 container remove 63aed8f57253d84d0ba89ad7bd8292616afb3ca9fb78f375a7e27983a39f8406 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed02a1ed-b3d8-45fd-8e6b-eb2674618b77, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 04:59:05 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:05.684 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:59:06 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:06.043 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:59:06 localhost systemd[1]: var-lib-containers-storage-overlay-ab08652ff6aea153a8b386deaf41ac6e313c42768c87992cdd10eb125a5ba381-merged.mount: Deactivated successfully. Feb 23 04:59:06 localhost systemd[1]: run-netns-qdhcp\x2ded02a1ed\x2db3d8\x2d45fd\x2d8e6b\x2deb2674618b77.mount: Deactivated successfully. Feb 23 04:59:06 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:06.661 2 INFO neutron.agent.securitygroups_rpc [None req-662ed715-f0df-4027-a61e-4a9b758296d7 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:06 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 23 04:59:06 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2170764385' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 23 04:59:06 localhost nova_compute[280321]: 2026-02-23 09:59:06.699 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:59:07 localhost systemd[1]: tmp-crun.be0kVi.mount: Deactivated successfully. Feb 23 04:59:07 localhost podman[318023]: 2026-02-23 09:59:07.024621619 +0000 UTC m=+0.099666715 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.43.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.schema-version=1.0) Feb 23 04:59:07 localhost podman[318023]: 2026-02-23 09:59:07.088910112 +0000 UTC m=+0.163955228 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS) Feb 23 04:59:07 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:59:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v304: 177 pgs: 177 active+clean; 238 MiB data, 936 MiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 3.7 MiB/s wr, 119 op/s Feb 23 04:59:07 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:07.920 2 INFO neutron.agent.securitygroups_rpc [None req-8305faf4-3c52-460c-b047-e282e2502f1e 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:07 localhost systemd[1]: tmp-crun.tyoM0h.mount: Deactivated successfully. Feb 23 04:59:07 localhost dnsmasq[317268]: read /var/lib/neutron/dhcp/80a960fb-ec23-4087-bdee-274ab1c84980/addn_hosts - 0 addresses Feb 23 04:59:07 localhost dnsmasq-dhcp[317268]: read /var/lib/neutron/dhcp/80a960fb-ec23-4087-bdee-274ab1c84980/host Feb 23 04:59:07 localhost dnsmasq-dhcp[317268]: read /var/lib/neutron/dhcp/80a960fb-ec23-4087-bdee-274ab1c84980/opts Feb 23 04:59:07 localhost podman[318066]: 2026-02-23 09:59:07.962347413 +0000 UTC m=+0.056548637 container kill 68474fcc22b3142f4f2fdf886d0a93ccdcb02f8fb13b0bb915dd0a2e69ee3aa0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80a960fb-ec23-4087-bdee-274ab1c84980, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, tcib_managed=true) Feb 23 04:59:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e147 e147: 6 total, 6 up, 6 in Feb 23 04:59:08 localhost nova_compute[280321]: 2026-02-23 09:59:08.320 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:08 localhost 
ovn_controller[155966]: 2026-02-23T09:59:08Z|00276|binding|INFO|Releasing lport 9197be64-3b5f-41f9-81e9-85047230fea1 from this chassis (sb_readonly=0) Feb 23 04:59:08 localhost ovn_controller[155966]: 2026-02-23T09:59:08Z|00277|binding|INFO|Setting lport 9197be64-3b5f-41f9-81e9-85047230fea1 down in Southbound Feb 23 04:59:08 localhost nova_compute[280321]: 2026-02-23 09:59:08.388 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:08 localhost kernel: device tap9197be64-3b left promiscuous mode Feb 23 04:59:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:08.396 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-80a960fb-ec23-4087-bdee-274ab1c84980', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80a960fb-ec23-4087-bdee-274ab1c84980', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aadd525d3dd402cb701922115d00291', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=154b8549-91e1-4bc5-9cc4-2596ffb4a636, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9197be64-3b5f-41f9-81e9-85047230fea1) old=Port_Binding(up=[True], 
chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:08.397 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 9197be64-3b5f-41f9-81e9-85047230fea1 in datapath 80a960fb-ec23-4087-bdee-274ab1c84980 unbound from our chassis#033[00m Feb 23 04:59:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:08.399 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 80a960fb-ec23-4087-bdee-274ab1c84980, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:59:08 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:08.399 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[6681d034-1f72-4ffc-b42d-a76746ba68ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:59:08 localhost nova_compute[280321]: 2026-02-23 09:59:08.405 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:59:08 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/159217426' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:59:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:59:08 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/159217426' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:59:09 localhost nova_compute[280321]: 2026-02-23 09:59:09.031 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:09 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:09.078 2 INFO neutron.agent.securitygroups_rpc [None req-0b1bb1a0-ad29-4db2-a82b-e96c70807da2 9903926d083041b9a33881e7cab5b89f c0dc7447f79a422a9af7dbd04780afa6 - - default default] Security group member updated ['abb3c63b-8b38-4dd7-99e4-d8f07472a5d2']#033[00m Feb 23 04:59:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v306: 177 pgs: 177 active+clean; 238 MiB data, 936 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 88 op/s Feb 23 04:59:09 localhost sshd[318090]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:59:09 localhost dnsmasq[317268]: exiting on receipt of SIGTERM Feb 23 04:59:09 localhost podman[318110]: 2026-02-23 09:59:09.556580376 +0000 UTC m=+0.059980082 container kill 68474fcc22b3142f4f2fdf886d0a93ccdcb02f8fb13b0bb915dd0a2e69ee3aa0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80a960fb-ec23-4087-bdee-274ab1c84980, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 04:59:09 localhost systemd[1]: libpod-68474fcc22b3142f4f2fdf886d0a93ccdcb02f8fb13b0bb915dd0a2e69ee3aa0.scope: Deactivated successfully. 
Feb 23 04:59:09 localhost podman[318124]: 2026-02-23 09:59:09.628002078 +0000 UTC m=+0.059713894 container died 68474fcc22b3142f4f2fdf886d0a93ccdcb02f8fb13b0bb915dd0a2e69ee3aa0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80a960fb-ec23-4087-bdee-274ab1c84980, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:59:09 localhost podman[318124]: 2026-02-23 09:59:09.659129488 +0000 UTC m=+0.090841284 container cleanup 68474fcc22b3142f4f2fdf886d0a93ccdcb02f8fb13b0bb915dd0a2e69ee3aa0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80a960fb-ec23-4087-bdee-274ab1c84980, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 04:59:09 localhost systemd[1]: libpod-conmon-68474fcc22b3142f4f2fdf886d0a93ccdcb02f8fb13b0bb915dd0a2e69ee3aa0.scope: Deactivated successfully. 
Feb 23 04:59:09 localhost podman[318127]: 2026-02-23 09:59:09.707033461 +0000 UTC m=+0.127117463 container remove 68474fcc22b3142f4f2fdf886d0a93ccdcb02f8fb13b0bb915dd0a2e69ee3aa0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80a960fb-ec23-4087-bdee-274ab1c84980, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:59:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e148 e148: 6 total, 6 up, 6 in Feb 23 04:59:10 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:10.244 263679 INFO neutron.agent.dhcp.agent [None req-5a26ed19-426e-4c2e-b2e8-1be5f366c991 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:59:10 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:10.245 263679 INFO neutron.agent.dhcp.agent [None req-5a26ed19-426e-4c2e-b2e8-1be5f366c991 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:59:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:59:10 localhost systemd[1]: var-lib-containers-storage-overlay-8e1227918a190c51b73b8c57e62b2d5ba3ba00001b571ffada60453f410146d5-merged.mount: Deactivated successfully. Feb 23 04:59:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-68474fcc22b3142f4f2fdf886d0a93ccdcb02f8fb13b0bb915dd0a2e69ee3aa0-userdata-shm.mount: Deactivated successfully. Feb 23 04:59:10 localhost systemd[1]: run-netns-qdhcp\x2d80a960fb\x2dec23\x2d4087\x2dbdee\x2d274ab1c84980.mount: Deactivated successfully. 
Feb 23 04:59:10 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:10.994 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:59:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v308: 177 pgs: 177 active+clean; 238 MiB data, 966 MiB used, 41 GiB / 42 GiB avail; 5.1 MiB/s rd, 5.9 MiB/s wr, 147 op/s Feb 23 04:59:11 localhost nova_compute[280321]: 2026-02-23 09:59:11.286 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:11 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:11.305 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:79:d0 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8f6a957-d664-4eea-abeb-3145dfe24d16, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f9f417b1-97ff-4889-a004-ebc80afbc25a) old=Port_Binding(mac=['fa:16:3e:4b:79:d0 10.100.0.2 2001:db8::f816:3eff:fe4b:79d0'], 
external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe4b:79d0/64', 'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:11 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:11.306 161842 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f9f417b1-97ff-4889-a004-ebc80afbc25a in datapath c73ae202-1b92-44f9-b55c-a6eba0a348b0 updated#033[00m Feb 23 04:59:11 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:11.310 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c73ae202-1b92-44f9-b55c-a6eba0a348b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:59:11 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:11.311 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2b6ef0-8151-4720-a5be-ed0b4d7eb3dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:59:11 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:11.517 2 INFO neutron.agent.securitygroups_rpc [None req-3920dd90-663f-4a5e-863e-875f00aeb78d 9903926d083041b9a33881e7cab5b89f c0dc7447f79a422a9af7dbd04780afa6 - - default default] Security group member updated ['abb3c63b-8b38-4dd7-99e4-d8f07472a5d2']#033[00m Feb 23 04:59:12 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e149 e149: 6 total, 6 up, 6 in Feb 23 04:59:12 localhost ceph-mon[296755]: 
mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:59:12 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2607662654' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:59:12 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:59:12 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2607662654' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:59:12 localhost podman[241086]: time="2026-02-23T09:59:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 04:59:12 localhost podman[241086]: @ - - [23/Feb/2026:09:59:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 04:59:12 localhost podman[241086]: @ - - [23/Feb/2026:09:59:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17819 "" "Go-http-client/1.1" Feb 23 04:59:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v310: 177 pgs: 177 active+clean; 238 MiB data, 972 MiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 3.5 MiB/s wr, 187 op/s Feb 23 04:59:13 localhost nova_compute[280321]: 2026-02-23 09:59:13.323 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:13 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:13.783 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:79:d0 10.100.0.2 
2001:db8::f816:3eff:fe4b:79d0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe4b:79d0/64', 'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8f6a957-d664-4eea-abeb-3145dfe24d16, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f9f417b1-97ff-4889-a004-ebc80afbc25a) old=Port_Binding(mac=['fa:16:3e:4b:79:d0 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:13 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:13.785 161842 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f9f417b1-97ff-4889-a004-ebc80afbc25a in datapath c73ae202-1b92-44f9-b55c-a6eba0a348b0 updated#033[00m Feb 23 04:59:13 
localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:13.788 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c73ae202-1b92-44f9-b55c-a6eba0a348b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:59:13 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:13.790 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[44a9090a-de47-4781-937d-324d8813861b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:59:14 localhost nova_compute[280321]: 2026-02-23 09:59:14.033 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:14 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e150 e150: 6 total, 6 up, 6 in Feb 23 04:59:15 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:15.024 2 INFO neutron.agent.securitygroups_rpc [None req-785e188e-0451-4720-a843-201f1ea322a4 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v312: 177 pgs: 177 active+clean; 238 MiB data, 972 MiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 3.5 MiB/s wr, 187 op/s Feb 23 04:59:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:59:15 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:15.778 2 INFO neutron.agent.securitygroups_rpc [None req-abe579f6-947b-47f0-ad4d-2e1c9d13dcd9 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:15 localhost systemd[1]: 
Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:59:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:59:16 localhost podman[318155]: 2026-02-23 09:59:16.018944195 +0000 UTC m=+0.089276907 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2) Feb 23 04:59:16 localhost podman[318156]: 2026-02-23 09:59:16.076852123 +0000 UTC m=+0.141972866 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:59:16 localhost podman[318156]: 2026-02-23 09:59:16.08721015 +0000 UTC m=+0.152330823 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216) Feb 23 04:59:16 localhost podman[318155]: 2026-02-23 09:59:16.098828035 +0000 UTC m=+0.169160767 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:59:16 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:59:16 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:59:16 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:16.560 2 INFO neutron.agent.securitygroups_rpc [None req-b25a55ad-3164-4d9c-85c0-7cab33b9b16d f49fd8b6937445efab40892d03b375d7 0421515e6bb54dea8db3ed218999e195 - - default default] Security group rule updated ['c46df023-9a3e-4c54-a0bb-44b675220af4']#033[00m Feb 23 04:59:16 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:16.816 2 INFO neutron.agent.securitygroups_rpc [None req-e4424286-7161-4a51-a79b-4dabbb149f4e f49fd8b6937445efab40892d03b375d7 0421515e6bb54dea8db3ed218999e195 - - default default] Security group rule updated ['c46df023-9a3e-4c54-a0bb-44b675220af4']#033[00m Feb 23 04:59:17 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e151 e151: 6 total, 6 up, 6 in Feb 23 04:59:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v314: 177 pgs: 177 active+clean; 145 MiB data, 823 MiB used, 41 GiB / 42 GiB avail; 1.6 MiB/s rd, 478 KiB/s wr, 207 op/s Feb 23 04:59:18 localhost nova_compute[280321]: 2026-02-23 09:59:18.365 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:19 localhost nova_compute[280321]: 2026-02-23 09:59:19.035 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:19 localhost ceph-mgr[285904]: log_channel(cluster) 
log [DBG] : pgmap v315: 177 pgs: 177 active+clean; 145 MiB data, 823 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 4.6 KiB/s wr, 80 op/s Feb 23 04:59:19 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:19.270 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:79:d0 2001:db8::f816:3eff:fe4b:79d0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe4b:79d0/64', 'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8f6a957-d664-4eea-abeb-3145dfe24d16, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f9f417b1-97ff-4889-a004-ebc80afbc25a) old=Port_Binding(mac=['fa:16:3e:4b:79:d0 10.100.0.2 2001:db8::f816:3eff:fe4b:79d0'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe4b:79d0/64', 'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '15', 
'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:19 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:19.272 161842 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f9f417b1-97ff-4889-a004-ebc80afbc25a in datapath c73ae202-1b92-44f9-b55c-a6eba0a348b0 updated#033[00m Feb 23 04:59:19 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:19.275 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c73ae202-1b92-44f9-b55c-a6eba0a348b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:59:19 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:19.276 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[1382987a-5e4c-4979-bed3-0cb008d2b4a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:59:19 localhost nova_compute[280321]: 2026-02-23 09:59:19.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:59:19 localhost nova_compute[280321]: 2026-02-23 09:59:19.893 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 23 04:59:19 localhost nova_compute[280321]: 2026-02-23 09:59:19.910 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 23 04:59:20 
localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:20.247 2 INFO neutron.agent.securitygroups_rpc [None req-22dd3c2a-2988-4b13-8ce0-dbc57aa028bb 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:59:21 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:59:21 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2346079799' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:59:21 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:59:21 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2346079799' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:59:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:59:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v316: 177 pgs: 177 active+clean; 145 MiB data, 838 MiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 6.6 KiB/s wr, 110 op/s Feb 23 04:59:21 localhost systemd[1]: tmp-crun.Vm3lhQ.mount: Deactivated successfully. 
Feb 23 04:59:21 localhost podman[318191]: 2026-02-23 09:59:21.230824988 +0000 UTC m=+0.090604118 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 04:59:21 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:21.233 2 INFO neutron.agent.securitygroups_rpc [None req-f686a950-083b-4059-aa56-fe5b5d1b4f8c 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:21 localhost podman[318191]: 2026-02-23 09:59:21.267784616 +0000 UTC m=+0.127563726 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 04:59:21 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:59:21 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:21.341 263679 INFO neutron.agent.linux.ip_lib [None req-1706b605-6172-4dd8-9649-f251383901c5 - - - - - -] Device tap96b084fa-12 cannot be used as it has no MAC address#033[00m Feb 23 04:59:21 localhost nova_compute[280321]: 2026-02-23 09:59:21.363 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:21 localhost kernel: device tap96b084fa-12 entered promiscuous mode Feb 23 04:59:21 localhost ovn_controller[155966]: 2026-02-23T09:59:21Z|00278|binding|INFO|Claiming lport 96b084fa-1207-4367-8e2b-dcd92497359e for this chassis. Feb 23 04:59:21 localhost NetworkManager[5987]: [1771840761.3730] manager: (tap96b084fa-12): new Generic device (/org/freedesktop/NetworkManager/Devices/52) Feb 23 04:59:21 localhost ovn_controller[155966]: 2026-02-23T09:59:21Z|00279|binding|INFO|96b084fa-1207-4367-8e2b-dcd92497359e: Claiming unknown Feb 23 04:59:21 localhost nova_compute[280321]: 2026-02-23 09:59:21.372 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:21 localhost systemd-udevd[318224]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 04:59:21 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:21.387 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-634b86db-4147-42c3-b055-72dbdf8593f3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-634b86db-4147-42c3-b055-72dbdf8593f3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '582130ae966043d38e47148509dbe266', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88d00d90-aa56-4423-ad85-ec995a1f88e5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=96b084fa-1207-4367-8e2b-dcd92497359e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:21 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:21.389 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 96b084fa-1207-4367-8e2b-dcd92497359e in datapath 634b86db-4147-42c3-b055-72dbdf8593f3 bound to our chassis#033[00m Feb 23 04:59:21 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:21.391 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 634b86db-4147-42c3-b055-72dbdf8593f3 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:59:21 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:21.392 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[64a005b1-dbe5-45bc-aea0-eb79e9df9d84]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:59:21 localhost journal[229268]: ethtool ioctl error on tap96b084fa-12: No such device Feb 23 04:59:21 localhost journal[229268]: ethtool ioctl error on tap96b084fa-12: No such device Feb 23 04:59:21 localhost ovn_controller[155966]: 2026-02-23T09:59:21Z|00280|binding|INFO|Setting lport 96b084fa-1207-4367-8e2b-dcd92497359e ovn-installed in OVS Feb 23 04:59:21 localhost ovn_controller[155966]: 2026-02-23T09:59:21Z|00281|binding|INFO|Setting lport 96b084fa-1207-4367-8e2b-dcd92497359e up in Southbound Feb 23 04:59:21 localhost nova_compute[280321]: 2026-02-23 09:59:21.420 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:21 localhost journal[229268]: ethtool ioctl error on tap96b084fa-12: No such device Feb 23 04:59:21 localhost journal[229268]: ethtool ioctl error on tap96b084fa-12: No such device Feb 23 04:59:21 localhost journal[229268]: ethtool ioctl error on tap96b084fa-12: No such device Feb 23 04:59:21 localhost journal[229268]: ethtool ioctl error on tap96b084fa-12: No such device Feb 23 04:59:21 localhost journal[229268]: ethtool ioctl error on tap96b084fa-12: No such device Feb 23 04:59:21 localhost journal[229268]: ethtool ioctl error on tap96b084fa-12: No such device Feb 23 04:59:21 localhost nova_compute[280321]: 2026-02-23 09:59:21.465 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:21 localhost nova_compute[280321]: 2026-02-23 09:59:21.494 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:22 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e152 e152: 6 total, 6 up, 6 in Feb 23 04:59:22 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:59:22 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1542723496' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:59:22 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:59:22 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1542723496' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:59:22 localhost podman[318347]: Feb 23 04:59:22 localhost podman[318347]: 2026-02-23 09:59:22.347306811 +0000 UTC m=+0.087184023 container create c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 23 04:59:22 localhost podman[318347]: 2026-02-23 09:59:22.301799911 +0000 UTC m=+0.041677153 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:59:22 localhost systemd[1]: Started libpod-conmon-c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760.scope. Feb 23 04:59:22 localhost systemd[1]: Started libcrun container. 
Feb 23 04:59:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0652886642e9b8976254714fbdcd59246d8eae9055800e34d39d7ccd9b327508/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:59:22 localhost podman[318347]: 2026-02-23 09:59:22.424843779 +0000 UTC m=+0.164720951 container init c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:59:22 localhost podman[318347]: 2026-02-23 09:59:22.430681967 +0000 UTC m=+0.170559149 container start c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 23 04:59:22 localhost dnsmasq[318382]: started, version 2.85 cachesize 150 Feb 23 04:59:22 localhost dnsmasq[318382]: DNS service limited to local subnets Feb 23 04:59:22 localhost dnsmasq[318382]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:59:22 localhost dnsmasq[318382]: warning: no upstream servers 
configured Feb 23 04:59:22 localhost dnsmasq-dhcp[318382]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:59:22 localhost dnsmasq[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/addn_hosts - 0 addresses Feb 23 04:59:22 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/host Feb 23 04:59:22 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/opts Feb 23 04:59:22 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 04:59:22 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 04:59:22 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 04:59:22 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:59:22 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 04:59:22 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev 88c95c99-a622-4e0c-9995-4860918b8d40 (Updating node-proxy deployment (+3 -> 3)) Feb 23 04:59:22 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev 88c95c99-a622-4e0c-9995-4860918b8d40 (Updating node-proxy deployment (+3 -> 3)) Feb 23 04:59:22 localhost ceph-mgr[285904]: [progress INFO root] Completed event 88c95c99-a622-4e0c-9995-4860918b8d40 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 23 04:59:22 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", 
"states": ["destroyed"], "format": "json"} v 0) Feb 23 04:59:22 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 04:59:22 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:22.624 263679 INFO neutron.agent.dhcp.agent [None req-969d733a-8456-4fba-9b43-10e59473a968 - - - - - -] DHCP configuration for ports {'5c2b6a1f-0962-436b-948b-59555cf7a758'} is completed#033[00m Feb 23 04:59:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v318: 177 pgs: 177 active+clean; 145 MiB data, 835 MiB used, 41 GiB / 42 GiB avail; 107 KiB/s rd, 7.6 KiB/s wr, 151 op/s Feb 23 04:59:23 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:23.259 2 INFO neutron.agent.securitygroups_rpc [req-d0cb4007-c7bf-4f23-9a12-bffb679ca45d req-9d22fdae-ec49-499d-83f5-abbd29e1424d f49fd8b6937445efab40892d03b375d7 0421515e6bb54dea8db3ed218999e195 - - default default] Security group member updated ['c46df023-9a3e-4c54-a0bb-44b675220af4']#033[00m Feb 23 04:59:23 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:23.312 263679 INFO neutron.agent.linux.ip_lib [None req-7836c47f-6564-46b6-b20b-765e73d8bc54 - - - - - -] Device tapd5d59ef1-4c cannot be used as it has no MAC address#033[00m Feb 23 04:59:23 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 04:59:23 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:59:23 localhost nova_compute[280321]: 2026-02-23 09:59:23.382 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:23 localhost kernel: device tapd5d59ef1-4c entered promiscuous mode Feb 23 04:59:23 localhost NetworkManager[5987]: 
[1771840763.3919] manager: (tapd5d59ef1-4c): new Generic device (/org/freedesktop/NetworkManager/Devices/53) Feb 23 04:59:23 localhost systemd-udevd[318227]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:59:23 localhost nova_compute[280321]: 2026-02-23 09:59:23.392 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:23 localhost ovn_controller[155966]: 2026-02-23T09:59:23Z|00282|binding|INFO|Claiming lport d5d59ef1-4c90-48c3-ad6a-ce85e8cec53f for this chassis. Feb 23 04:59:23 localhost ovn_controller[155966]: 2026-02-23T09:59:23Z|00283|binding|INFO|d5d59ef1-4c90-48c3-ad6a-ce85e8cec53f: Claiming unknown Feb 23 04:59:23 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:23.396 2 INFO neutron.agent.securitygroups_rpc [None req-669db256-0e1b-436d-bf03-d63867ff4f10 446d22b7a9534ee1a139ffd6bef47f3c 582130ae966043d38e47148509dbe266 - - default default] Security group member updated ['ee3954e0-cd09-4323-ae3c-c3f1e63159bd']#033[00m Feb 23 04:59:23 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:23.409 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-78d7d8f3-7640-449e-aad8-f8bfcbb5961c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-78d7d8f3-7640-449e-aad8-f8bfcbb5961c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba877496ef70493683c3a5d3962fd41b', 'neutron:revision_number': '1', 
'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92e880b0-a8f3-47c1-a2b9-7c522c761379, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d5d59ef1-4c90-48c3-ad6a-ce85e8cec53f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:23 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:23.412 161842 INFO neutron.agent.ovn.metadata.agent [-] Port d5d59ef1-4c90-48c3-ad6a-ce85e8cec53f in datapath 78d7d8f3-7640-449e-aad8-f8bfcbb5961c bound to our chassis#033[00m Feb 23 04:59:23 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:23.416 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Port c6483827-20ed-4d72-bc8e-19c70205d64f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 04:59:23 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:23.416 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 78d7d8f3-7640-449e-aad8-f8bfcbb5961c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:59:23 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:23.417 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[a4730fed-811a-4f87-856d-616e4c0df03e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:59:23 localhost ovn_controller[155966]: 2026-02-23T09:59:23Z|00284|binding|INFO|Setting lport d5d59ef1-4c90-48c3-ad6a-ce85e8cec53f ovn-installed in OVS Feb 23 04:59:23 localhost ovn_controller[155966]: 2026-02-23T09:59:23Z|00285|binding|INFO|Setting lport 
d5d59ef1-4c90-48c3-ad6a-ce85e8cec53f up in Southbound Feb 23 04:59:23 localhost nova_compute[280321]: 2026-02-23 09:59:23.446 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:23 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:23.455 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:59:22Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d6583ad2-64c9-4a49-b392-0e7fbd180f17, ip_allocation=immediate, mac_address=fa:16:3e:81:41:f6, name=tempest-AllowedAddressPairTestJSON-1942886612, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:59:18Z, description=, dns_domain=, id=634b86db-4147-42c3-b055-72dbdf8593f3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-436015876, port_security_enabled=True, project_id=582130ae966043d38e47148509dbe266, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60630, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2391, status=ACTIVE, subnets=['3700ce4e-9186-4f0b-822d-ceb3fb5e9180'], tags=[], tenant_id=582130ae966043d38e47148509dbe266, updated_at=2026-02-23T09:59:20Z, vlan_transparent=None, network_id=634b86db-4147-42c3-b055-72dbdf8593f3, port_security_enabled=True, project_id=582130ae966043d38e47148509dbe266, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['ee3954e0-cd09-4323-ae3c-c3f1e63159bd'], standard_attr_id=2427, status=DOWN, tags=[], tenant_id=582130ae966043d38e47148509dbe266, 
updated_at=2026-02-23T09:59:22Z on network 634b86db-4147-42c3-b055-72dbdf8593f3#033[00m Feb 23 04:59:23 localhost nova_compute[280321]: 2026-02-23 09:59:23.486 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:23 localhost nova_compute[280321]: 2026-02-23 09:59:23.514 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:23 localhost dnsmasq[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/addn_hosts - 1 addresses Feb 23 04:59:23 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/host Feb 23 04:59:23 localhost podman[318439]: 2026-02-23 09:59:23.669634969 +0000 UTC m=+0.063038225 container kill c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 04:59:23 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/opts Feb 23 04:59:23 localhost nova_compute[280321]: 2026-02-23 09:59:23.911 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:59:24 localhost nova_compute[280321]: 2026-02-23 09:59:24.036 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:24 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:24.037 263679 INFO neutron.agent.dhcp.agent [None req-3b833763-f5d9-4f8d-9eef-6d9f9ca972ea - - - - - -] DHCP configuration for ports {'d6583ad2-64c9-4a49-b392-0e7fbd180f17'} is completed#033[00m Feb 23 04:59:24 localhost podman[318502]: Feb 23 04:59:24 localhost podman[318502]: 2026-02-23 09:59:24.460973334 +0000 UTC m=+0.090009139 container create f2267b213c723c8700666973d0d9b5c11c512722f5874bf4638f0e302fbce817 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78d7d8f3-7640-449e-aad8-f8bfcbb5961c, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, tcib_managed=true) Feb 23 04:59:24 localhost systemd[1]: Started libpod-conmon-f2267b213c723c8700666973d0d9b5c11c512722f5874bf4638f0e302fbce817.scope. Feb 23 04:59:24 localhost podman[318502]: 2026-02-23 09:59:24.418332353 +0000 UTC m=+0.047368218 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:59:24 localhost systemd[1]: Started libcrun container. 
Feb 23 04:59:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5139cbb9da269422310239bb4b5543eb28661b00564296292bfa2e92fa0bc869/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:59:24 localhost podman[318502]: 2026-02-23 09:59:24.550189979 +0000 UTC m=+0.179225814 container init f2267b213c723c8700666973d0d9b5c11c512722f5874bf4638f0e302fbce817 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78d7d8f3-7640-449e-aad8-f8bfcbb5961c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 23 04:59:24 localhost podman[318502]: 2026-02-23 09:59:24.559122802 +0000 UTC m=+0.188158637 container start f2267b213c723c8700666973d0d9b5c11c512722f5874bf4638f0e302fbce817 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78d7d8f3-7640-449e-aad8-f8bfcbb5961c, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 23 04:59:24 localhost dnsmasq[318520]: started, version 2.85 cachesize 150 Feb 23 04:59:24 localhost dnsmasq[318520]: DNS service limited to local subnets Feb 23 04:59:24 localhost dnsmasq[318520]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:59:24 localhost dnsmasq[318520]: warning: no upstream servers 
configured Feb 23 04:59:24 localhost dnsmasq-dhcp[318520]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:59:24 localhost dnsmasq[318520]: read /var/lib/neutron/dhcp/78d7d8f3-7640-449e-aad8-f8bfcbb5961c/addn_hosts - 0 addresses Feb 23 04:59:24 localhost dnsmasq-dhcp[318520]: read /var/lib/neutron/dhcp/78d7d8f3-7640-449e-aad8-f8bfcbb5961c/host Feb 23 04:59:24 localhost dnsmasq-dhcp[318520]: read /var/lib/neutron/dhcp/78d7d8f3-7640-449e-aad8-f8bfcbb5961c/opts Feb 23 04:59:24 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e153 e153: 6 total, 6 up, 6 in Feb 23 04:59:24 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:24.840 263679 INFO neutron.agent.dhcp.agent [None req-c7f9c335-da86-4c60-9cb5-c41ba3b2dc58 - - - - - -] DHCP configuration for ports {'0fa18b5f-e5e3-4874-8bc3-7064f183be25'} is completed#033[00m Feb 23 04:59:24 localhost nova_compute[280321]: 2026-02-23 09:59:24.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:59:24 localhost nova_compute[280321]: 2026-02-23 09:59:24.893 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 04:59:24 localhost nova_compute[280321]: 2026-02-23 09:59:24.893 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 04:59:24 localhost nova_compute[280321]: 2026-02-23 09:59:24.913 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 04:59:24 localhost nova_compute[280321]: 2026-02-23 09:59:24.914 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:59:24 localhost nova_compute[280321]: 2026-02-23 09:59:24.933 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:59:24 localhost nova_compute[280321]: 2026-02-23 09:59:24.934 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:59:24 localhost nova_compute[280321]: 2026-02-23 09:59:24.935 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:59:24 localhost nova_compute[280321]: 2026-02-23 09:59:24.935 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 04:59:24 localhost nova_compute[280321]: 2026-02-23 
09:59:24.936 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:59:24 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:24.947 2 INFO neutron.agent.securitygroups_rpc [None req-f9eeb769-9fe9-4ea4-8c2c-b3370b8aaa5a 446d22b7a9534ee1a139ffd6bef47f3c 582130ae966043d38e47148509dbe266 - - default default] Security group member updated ['ee3954e0-cd09-4323-ae3c-c3f1e63159bd']#033[00m Feb 23 04:59:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:25.008 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:59:24Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=53bdb895-563f-4496-b9ab-d080dab3b075, ip_allocation=immediate, mac_address=fa:16:3e:a4:50:cf, name=tempest-AllowedAddressPairTestJSON-1079864460, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:59:18Z, description=, dns_domain=, id=634b86db-4147-42c3-b055-72dbdf8593f3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-436015876, port_security_enabled=True, project_id=582130ae966043d38e47148509dbe266, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60630, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2391, status=ACTIVE, subnets=['3700ce4e-9186-4f0b-822d-ceb3fb5e9180'], tags=[], tenant_id=582130ae966043d38e47148509dbe266, updated_at=2026-02-23T09:59:20Z, vlan_transparent=None, 
network_id=634b86db-4147-42c3-b055-72dbdf8593f3, port_security_enabled=True, project_id=582130ae966043d38e47148509dbe266, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['ee3954e0-cd09-4323-ae3c-c3f1e63159bd'], standard_attr_id=2433, status=DOWN, tags=[], tenant_id=582130ae966043d38e47148509dbe266, updated_at=2026-02-23T09:59:24Z on network 634b86db-4147-42c3-b055-72dbdf8593f3#033[00m Feb 23 04:59:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v320: 177 pgs: 177 active+clean; 145 MiB data, 835 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 3.5 KiB/s wr, 80 op/s Feb 23 04:59:25 localhost dnsmasq[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/addn_hosts - 2 addresses Feb 23 04:59:25 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/host Feb 23 04:59:25 localhost podman[318558]: 2026-02-23 09:59:25.255207617 +0000 UTC m=+0.070563695 container kill c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 23 04:59:25 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/opts Feb 23 04:59:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:59:25 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1603226869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:59:25 localhost nova_compute[280321]: 2026-02-23 09:59:25.419 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:59:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:59:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:25.512 263679 INFO neutron.agent.dhcp.agent [None req-a33eea64-2395-4f27-9aa7-a50edd604371 - - - - - -] DHCP configuration for ports {'53bdb895-563f-4496-b9ab-d080dab3b075'} is completed#033[00m Feb 23 04:59:25 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 04:59:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 04:59:25 localhost nova_compute[280321]: 2026-02-23 09:59:25.623 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 04:59:25 localhost nova_compute[280321]: 2026-02-23 09:59:25.625 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=11656MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 04:59:25 localhost nova_compute[280321]: 2026-02-23 09:59:25.625 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:59:25 localhost nova_compute[280321]: 2026-02-23 09:59:25.626 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:59:25 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:25.779 2 INFO neutron.agent.securitygroups_rpc [None req-451b3e02-3a26-4970-9980-9e73ed6341a9 446d22b7a9534ee1a139ffd6bef47f3c 582130ae966043d38e47148509dbe266 - - default default] Security group member updated ['ee3954e0-cd09-4323-ae3c-c3f1e63159bd']#033[00m Feb 23 04:59:25 localhost nova_compute[280321]: 2026-02-23 09:59:25.874 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 04:59:25 localhost nova_compute[280321]: 2026-02-23 09:59:25.874 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: 
name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 04:59:25 localhost nova_compute[280321]: 2026-02-23 09:59:25.938 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Refreshing inventories for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 23 04:59:26 localhost dnsmasq[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/addn_hosts - 1 addresses Feb 23 04:59:26 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/host Feb 23 04:59:26 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/opts Feb 23 04:59:26 localhost podman[318598]: 2026-02-23 09:59:26.036246788 +0000 UTC m=+0.069828954 container kill c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 23 04:59:26 localhost nova_compute[280321]: 2026-02-23 09:59:26.085 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Updating ProviderTree inventory for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 
1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 23 04:59:26 localhost nova_compute[280321]: 2026-02-23 09:59:26.086 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Updating inventory in ProviderTree for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 23 04:59:26 localhost nova_compute[280321]: 2026-02-23 09:59:26.101 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Refreshing aggregate associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 23 04:59:26 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:26.112 2 INFO neutron.agent.securitygroups_rpc [None req-69ced21c-6b6a-47d6-87bf-2537c99ace20 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:26 localhost nova_compute[280321]: 2026-02-23 09:59:26.132 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Refreshing trait 
associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, traits: HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SHA,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NODE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_ABM,HW_CPU_X86_AESNI,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 23 04:59:26 localhost nova_compute[280321]: 2026-02-23 09:59:26.158 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 04:59:26 localhost 
neutron_sriov_agent[256355]: 2026-02-23 09:59:26.210 2 INFO neutron.agent.securitygroups_rpc [None req-12deaede-70c5-4b22-963c-9ca013b19b91 446d22b7a9534ee1a139ffd6bef47f3c 582130ae966043d38e47148509dbe266 - - default default] Security group member updated ['ee3954e0-cd09-4323-ae3c-c3f1e63159bd']#033[00m Feb 23 04:59:26 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:26.261 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:59:25Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=01d2d38e-6261-4c33-8faf-7500f7d5c494, ip_allocation=immediate, mac_address=fa:16:3e:f4:db:c9, name=tempest-AllowedAddressPairTestJSON-921258176, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:59:18Z, description=, dns_domain=, id=634b86db-4147-42c3-b055-72dbdf8593f3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-436015876, port_security_enabled=True, project_id=582130ae966043d38e47148509dbe266, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60630, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2391, status=ACTIVE, subnets=['3700ce4e-9186-4f0b-822d-ceb3fb5e9180'], tags=[], tenant_id=582130ae966043d38e47148509dbe266, updated_at=2026-02-23T09:59:20Z, vlan_transparent=None, network_id=634b86db-4147-42c3-b055-72dbdf8593f3, port_security_enabled=True, project_id=582130ae966043d38e47148509dbe266, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['ee3954e0-cd09-4323-ae3c-c3f1e63159bd'], standard_attr_id=2446, status=DOWN, tags=[], 
tenant_id=582130ae966043d38e47148509dbe266, updated_at=2026-02-23T09:59:26Z on network 634b86db-4147-42c3-b055-72dbdf8593f3#033[00m Feb 23 04:59:26 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 04:59:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:59:26 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2005301742' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:59:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:59:26 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2005301742' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:59:26 localhost dnsmasq[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/addn_hosts - 2 addresses Feb 23 04:59:26 localhost podman[318654]: 2026-02-23 09:59:26.476274664 +0000 UTC m=+0.057887788 container kill c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 23 04:59:26 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/host Feb 23 04:59:26 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/opts Feb 23 04:59:26 
localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 04:59:26 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3917936370' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 04:59:26 localhost nova_compute[280321]: 2026-02-23 09:59:26.611 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 04:59:26 localhost nova_compute[280321]: 2026-02-23 09:59:26.619 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 04:59:26 localhost nova_compute[280321]: 2026-02-23 09:59:26.640 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 04:59:26 localhost nova_compute[280321]: 2026-02-23 09:59:26.643 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for 
np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 04:59:26 localhost nova_compute[280321]: 2026-02-23 09:59:26.643 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:59:26 localhost nova_compute[280321]: 2026-02-23 09:59:26.644 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:59:26 localhost nova_compute[280321]: 2026-02-23 09:59:26.644 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 23 04:59:26 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:26.694 263679 INFO neutron.agent.dhcp.agent [None req-b329616b-e63c-4c3d-a954-813f96334b51 - - - - - -] DHCP configuration for ports {'01d2d38e-6261-4c33-8faf-7500f7d5c494'} is completed#033[00m Feb 23 04:59:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v321: 177 pgs: 177 active+clean; 192 MiB data, 901 MiB used, 41 GiB / 42 GiB avail; 151 KiB/s rd, 2.7 MiB/s wr, 212 op/s Feb 23 04:59:27 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:27.420 2 INFO neutron.agent.securitygroups_rpc [None req-3c7a21a7-3db2-45cc-9718-507bcfbefba1 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m 
Feb 23 04:59:27 localhost nova_compute[280321]: 2026-02-23 09:59:27.645 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:59:27 localhost nova_compute[280321]: 2026-02-23 09:59:27.646 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:59:27 localhost nova_compute[280321]: 2026-02-23 09:59:27.646 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:59:27 localhost nova_compute[280321]: 2026-02-23 09:59:27.646 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:59:27 localhost nova_compute[280321]: 2026-02-23 09:59:27.647 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 04:59:27 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:27.684 2 INFO neutron.agent.securitygroups_rpc [None req-d9a86df9-d280-468f-9907-620800eed5df 446d22b7a9534ee1a139ffd6bef47f3c 582130ae966043d38e47148509dbe266 - - default default] Security group member updated ['ee3954e0-cd09-4323-ae3c-c3f1e63159bd']#033[00m Feb 23 04:59:28 localhost dnsmasq[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/addn_hosts - 1 addresses Feb 23 04:59:28 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/host Feb 23 04:59:28 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/opts Feb 23 04:59:28 localhost podman[318692]: 2026-02-23 09:59:28.049618099 +0000 UTC m=+0.062037606 container kill c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216) Feb 23 04:59:28 localhost nova_compute[280321]: 2026-02-23 09:59:28.384 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:28 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:28.811 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, 
created_at=2026-02-23T09:59:27Z, description=, device_id=26950c9f-5235-41ec-a66d-9dc258e0fa3c, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8ecbf43b-a4de-4690-a8c6-0f90ea097e9f, ip_allocation=immediate, mac_address=fa:16:3e:32:ca:13, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:59:19Z, description=, dns_domain=, id=78d7d8f3-7640-449e-aad8-f8bfcbb5961c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-1166906079-network, port_security_enabled=True, project_id=ba877496ef70493683c3a5d3962fd41b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=18553, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2408, status=ACTIVE, subnets=['660b2f01-ba5d-49f6-a74f-c579fc19acd3'], tags=[], tenant_id=ba877496ef70493683c3a5d3962fd41b, updated_at=2026-02-23T09:59:21Z, vlan_transparent=None, network_id=78d7d8f3-7640-449e-aad8-f8bfcbb5961c, port_security_enabled=False, project_id=ba877496ef70493683c3a5d3962fd41b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2455, status=DOWN, tags=[], tenant_id=ba877496ef70493683c3a5d3962fd41b, updated_at=2026-02-23T09:59:27Z on network 78d7d8f3-7640-449e-aad8-f8bfcbb5961c#033[00m Feb 23 04:59:28 localhost nova_compute[280321]: 2026-02-23 09:59:28.889 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:59:28 localhost nova_compute[280321]: 2026-02-23 09:59:28.919 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic 
task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:59:29 localhost nova_compute[280321]: 2026-02-23 09:59:29.058 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:29 localhost dnsmasq[318520]: read /var/lib/neutron/dhcp/78d7d8f3-7640-449e-aad8-f8bfcbb5961c/addn_hosts - 1 addresses Feb 23 04:59:29 localhost podman[318730]: 2026-02-23 09:59:29.090502323 +0000 UTC m=+0.074826085 container kill f2267b213c723c8700666973d0d9b5c11c512722f5874bf4638f0e302fbce817 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78d7d8f3-7640-449e-aad8-f8bfcbb5961c, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:59:29 localhost dnsmasq-dhcp[318520]: read /var/lib/neutron/dhcp/78d7d8f3-7640-449e-aad8-f8bfcbb5961c/host Feb 23 04:59:29 localhost dnsmasq-dhcp[318520]: read /var/lib/neutron/dhcp/78d7d8f3-7640-449e-aad8-f8bfcbb5961c/opts Feb 23 04:59:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v322: 177 pgs: 177 active+clean; 192 MiB data, 901 MiB used, 41 GiB / 42 GiB avail; 122 KiB/s rd, 2.7 MiB/s wr, 172 op/s Feb 23 04:59:29 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:29.348 263679 INFO neutron.agent.dhcp.agent [None req-5fb32a67-4cca-4f0c-8a72-27ac8115f45d - - - - - -] DHCP configuration for ports {'8ecbf43b-a4de-4690-a8c6-0f90ea097e9f'} is completed#033[00m Feb 23 04:59:29 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:29.758 2 INFO neutron.agent.securitygroups_rpc [None 
req-79540abb-7fc2-40cd-b9ab-4b003306a8d3 446d22b7a9534ee1a139ffd6bef47f3c 582130ae966043d38e47148509dbe266 - - default default] Security group member updated ['ee3954e0-cd09-4323-ae3c-c3f1e63159bd']#033[00m Feb 23 04:59:29 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:29.804 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:59:29Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=154fec3a-184d-4579-b914-9c82a8b8016e, ip_allocation=immediate, mac_address=fa:16:3e:ad:38:50, name=tempest-AllowedAddressPairTestJSON-826444690, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:59:18Z, description=, dns_domain=, id=634b86db-4147-42c3-b055-72dbdf8593f3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-436015876, port_security_enabled=True, project_id=582130ae966043d38e47148509dbe266, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60630, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2391, status=ACTIVE, subnets=['3700ce4e-9186-4f0b-822d-ceb3fb5e9180'], tags=[], tenant_id=582130ae966043d38e47148509dbe266, updated_at=2026-02-23T09:59:20Z, vlan_transparent=None, network_id=634b86db-4147-42c3-b055-72dbdf8593f3, port_security_enabled=True, project_id=582130ae966043d38e47148509dbe266, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['ee3954e0-cd09-4323-ae3c-c3f1e63159bd'], standard_attr_id=2463, status=DOWN, tags=[], tenant_id=582130ae966043d38e47148509dbe266, updated_at=2026-02-23T09:59:29Z on network 
634b86db-4147-42c3-b055-72dbdf8593f3#033[00m Feb 23 04:59:30 localhost dnsmasq[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/addn_hosts - 2 addresses Feb 23 04:59:30 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/host Feb 23 04:59:30 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/opts Feb 23 04:59:30 localhost podman[318769]: 2026-02-23 09:59:30.02469593 +0000 UTC m=+0.065156420 container kill c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:59:30 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:30.294 263679 INFO neutron.agent.dhcp.agent [None req-523411e8-d679-447e-878a-7ac7e3562129 - - - - - -] DHCP configuration for ports {'154fec3a-184d-4579-b914-9c82a8b8016e'} is completed#033[00m Feb 23 04:59:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:59:30 localhost nova_compute[280321]: 2026-02-23 09:59:30.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:59:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v323: 177 pgs: 177 active+clean; 192 MiB data, 901 MiB used, 41 GiB 
/ 42 GiB avail; 86 KiB/s rd, 2.3 MiB/s wr, 122 op/s Feb 23 04:59:31 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:31.264 2 INFO neutron.agent.securitygroups_rpc [None req-5bebbbf2-890e-4b6f-92fc-689753c8df12 446d22b7a9534ee1a139ffd6bef47f3c 582130ae966043d38e47148509dbe266 - - default default] Security group member updated ['ee3954e0-cd09-4323-ae3c-c3f1e63159bd']#033[00m Feb 23 04:59:31 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:31.364 2 INFO neutron.agent.securitygroups_rpc [None req-6d65bfbe-b42e-4b67-8b3f-02498c056686 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:31 localhost dnsmasq[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/addn_hosts - 1 addresses Feb 23 04:59:31 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/host Feb 23 04:59:31 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/opts Feb 23 04:59:31 localhost podman[318806]: 2026-02-23 09:59:31.49901464 +0000 UTC m=+0.063291423 container kill c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:59:31 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:31.937 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, 
binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:59:27Z, description=, device_id=26950c9f-5235-41ec-a66d-9dc258e0fa3c, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8ecbf43b-a4de-4690-a8c6-0f90ea097e9f, ip_allocation=immediate, mac_address=fa:16:3e:32:ca:13, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:59:19Z, description=, dns_domain=, id=78d7d8f3-7640-449e-aad8-f8bfcbb5961c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-1166906079-network, port_security_enabled=True, project_id=ba877496ef70493683c3a5d3962fd41b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=18553, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2408, status=ACTIVE, subnets=['660b2f01-ba5d-49f6-a74f-c579fc19acd3'], tags=[], tenant_id=ba877496ef70493683c3a5d3962fd41b, updated_at=2026-02-23T09:59:21Z, vlan_transparent=None, network_id=78d7d8f3-7640-449e-aad8-f8bfcbb5961c, port_security_enabled=False, project_id=ba877496ef70493683c3a5d3962fd41b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2455, status=DOWN, tags=[], tenant_id=ba877496ef70493683c3a5d3962fd41b, updated_at=2026-02-23T09:59:27Z on network 78d7d8f3-7640-449e-aad8-f8bfcbb5961c#033[00m Feb 23 04:59:32 localhost openstack_network_exporter[243519]: ERROR 09:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 04:59:32 localhost openstack_network_exporter[243519]: Feb 23 04:59:32 localhost openstack_network_exporter[243519]: ERROR 09:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 04:59:32 localhost openstack_network_exporter[243519]: Feb 
23 04:59:32 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e154 e154: 6 total, 6 up, 6 in Feb 23 04:59:32 localhost dnsmasq[318520]: read /var/lib/neutron/dhcp/78d7d8f3-7640-449e-aad8-f8bfcbb5961c/addn_hosts - 1 addresses Feb 23 04:59:32 localhost podman[318843]: 2026-02-23 09:59:32.194849309 +0000 UTC m=+0.067975067 container kill f2267b213c723c8700666973d0d9b5c11c512722f5874bf4638f0e302fbce817 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78d7d8f3-7640-449e-aad8-f8bfcbb5961c, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:59:32 localhost dnsmasq-dhcp[318520]: read /var/lib/neutron/dhcp/78d7d8f3-7640-449e-aad8-f8bfcbb5961c/host Feb 23 04:59:32 localhost dnsmasq-dhcp[318520]: read /var/lib/neutron/dhcp/78d7d8f3-7640-449e-aad8-f8bfcbb5961c/opts Feb 23 04:59:32 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:32.241 2 INFO neutron.agent.securitygroups_rpc [None req-b9f58e08-6424-4ab4-86b8-227dc5c4bdfc 446d22b7a9534ee1a139ffd6bef47f3c 582130ae966043d38e47148509dbe266 - - default default] Security group member updated ['ee3954e0-cd09-4323-ae3c-c3f1e63159bd']#033[00m Feb 23 04:59:32 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:32.260 2 INFO neutron.agent.securitygroups_rpc [None req-9acdbf04-3277-47bc-8ea9-7f279ed4a9a4 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:32 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:32.301 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], 
binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:59:31Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4fe78371-5931-407f-9630-827c138f1041, ip_allocation=immediate, mac_address=fa:16:3e:7a:e2:a3, name=tempest-AllowedAddressPairTestJSON-425042819, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:59:18Z, description=, dns_domain=, id=634b86db-4147-42c3-b055-72dbdf8593f3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-436015876, port_security_enabled=True, project_id=582130ae966043d38e47148509dbe266, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60630, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2391, status=ACTIVE, subnets=['3700ce4e-9186-4f0b-822d-ceb3fb5e9180'], tags=[], tenant_id=582130ae966043d38e47148509dbe266, updated_at=2026-02-23T09:59:20Z, vlan_transparent=None, network_id=634b86db-4147-42c3-b055-72dbdf8593f3, port_security_enabled=True, project_id=582130ae966043d38e47148509dbe266, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['ee3954e0-cd09-4323-ae3c-c3f1e63159bd'], standard_attr_id=2476, status=DOWN, tags=[], tenant_id=582130ae966043d38e47148509dbe266, updated_at=2026-02-23T09:59:32Z on network 634b86db-4147-42c3-b055-72dbdf8593f3#033[00m Feb 23 04:59:32 localhost nova_compute[280321]: 2026-02-23 09:59:32.359 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:32 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:32.359 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: 
SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:32 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:32.361 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 04:59:32 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:32.576 263679 INFO neutron.agent.dhcp.agent [None req-a6d3f6fc-b664-49f3-9a9a-9a4de5f7add3 - - - - - -] DHCP configuration for ports {'8ecbf43b-a4de-4690-a8c6-0f90ea097e9f'} is completed#033[00m Feb 23 04:59:32 localhost dnsmasq[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/addn_hosts - 2 addresses Feb 23 04:59:32 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/host Feb 23 04:59:32 localhost systemd[1]: tmp-crun.atmN8B.mount: Deactivated successfully. 
Feb 23 04:59:32 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/opts Feb 23 04:59:32 localhost podman[318879]: 2026-02-23 09:59:32.759554283 +0000 UTC m=+0.076022263 container kill c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:59:33 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:33.029 263679 INFO neutron.agent.dhcp.agent [None req-49e7e675-97ef-42a5-960b-f06721cf2ab2 - - - - - -] DHCP configuration for ports {'4fe78371-5931-407f-9630-827c138f1041'} is completed#033[00m Feb 23 04:59:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v325: 177 pgs: 177 active+clean; 192 MiB data, 901 MiB used, 41 GiB / 42 GiB avail; 110 KiB/s rd, 2.5 MiB/s wr, 154 op/s Feb 23 04:59:33 localhost nova_compute[280321]: 2026-02-23 09:59:33.414 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:33 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:33.786 2 INFO neutron.agent.securitygroups_rpc [None req-fd6425dd-2055-4cbb-a840-beb509d9498c 446d22b7a9534ee1a139ffd6bef47f3c 582130ae966043d38e47148509dbe266 - - default default] Security group member updated ['ee3954e0-cd09-4323-ae3c-c3f1e63159bd']#033[00m Feb 23 04:59:33 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:33.886 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], 
binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T09:59:32Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5d9fe5fb-1503-479b-a7cf-5a3c11f6bb57, ip_allocation=immediate, mac_address=fa:16:3e:4e:f3:24, name=tempest-AllowedAddressPairTestJSON-1447180909, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T09:59:18Z, description=, dns_domain=, id=634b86db-4147-42c3-b055-72dbdf8593f3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-436015876, port_security_enabled=True, project_id=582130ae966043d38e47148509dbe266, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60630, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2391, status=ACTIVE, subnets=['3700ce4e-9186-4f0b-822d-ceb3fb5e9180'], tags=[], tenant_id=582130ae966043d38e47148509dbe266, updated_at=2026-02-23T09:59:20Z, vlan_transparent=None, network_id=634b86db-4147-42c3-b055-72dbdf8593f3, port_security_enabled=True, project_id=582130ae966043d38e47148509dbe266, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['ee3954e0-cd09-4323-ae3c-c3f1e63159bd'], standard_attr_id=2477, status=DOWN, tags=[], tenant_id=582130ae966043d38e47148509dbe266, updated_at=2026-02-23T09:59:33Z on network 634b86db-4147-42c3-b055-72dbdf8593f3#033[00m Feb 23 04:59:33 localhost ovn_controller[155966]: 2026-02-23T09:59:33Z|00286|ovn_bfd|INFO|Enabled BFD on interface ovn-5b0126-0 Feb 23 04:59:33 localhost ovn_controller[155966]: 2026-02-23T09:59:33Z|00287|ovn_bfd|INFO|Enabled BFD on interface ovn-585d62-0 Feb 23 04:59:33 localhost ovn_controller[155966]: 2026-02-23T09:59:33Z|00288|ovn_bfd|INFO|Enabled BFD on interface 
ovn-b9c72d-0 Feb 23 04:59:33 localhost nova_compute[280321]: 2026-02-23 09:59:33.908 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:33 localhost nova_compute[280321]: 2026-02-23 09:59:33.926 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 04:59:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 04:59:33 localhost nova_compute[280321]: 2026-02-23 09:59:33.935 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:33 localhost nova_compute[280321]: 2026-02-23 09:59:33.946 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:33 localhost nova_compute[280321]: 2026-02-23 09:59:33.951 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:33 localhost nova_compute[280321]: 2026-02-23 09:59:33.974 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:34 localhost podman[318904]: 2026-02-23 09:59:34.022975102 +0000 UTC m=+0.081669805 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal 
Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 23 04:59:34 localhost nova_compute[280321]: 2026-02-23 09:59:34.062 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:34 localhost podman[318904]: 2026-02-23 09:59:34.064981285 +0000 UTC m=+0.123675968 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, architecture=x86_64, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1770267347, vendor=Red Hat, Inc., io.buildah.version=1.33.7, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z) Feb 23 04:59:34 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 04:59:34 localhost podman[318903]: 2026-02-23 09:59:34.137464938 +0000 UTC m=+0.194734158 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, 
maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:59:34 localhost podman[318903]: 2026-02-23 09:59:34.145497834 +0000 UTC m=+0.202767044 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 04:59:34 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 04:59:34 localhost dnsmasq[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/addn_hosts - 3 addresses Feb 23 04:59:34 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/host Feb 23 04:59:34 localhost podman[318949]: 2026-02-23 09:59:34.175041795 +0000 UTC m=+0.101782988 container kill c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 23 04:59:34 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/opts Feb 23 04:59:34 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e155 e155: 6 total, 6 up, 6 in Feb 23 04:59:34 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:34.467 263679 INFO neutron.agent.dhcp.agent [None req-41400c2e-c291-441a-84e4-9f832e5f535c - - - - - -] DHCP configuration for ports {'5d9fe5fb-1503-479b-a7cf-5a3c11f6bb57'} is completed#033[00m Feb 23 04:59:34 localhost nova_compute[280321]: 2026-02-23 09:59:34.881 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:34 localhost nova_compute[280321]: 2026-02-23 09:59:34.903 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 04:59:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:59:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:59:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:59:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 04:59:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 04:59:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v327: 177 pgs: 177 active+clean; 192 MiB data, 901 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 22 KiB/s wr, 33 op/s Feb 23 04:59:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:59:35 localhost nova_compute[280321]: 2026-02-23 09:59:35.691 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:35 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:35.954 2 INFO neutron.agent.securitygroups_rpc [None req-98b565c4-bd61-4452-bccd-bd5e5d274484 446d22b7a9534ee1a139ffd6bef47f3c 582130ae966043d38e47148509dbe266 - - default default] Security group member updated ['ee3954e0-cd09-4323-ae3c-c3f1e63159bd']#033[00m Feb 23 04:59:36 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:36.038 2 INFO neutron.agent.securitygroups_rpc [None req-1e0c6ce2-0fa1-4c95-beb4-0c824ad3b485 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:36 localhost dnsmasq[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/addn_hosts - 2 addresses Feb 23 04:59:36 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/host 
Feb 23 04:59:36 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/opts Feb 23 04:59:36 localhost systemd[1]: tmp-crun.r1pIZz.mount: Deactivated successfully. Feb 23 04:59:36 localhost podman[319000]: 2026-02-23 09:59:36.187582082 +0000 UTC m=+0.052941908 container kill c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 04:59:36 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e156 e156: 6 total, 6 up, 6 in Feb 23 04:59:36 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:36.628 2 INFO neutron.agent.securitygroups_rpc [None req-8ded7049-1595-4d50-af9e-057a8bcc7a90 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:36 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:36.810 2 INFO neutron.agent.securitygroups_rpc [None req-0d2af92f-c1c3-481d-a685-438d421cfdf3 446d22b7a9534ee1a139ffd6bef47f3c 582130ae966043d38e47148509dbe266 - - default default] Security group member updated ['ee3954e0-cd09-4323-ae3c-c3f1e63159bd']#033[00m Feb 23 04:59:37 localhost dnsmasq[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/addn_hosts - 1 addresses Feb 23 04:59:37 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/host Feb 23 04:59:37 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/opts 
Feb 23 04:59:37 localhost podman[319038]: 2026-02-23 09:59:37.041083524 +0000 UTC m=+0.064870992 container kill c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0. Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:37.124057) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37 Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840777124118, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2363, "num_deletes": 263, "total_data_size": 3203546, "memory_usage": 3253328, "flush_reason": "Manual Compaction"} Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840777135689, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 2069328, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23201, "largest_seqno": 
25558, "table_properties": {"data_size": 2060472, "index_size": 5429, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19826, "raw_average_key_size": 20, "raw_value_size": 2041984, "raw_average_value_size": 2158, "num_data_blocks": 236, "num_entries": 946, "num_filter_entries": 946, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840638, "oldest_key_time": 1771840638, "file_creation_time": 1771840777, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}} Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 11686 microseconds, and 6641 cpu microseconds. Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:37.135741) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 2069328 bytes OK Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:37.135767) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:37.139692) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:37.139730) EVENT_LOG_v1 {"time_micros": 1771840777139719, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:37.139760) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3192814, prev total WAL file size 3193138, number of live WAL files 2. Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:37.140975) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303139' seq:72057594037927935, type:22 .. 
'6C6F676D0034323731' seq:0, type:0; will stop at (end) Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(2020KB)], [36(16MB)] Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840777141066, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 18874712, "oldest_snapshot_seqno": -1} Feb 23 04:59:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v329: 177 pgs: 177 active+clean; 192 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 3.9 MiB/s rd, 33 KiB/s wr, 197 op/s Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 12994 keys, 18375778 bytes, temperature: kUnknown Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840777246745, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 18375778, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18298301, "index_size": 43904, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32517, "raw_key_size": 347070, "raw_average_key_size": 26, "raw_value_size": 18073749, "raw_average_value_size": 1390, "num_data_blocks": 1680, "num_entries": 12994, "num_filter_entries": 12994, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771840777, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}} Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:37.247087) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 18375778 bytes Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:37.248931) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.5 rd, 173.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 16.0 +0.0 blob) out(17.5 +0.0 blob), read-write-amplify(18.0) write-amplify(8.9) OK, records in: 13535, records dropped: 541 output_compression: NoCompression Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:37.248961) EVENT_LOG_v1 {"time_micros": 1771840777248946, "job": 20, "event": "compaction_finished", "compaction_time_micros": 105755, "compaction_time_cpu_micros": 58392, "output_level": 6, "num_output_files": 1, "total_output_size": 18375778, "num_input_records": 13535, "num_output_records": 12994, "num_subcompactions": 1, "output_compression": "NoCompression", 
"num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840777249382, "job": 20, "event": "table_file_deletion", "file_number": 38} Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840777252008, "job": 20, "event": "table_file_deletion", "file_number": 36} Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:37.140838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:37.252093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:37.252101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:37.252104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:37.252107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:59:37 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:37.252111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting 
Feb 23 04:59:37 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e157 e157: 6 total, 6 up, 6 in Feb 23 04:59:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 04:59:37 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:37.957 2 INFO neutron.agent.securitygroups_rpc [None req-4c34a092-1bee-4730-b083-a159f9af8bdd 446d22b7a9534ee1a139ffd6bef47f3c 582130ae966043d38e47148509dbe266 - - default default] Security group member updated ['ee3954e0-cd09-4323-ae3c-c3f1e63159bd']#033[00m Feb 23 04:59:38 localhost podman[319060]: 2026-02-23 09:59:38.024821895 +0000 UTC m=+0.097723505 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.build-date=20260216, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true) Feb 23 04:59:38 localhost podman[319060]: 2026-02-23 09:59:38.092041328 +0000 UTC m=+0.164942988 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 23 04:59:38 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 04:59:38 localhost dnsmasq[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/addn_hosts - 0 addresses Feb 23 04:59:38 localhost podman[319101]: 2026-02-23 09:59:38.242066018 +0000 UTC m=+0.050982917 container kill c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:59:38 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/host Feb 23 04:59:38 localhost dnsmasq-dhcp[318382]: read /var/lib/neutron/dhcp/634b86db-4147-42c3-b055-72dbdf8593f3/opts Feb 23 04:59:38 localhost nova_compute[280321]: 2026-02-23 09:59:38.416 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:38 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e158 e158: 6 total, 6 up, 6 in Feb 23 04:59:38 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:38.599 2 INFO neutron.agent.securitygroups_rpc [None req-32c78c98-6e6f-4ee3-a8c4-559f4243f6cc 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:38 localhost dnsmasq[318382]: exiting on receipt of SIGTERM Feb 23 04:59:38 localhost podman[319138]: 2026-02-23 09:59:38.778642294 +0000 UTC m=+0.065715509 container kill c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:59:38 localhost systemd[1]: libpod-c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760.scope: Deactivated successfully. Feb 23 04:59:38 localhost podman[319152]: 2026-02-23 09:59:38.863181755 +0000 UTC m=+0.063376187 container died c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:59:38 localhost podman[319152]: 2026-02-23 09:59:38.897804462 +0000 UTC m=+0.097998864 container cleanup c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, io.buildah.version=1.43.0) Feb 23 04:59:38 localhost systemd[1]: 
libpod-conmon-c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760.scope: Deactivated successfully. Feb 23 04:59:38 localhost podman[319154]: 2026-02-23 09:59:38.943517949 +0000 UTC m=+0.133323343 container remove c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-634b86db-4147-42c3-b055-72dbdf8593f3, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 23 04:59:38 localhost nova_compute[280321]: 2026-02-23 09:59:38.954 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:38 localhost ovn_controller[155966]: 2026-02-23T09:59:38Z|00289|binding|INFO|Releasing lport 96b084fa-1207-4367-8e2b-dcd92497359e from this chassis (sb_readonly=0) Feb 23 04:59:38 localhost ovn_controller[155966]: 2026-02-23T09:59:38Z|00290|binding|INFO|Setting lport 96b084fa-1207-4367-8e2b-dcd92497359e down in Southbound Feb 23 04:59:38 localhost kernel: device tap96b084fa-12 left promiscuous mode Feb 23 04:59:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:38.964 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 
'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-634b86db-4147-42c3-b055-72dbdf8593f3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-634b86db-4147-42c3-b055-72dbdf8593f3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '582130ae966043d38e47148509dbe266', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88d00d90-aa56-4423-ad85-ec995a1f88e5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=96b084fa-1207-4367-8e2b-dcd92497359e) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:38.967 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 96b084fa-1207-4367-8e2b-dcd92497359e in datapath 634b86db-4147-42c3-b055-72dbdf8593f3 unbound from our chassis#033[00m Feb 23 04:59:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:38.971 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 634b86db-4147-42c3-b055-72dbdf8593f3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:59:38 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:38.972 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb1105f-6e86-4738-9d20-4397ca87398a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:59:38 localhost nova_compute[280321]: 2026-02-23 09:59:38.975 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 
04:59:38 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:38.976 2 INFO neutron.agent.securitygroups_rpc [None req-6b31b9c1-f7cb-4728-a77a-3dbb7699a58d 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:39 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:39.004 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:59:39 localhost systemd[1]: var-lib-containers-storage-overlay-0652886642e9b8976254714fbdcd59246d8eae9055800e34d39d7ccd9b327508-merged.mount: Deactivated successfully. Feb 23 04:59:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c87cd8373812d7697777eb441c76c19775e8c8287c0b806490795810a4e5a760-userdata-shm.mount: Deactivated successfully. Feb 23 04:59:39 localhost systemd[1]: run-netns-qdhcp\x2d634b86db\x2d4147\x2d42c3\x2db055\x2d72dbdf8593f3.mount: Deactivated successfully. 
Feb 23 04:59:39 localhost nova_compute[280321]: 2026-02-23 09:59:39.064 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v332: 177 pgs: 177 active+clean; 192 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 4.7 MiB/s rd, 4.5 KiB/s wr, 197 op/s Feb 23 04:59:39 localhost sshd[319183]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:59:39 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:39.389 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:59:39 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e159 e159: 6 total, 6 up, 6 in Feb 23 04:59:39 localhost nova_compute[280321]: 2026-02-23 09:59:39.832 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:39 localhost nova_compute[280321]: 2026-02-23 09:59:39.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 04:59:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:59:41 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:41.036 263679 INFO neutron.agent.linux.ip_lib [None req-68530568-80cf-437b-966b-2bf0990fee46 - - - - - -] Device tap3508a2da-25 cannot be used as it has no MAC address#033[00m Feb 23 04:59:41 localhost nova_compute[280321]: 2026-02-23 09:59:41.096 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:41 
localhost kernel: device tap3508a2da-25 entered promiscuous mode Feb 23 04:59:41 localhost NetworkManager[5987]: [1771840781.1060] manager: (tap3508a2da-25): new Generic device (/org/freedesktop/NetworkManager/Devices/54) Feb 23 04:59:41 localhost nova_compute[280321]: 2026-02-23 09:59:41.106 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:41 localhost ovn_controller[155966]: 2026-02-23T09:59:41Z|00291|binding|INFO|Claiming lport 3508a2da-2572-4507-affc-3ad293e1e24b for this chassis. Feb 23 04:59:41 localhost ovn_controller[155966]: 2026-02-23T09:59:41Z|00292|binding|INFO|3508a2da-2572-4507-affc-3ad293e1e24b: Claiming unknown Feb 23 04:59:41 localhost systemd-udevd[319195]: Network interface NamePolicy= disabled on kernel command line. Feb 23 04:59:41 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:41.118 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-6e4091a1-2383-4121-a743-ef58bee4785d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e4091a1-2383-4121-a743-ef58bee4785d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20ee87502ddb4dde82419e1f4302f590', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], 
mirror_rules=[], datapath=407aeab9-650a-4861-88f9-b68d79a26eca, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3508a2da-2572-4507-affc-3ad293e1e24b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:41 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:41.120 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 3508a2da-2572-4507-affc-3ad293e1e24b in datapath 6e4091a1-2383-4121-a743-ef58bee4785d bound to our chassis#033[00m Feb 23 04:59:41 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:41.121 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6e4091a1-2383-4121-a743-ef58bee4785d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:59:41 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:41.122 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[d4dead49-6537-431c-8d06-a6a12dca4c82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:59:41 localhost journal[229268]: ethtool ioctl error on tap3508a2da-25: No such device Feb 23 04:59:41 localhost journal[229268]: ethtool ioctl error on tap3508a2da-25: No such device Feb 23 04:59:41 localhost ovn_controller[155966]: 2026-02-23T09:59:41Z|00293|binding|INFO|Setting lport 3508a2da-2572-4507-affc-3ad293e1e24b ovn-installed in OVS Feb 23 04:59:41 localhost ovn_controller[155966]: 2026-02-23T09:59:41Z|00294|binding|INFO|Setting lport 3508a2da-2572-4507-affc-3ad293e1e24b up in Southbound Feb 23 04:59:41 localhost nova_compute[280321]: 2026-02-23 09:59:41.155 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:41 localhost journal[229268]: ethtool ioctl error on tap3508a2da-25: 
No such device Feb 23 04:59:41 localhost journal[229268]: ethtool ioctl error on tap3508a2da-25: No such device Feb 23 04:59:41 localhost journal[229268]: ethtool ioctl error on tap3508a2da-25: No such device Feb 23 04:59:41 localhost journal[229268]: ethtool ioctl error on tap3508a2da-25: No such device Feb 23 04:59:41 localhost journal[229268]: ethtool ioctl error on tap3508a2da-25: No such device Feb 23 04:59:41 localhost journal[229268]: ethtool ioctl error on tap3508a2da-25: No such device Feb 23 04:59:41 localhost nova_compute[280321]: 2026-02-23 09:59:41.186 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:41 localhost nova_compute[280321]: 2026-02-23 09:59:41.211 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v334: 177 pgs: 177 active+clean; 216 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 5.5 MiB/s rd, 5.0 MiB/s wr, 341 op/s Feb 23 04:59:41 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e160 e160: 6 total, 6 up, 6 in Feb 23 04:59:42 localhost podman[319266]: Feb 23 04:59:42 localhost podman[319266]: 2026-02-23 09:59:42.075372906 +0000 UTC m=+0.082575423 container create 590ca35cd7a93ff08eef66a3d5d61b20fe574055fad4fd5e819f0995c0eaf782 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e4091a1-2383-4121-a743-ef58bee4785d, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:59:42 localhost systemd[1]: Started 
libpod-conmon-590ca35cd7a93ff08eef66a3d5d61b20fe574055fad4fd5e819f0995c0eaf782.scope. Feb 23 04:59:42 localhost podman[319266]: 2026-02-23 09:59:42.04014744 +0000 UTC m=+0.047349947 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:59:42 localhost systemd[1]: Started libcrun container. Feb 23 04:59:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8070327831c45b90c990557110ffb97eae23ef1b414d417a67dffd7ea7f5ec10/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:59:42 localhost podman[319266]: 2026-02-23 09:59:42.161574547 +0000 UTC m=+0.168777044 container init 590ca35cd7a93ff08eef66a3d5d61b20fe574055fad4fd5e819f0995c0eaf782 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e4091a1-2383-4121-a743-ef58bee4785d, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0) Feb 23 04:59:42 localhost podman[319266]: 2026-02-23 09:59:42.171580043 +0000 UTC m=+0.178782520 container start 590ca35cd7a93ff08eef66a3d5d61b20fe574055fad4fd5e819f0995c0eaf782 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e4091a1-2383-4121-a743-ef58bee4785d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:59:42 localhost dnsmasq[319284]: started, version 2.85 cachesize 150 Feb 23 04:59:42 
localhost dnsmasq[319284]: DNS service limited to local subnets Feb 23 04:59:42 localhost dnsmasq[319284]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:59:42 localhost dnsmasq[319284]: warning: no upstream servers configured Feb 23 04:59:42 localhost dnsmasq-dhcp[319284]: DHCP, static leases only on 10.101.0.0, lease time 1d Feb 23 04:59:42 localhost dnsmasq[319284]: read /var/lib/neutron/dhcp/6e4091a1-2383-4121-a743-ef58bee4785d/addn_hosts - 0 addresses Feb 23 04:59:42 localhost dnsmasq-dhcp[319284]: read /var/lib/neutron/dhcp/6e4091a1-2383-4121-a743-ef58bee4785d/host Feb 23 04:59:42 localhost dnsmasq-dhcp[319284]: read /var/lib/neutron/dhcp/6e4091a1-2383-4121-a743-ef58bee4785d/opts Feb 23 04:59:42 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:42.364 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 04:59:42 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:42.381 263679 INFO neutron.agent.dhcp.agent [None req-05da7710-240e-43cb-875c-06238cc120ce - - - - - -] DHCP configuration for ports {'b87fdeab-0085-4b9d-a3d7-83cfbec178fe'} is completed#033[00m Feb 23 04:59:42 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:42.397 2 INFO neutron.agent.securitygroups_rpc [None req-ea1ece96-8ed7-4628-9eb3-8fcb879af238 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:42 localhost podman[241086]: time="2026-02-23T09:59:42Z" level=info msg="List containers: received `last` parameter - overwriting 
`limit`" Feb 23 04:59:42 localhost podman[241086]: @ - - [23/Feb/2026:09:59:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157718 "" "Go-http-client/1.1" Feb 23 04:59:42 localhost podman[241086]: @ - - [23/Feb/2026:09:59:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18768 "" "Go-http-client/1.1" Feb 23 04:59:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v336: 177 pgs: 177 active+clean; 288 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 17 MiB/s wr, 223 op/s Feb 23 04:59:43 localhost nova_compute[280321]: 2026-02-23 09:59:43.418 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:43 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:43.476 2 INFO neutron.agent.securitygroups_rpc [None req-cd7d52bd-6d88-4de8-a801-ff1f615f2e4b 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:43 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e161 e161: 6 total, 6 up, 6 in Feb 23 04:59:44 localhost nova_compute[280321]: 2026-02-23 09:59:44.087 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v338: 177 pgs: 177 active+clean; 288 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 16 MiB/s wr, 216 op/s Feb 23 04:59:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:59:46 localhost sshd[319286]: main: sshd: ssh-rsa algorithm is disabled Feb 23 04:59:46 localhost systemd[1]: 
Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 04:59:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 04:59:47 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e162 e162: 6 total, 6 up, 6 in Feb 23 04:59:47 localhost podman[319288]: 2026-02-23 09:59:47.024853175 +0000 UTC m=+0.092747363 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 04:59:47 localhost podman[319289]: 2026-02-23 09:59:47.091753438 +0000 UTC m=+0.157371376 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, config_id=ceilometer_agent_compute) Feb 23 04:59:47 localhost podman[319289]: 2026-02-23 09:59:47.10687202 +0000 UTC m=+0.172489958 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 04:59:47 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 04:59:47 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e163 e163: 6 total, 6 up, 6 in Feb 23 04:59:47 localhost podman[319288]: 2026-02-23 09:59:47.16156172 +0000 UTC m=+0.229455908 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 04:59:47 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 04:59:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v341: 177 pgs: 177 active+clean; 479 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 2.9 MiB/s rd, 41 MiB/s wr, 376 op/s Feb 23 04:59:48 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:59:48 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1701378987' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:59:48 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:59:48 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1701378987' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:59:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:48.315 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 04:59:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:48.316 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 04:59:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:48.316 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 04:59:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:48.330 161842 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port c17564ca-1e5a-4b34-849d-fc29407f0201 with type ""#033[00m Feb 23 04:59:48 localhost ovn_controller[155966]: 2026-02-23T09:59:48Z|00295|binding|INFO|Removing iface tap3508a2da-25 ovn-installed in OVS Feb 23 04:59:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:48.331 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-6e4091a1-2383-4121-a743-ef58bee4785d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e4091a1-2383-4121-a743-ef58bee4785d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20ee87502ddb4dde82419e1f4302f590', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=407aeab9-650a-4861-88f9-b68d79a26eca, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3508a2da-2572-4507-affc-3ad293e1e24b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:48 localhost ovn_controller[155966]: 2026-02-23T09:59:48Z|00296|binding|INFO|Removing lport 3508a2da-2572-4507-affc-3ad293e1e24b ovn-installed in OVS Feb 23 04:59:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:48.333 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 3508a2da-2572-4507-affc-3ad293e1e24b in datapath 6e4091a1-2383-4121-a743-ef58bee4785d unbound from our chassis#033[00m Feb 23 04:59:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:48.337 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6e4091a1-2383-4121-a743-ef58bee4785d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:59:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:48.339 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8fa414-f8aa-40cb-8d18-8c6a527f1cb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:59:48 
localhost nova_compute[280321]: 2026-02-23 09:59:48.375 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:48 localhost nova_compute[280321]: 2026-02-23 09:59:48.425 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:48 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:48.445 263679 INFO neutron.agent.linux.ip_lib [None req-e2354223-271c-4a21-bc34-c570f302e697 - - - - - -] Device tap9fd42a8d-58 cannot be used as it has no MAC address#033[00m Feb 23 04:59:48 localhost nova_compute[280321]: 2026-02-23 09:59:48.471 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:48 localhost kernel: device tap9fd42a8d-58 entered promiscuous mode Feb 23 04:59:48 localhost NetworkManager[5987]: [1771840788.4800] manager: (tap9fd42a8d-58): new Generic device (/org/freedesktop/NetworkManager/Devices/55) Feb 23 04:59:48 localhost ovn_controller[155966]: 2026-02-23T09:59:48Z|00297|binding|INFO|Claiming lport 9fd42a8d-5846-4f10-a290-5afe1dcd9eec for this chassis. Feb 23 04:59:48 localhost ovn_controller[155966]: 2026-02-23T09:59:48Z|00298|binding|INFO|9fd42a8d-5846-4f10-a290-5afe1dcd9eec: Claiming unknown Feb 23 04:59:48 localhost nova_compute[280321]: 2026-02-23 09:59:48.481 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:48 localhost systemd-udevd[319361]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 04:59:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:48.499 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-960d9096-c207-4d94-a5f7-8fee176384ea', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-960d9096-c207-4d94-a5f7-8fee176384ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51dc993993124a3f926af711e8b0f088', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a68f2a8-5a50-4333-9478-4317054e1fd4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9fd42a8d-5846-4f10-a290-5afe1dcd9eec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:48.502 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 9fd42a8d-5846-4f10-a290-5afe1dcd9eec in datapath 960d9096-c207-4d94-a5f7-8fee176384ea bound to our chassis#033[00m Feb 23 04:59:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:48.506 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 960d9096-c207-4d94-a5f7-8fee176384ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 04:59:48 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:48.508 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[0265b031-00cf-47cc-8b4b-744f2bb8dbe7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:59:48 localhost journal[229268]: ethtool ioctl error on tap9fd42a8d-58: No such device Feb 23 04:59:48 localhost dnsmasq[319284]: exiting on receipt of SIGTERM Feb 23 04:59:48 localhost journal[229268]: ethtool ioctl error on tap9fd42a8d-58: No such device Feb 23 04:59:48 localhost podman[319345]: 2026-02-23 09:59:48.521394075 +0000 UTC m=+0.087024450 container kill 590ca35cd7a93ff08eef66a3d5d61b20fe574055fad4fd5e819f0995c0eaf782 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e4091a1-2383-4121-a743-ef58bee4785d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:59:48 localhost systemd[1]: libpod-590ca35cd7a93ff08eef66a3d5d61b20fe574055fad4fd5e819f0995c0eaf782.scope: Deactivated successfully. 
Feb 23 04:59:48 localhost ovn_controller[155966]: 2026-02-23T09:59:48Z|00299|binding|INFO|Setting lport 9fd42a8d-5846-4f10-a290-5afe1dcd9eec ovn-installed in OVS Feb 23 04:59:48 localhost ovn_controller[155966]: 2026-02-23T09:59:48Z|00300|binding|INFO|Setting lport 9fd42a8d-5846-4f10-a290-5afe1dcd9eec up in Southbound Feb 23 04:59:48 localhost nova_compute[280321]: 2026-02-23 09:59:48.525 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:48 localhost journal[229268]: ethtool ioctl error on tap9fd42a8d-58: No such device Feb 23 04:59:48 localhost nova_compute[280321]: 2026-02-23 09:59:48.527 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:48 localhost journal[229268]: ethtool ioctl error on tap9fd42a8d-58: No such device Feb 23 04:59:48 localhost journal[229268]: ethtool ioctl error on tap9fd42a8d-58: No such device Feb 23 04:59:48 localhost journal[229268]: ethtool ioctl error on tap9fd42a8d-58: No such device Feb 23 04:59:48 localhost journal[229268]: ethtool ioctl error on tap9fd42a8d-58: No such device Feb 23 04:59:48 localhost journal[229268]: ethtool ioctl error on tap9fd42a8d-58: No such device Feb 23 04:59:48 localhost nova_compute[280321]: 2026-02-23 09:59:48.565 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:48 localhost nova_compute[280321]: 2026-02-23 09:59:48.595 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:48 localhost podman[319377]: 2026-02-23 09:59:48.616133877 +0000 UTC m=+0.065670176 container died 590ca35cd7a93ff08eef66a3d5d61b20fe574055fad4fd5e819f0995c0eaf782 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e4091a1-2383-4121-a743-ef58bee4785d, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:59:48 localhost systemd[1]: tmp-crun.XvGVNg.mount: Deactivated successfully. Feb 23 04:59:48 localhost podman[319377]: 2026-02-23 09:59:48.680702829 +0000 UTC m=+0.130239108 container remove 590ca35cd7a93ff08eef66a3d5d61b20fe574055fad4fd5e819f0995c0eaf782 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e4091a1-2383-4121-a743-ef58bee4785d, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0) Feb 23 04:59:48 localhost nova_compute[280321]: 2026-02-23 09:59:48.695 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:48 localhost kernel: device tap3508a2da-25 left promiscuous mode Feb 23 04:59:48 localhost nova_compute[280321]: 2026-02-23 09:59:48.708 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:48 localhost systemd[1]: libpod-conmon-590ca35cd7a93ff08eef66a3d5d61b20fe574055fad4fd5e819f0995c0eaf782.scope: Deactivated successfully. 
Feb 23 04:59:48 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:48.721 2 INFO neutron.agent.securitygroups_rpc [None req-48538392-c56b-4b21-9b40-cb72b13d2341 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:48 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:48.741 263679 INFO neutron.agent.dhcp.agent [None req-7453a33b-c16d-4d54-85e8-9f325a8a9a13 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:59:48 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:48.741 263679 INFO neutron.agent.dhcp.agent [None req-7453a33b-c16d-4d54-85e8-9f325a8a9a13 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 04:59:48 localhost nova_compute[280321]: 2026-02-23 09:59:48.989 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:49 localhost nova_compute[280321]: 2026-02-23 09:59:49.089 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v342: 177 pgs: 177 active+clean; 479 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 702 KiB/s rd, 26 MiB/s wr, 304 op/s Feb 23 04:59:49 localhost systemd[1]: var-lib-containers-storage-overlay-8070327831c45b90c990557110ffb97eae23ef1b414d417a67dffd7ea7f5ec10-merged.mount: Deactivated successfully. Feb 23 04:59:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-590ca35cd7a93ff08eef66a3d5d61b20fe574055fad4fd5e819f0995c0eaf782-userdata-shm.mount: Deactivated successfully. Feb 23 04:59:49 localhost systemd[1]: run-netns-qdhcp\x2d6e4091a1\x2d2383\x2d4121\x2da743\x2def58bee4785d.mount: Deactivated successfully. 
Feb 23 04:59:49 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:49.355 2 INFO neutron.agent.securitygroups_rpc [None req-fe77e4f6-f29f-4a3f-9227-290ccdc2ae2e 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:49 localhost podman[319460]: Feb 23 04:59:49 localhost podman[319460]: 2026-02-23 09:59:49.49713935 +0000 UTC m=+0.085262525 container create f972fdf17070a716a1074282a101dd8866e58395898d3e883b46360c950d92db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-960d9096-c207-4d94-a5f7-8fee176384ea, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:59:49 localhost systemd[1]: Started libpod-conmon-f972fdf17070a716a1074282a101dd8866e58395898d3e883b46360c950d92db.scope. Feb 23 04:59:49 localhost podman[319460]: 2026-02-23 09:59:49.45032703 +0000 UTC m=+0.038450205 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 04:59:49 localhost systemd[1]: Started libcrun container. 
Feb 23 04:59:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23399caa8c743db5a86fb23d3cbba8a16f538e83f795370bdf5b066155828b98/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 04:59:49 localhost podman[319460]: 2026-02-23 09:59:49.589724927 +0000 UTC m=+0.177848122 container init f972fdf17070a716a1074282a101dd8866e58395898d3e883b46360c950d92db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-960d9096-c207-4d94-a5f7-8fee176384ea, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0) Feb 23 04:59:49 localhost podman[319460]: 2026-02-23 09:59:49.599575428 +0000 UTC m=+0.187698623 container start f972fdf17070a716a1074282a101dd8866e58395898d3e883b46360c950d92db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-960d9096-c207-4d94-a5f7-8fee176384ea, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, tcib_managed=true) Feb 23 04:59:49 localhost dnsmasq[319478]: started, version 2.85 cachesize 150 Feb 23 04:59:49 localhost dnsmasq[319478]: DNS service limited to local subnets Feb 23 04:59:49 localhost dnsmasq[319478]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 04:59:49 localhost dnsmasq[319478]: warning: no upstream servers 
configured Feb 23 04:59:49 localhost dnsmasq-dhcp[319478]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 04:59:49 localhost dnsmasq[319478]: read /var/lib/neutron/dhcp/960d9096-c207-4d94-a5f7-8fee176384ea/addn_hosts - 0 addresses Feb 23 04:59:49 localhost dnsmasq-dhcp[319478]: read /var/lib/neutron/dhcp/960d9096-c207-4d94-a5f7-8fee176384ea/host Feb 23 04:59:49 localhost dnsmasq-dhcp[319478]: read /var/lib/neutron/dhcp/960d9096-c207-4d94-a5f7-8fee176384ea/opts Feb 23 04:59:49 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:49.811 263679 INFO neutron.agent.dhcp.agent [None req-03a5b52d-178a-414c-b2ca-3aea9c1d297d - - - - - -] DHCP configuration for ports {'d2318928-571e-48d8-ac32-8afc4f395f0a'} is completed#033[00m Feb 23 04:59:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:49.834 161842 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port df678711-a357-4fb6-8e5a-40cac0b6d25f with type ""#033[00m Feb 23 04:59:49 localhost ovn_controller[155966]: 2026-02-23T09:59:49Z|00301|binding|INFO|Removing iface tap9fd42a8d-58 ovn-installed in OVS Feb 23 04:59:49 localhost ovn_controller[155966]: 2026-02-23T09:59:49Z|00302|binding|INFO|Removing lport 9fd42a8d-5846-4f10-a290-5afe1dcd9eec ovn-installed in OVS Feb 23 04:59:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:49.837 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-960d9096-c207-4d94-a5f7-8fee176384ea', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-960d9096-c207-4d94-a5f7-8fee176384ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51dc993993124a3f926af711e8b0f088', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a68f2a8-5a50-4333-9478-4317054e1fd4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9fd42a8d-5846-4f10-a290-5afe1dcd9eec) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:49.839 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 9fd42a8d-5846-4f10-a290-5afe1dcd9eec in datapath 960d9096-c207-4d94-a5f7-8fee176384ea unbound from our chassis#033[00m Feb 23 04:59:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:49.842 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 960d9096-c207-4d94-a5f7-8fee176384ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:59:49 localhost nova_compute[280321]: 2026-02-23 09:59:49.875 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:49 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:49.843 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[fee782c5-fa9b-484f-a540-24f31c408588]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:59:49 localhost nova_compute[280321]: 2026-02-23 09:59:49.881 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:49 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 04:59:49 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/144846476' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 04:59:49 localhost kernel: device tap9fd42a8d-58 left promiscuous mode Feb 23 04:59:49 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 04:59:49 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/144846476' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 04:59:49 localhost nova_compute[280321]: 2026-02-23 09:59:49.898 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:50 localhost dnsmasq[319478]: read /var/lib/neutron/dhcp/960d9096-c207-4d94-a5f7-8fee176384ea/addn_hosts - 0 addresses Feb 23 04:59:50 localhost dnsmasq-dhcp[319478]: read /var/lib/neutron/dhcp/960d9096-c207-4d94-a5f7-8fee176384ea/host Feb 23 04:59:50 localhost podman[319498]: 2026-02-23 09:59:50.308571438 +0000 UTC m=+0.062097937 container kill f972fdf17070a716a1074282a101dd8866e58395898d3e883b46360c950d92db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-960d9096-c207-4d94-a5f7-8fee176384ea, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 04:59:50 
localhost dnsmasq-dhcp[319478]: read /var/lib/neutron/dhcp/960d9096-c207-4d94-a5f7-8fee176384ea/opts Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent [None req-bfe6e4a4-6af1-495b-a297-47457c22a0bf - - - - - -] Unable to reload_allocations dhcp for 960d9096-c207-4d94-a5f7-8fee176384ea.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap9fd42a8d-58 not found in namespace qdhcp-960d9096-c207-4d94-a5f7-8fee176384ea. Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 
1610, in _set_default_route Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in 
wrapped_f Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent return fut.result() Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent return self.__get_result() Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent raise self._exception Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent File 
"/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap9fd42a8d-58 not found in namespace qdhcp-960d9096-c207-4d94-a5f7-8fee176384ea. Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.335 263679 ERROR neutron.agent.dhcp.agent #033[00m Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.340 263679 INFO neutron.agent.dhcp.agent [None req-20fd52a2-7161-4890-a0c0-7c4ab9e9d17f - - - - - -] Synchronizing state#033[00m Feb 23 04:59:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:59:50 localhost nova_compute[280321]: 2026-02-23 09:59:50.506 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:50.599 263679 INFO neutron.agent.dhcp.agent [None req-407eb9dd-2a0d-4b2e-8716-d97363530a59 - - - - - -] All active networks have been fetched through RPC.#033[00m Feb 23 04:59:50 localhost dnsmasq[319478]: exiting on receipt of SIGTERM Feb 23 04:59:50 localhost systemd[1]: tmp-crun.F2Diem.mount: Deactivated successfully. 
Feb 23 04:59:50 localhost podman[319528]: 2026-02-23 09:59:50.781396557 +0000 UTC m=+0.063711076 container kill f972fdf17070a716a1074282a101dd8866e58395898d3e883b46360c950d92db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-960d9096-c207-4d94-a5f7-8fee176384ea, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 04:59:50 localhost systemd[1]: libpod-f972fdf17070a716a1074282a101dd8866e58395898d3e883b46360c950d92db.scope: Deactivated successfully. Feb 23 04:59:50 localhost podman[319542]: 2026-02-23 09:59:50.845622538 +0000 UTC m=+0.049944326 container died f972fdf17070a716a1074282a101dd8866e58395898d3e883b46360c950d92db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-960d9096-c207-4d94-a5f7-8fee176384ea, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 04:59:50 localhost systemd[1]: tmp-crun.hPMaqF.mount: Deactivated successfully. 
Feb 23 04:59:50 localhost podman[319542]: 2026-02-23 09:59:50.931865312 +0000 UTC m=+0.136187050 container cleanup f972fdf17070a716a1074282a101dd8866e58395898d3e883b46360c950d92db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-960d9096-c207-4d94-a5f7-8fee176384ea, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 23 04:59:50 localhost systemd[1]: libpod-conmon-f972fdf17070a716a1074282a101dd8866e58395898d3e883b46360c950d92db.scope: Deactivated successfully. Feb 23 04:59:50 localhost podman[319544]: 2026-02-23 09:59:50.957647249 +0000 UTC m=+0.151133956 container remove f972fdf17070a716a1074282a101dd8866e58395898d3e883b46360c950d92db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-960d9096-c207-4d94-a5f7-8fee176384ea, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 23 04:59:51 localhost neutron_dhcp_agent[263675]: 2026-02-23 09:59:51.009 263679 INFO neutron.agent.dhcp.agent [None req-cd9c4f35-90b9-4ddf-ba1f-75ccd84f0f6c - - - - - -] Synchronizing state complete#033[00m Feb 23 04:59:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v343: 177 pgs: 177 active+clean; 543 MiB data, 1.8 GiB used, 40 GiB / 42 GiB avail; 632 KiB/s rd, 31 MiB/s wr, 330 op/s Feb 23 04:59:51 localhost systemd[1]: 
var-lib-containers-storage-overlay-23399caa8c743db5a86fb23d3cbba8a16f538e83f795370bdf5b066155828b98-merged.mount: Deactivated successfully. Feb 23 04:59:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f972fdf17070a716a1074282a101dd8866e58395898d3e883b46360c950d92db-userdata-shm.mount: Deactivated successfully. Feb 23 04:59:51 localhost systemd[1]: run-netns-qdhcp\x2d960d9096\x2dc207\x2d4d94\x2da5f7\x2d8fee176384ea.mount: Deactivated successfully. Feb 23 04:59:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 04:59:51 localhost podman[319574]: 2026-02-23 09:59:51.423853876 +0000 UTC m=+0.083486211 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 04:59:51 localhost podman[319574]: 2026-02-23 09:59:51.434763789 +0000 UTC m=+0.094396144 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 04:59:51 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 04:59:52 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e164 e164: 6 total, 6 up, 6 in Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0. 
Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:52.249580) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40 Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840792249660, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 552, "num_deletes": 255, "total_data_size": 503428, "memory_usage": 513592, "flush_reason": "Manual Compaction"} Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840792255652, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 328652, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25563, "largest_seqno": 26110, "table_properties": {"data_size": 325799, "index_size": 836, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7459, "raw_average_key_size": 20, "raw_value_size": 319910, "raw_average_value_size": 878, "num_data_blocks": 36, "num_entries": 364, "num_filter_entries": 364, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840777, "oldest_key_time": 1771840777, "file_creation_time": 1771840792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}} Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 6103 microseconds, and 2008 cpu microseconds. Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:52.255689) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 328652 bytes OK Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:52.255711) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:52.258602) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:52.258625) EVENT_LOG_v1 {"time_micros": 1771840792258619, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:52.258650) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 500137, prev total WAL file size 500137, number of 
live WAL files 2. Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:52.259243) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. '7061786F73003132333030' seq:0, type:0; will stop at (end) Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(320KB)], [39(17MB)] Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840792259292, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 18704430, "oldest_snapshot_seqno": -1} Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 12833 keys, 17422724 bytes, temperature: kUnknown Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840792341402, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 17422724, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17347305, "index_size": 42230, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32133, "raw_key_size": 344271, "raw_average_key_size": 26, "raw_value_size": 17126478, 
"raw_average_value_size": 1334, "num_data_blocks": 1604, "num_entries": 12833, "num_filter_entries": 12833, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771840792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}} Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:52.341665) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 17422724 bytes Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:52.343331) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 227.5 rd, 212.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 17.5 +0.0 blob) out(16.6 +0.0 blob), read-write-amplify(109.9) write-amplify(53.0) OK, records in: 13358, records dropped: 525 output_compression: NoCompression Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:52.343350) EVENT_LOG_v1 {"time_micros": 1771840792343342, "job": 22, "event": "compaction_finished", "compaction_time_micros": 82201, "compaction_time_cpu_micros": 51543, "output_level": 6, "num_output_files": 1, "total_output_size": 17422724, "num_input_records": 13358, "num_output_records": 12833, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840792343521, "job": 22, "event": "table_file_deletion", "file_number": 41} Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840792345325, 
"job": 22, "event": "table_file_deletion", "file_number": 39} Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:52.259143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:52.345483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:52.345493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:52.345497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:52.345500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:59:52 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:52.345503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:59:53 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:53.169 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:79:d0 2001:db8:0:1:f816:3eff:fe4b:79d0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe4b:79d0/64', 'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 
'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8f6a957-d664-4eea-abeb-3145dfe24d16, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f9f417b1-97ff-4889-a004-ebc80afbc25a) old=Port_Binding(mac=['fa:16:3e:4b:79:d0 2001:db8::f816:3eff:fe4b:79d0'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe4b:79d0/64', 'neutron:device_id': 'ovnmeta-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c73ae202-1b92-44f9-b55c-a6eba0a348b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bed3d4ae9fd4fe7b9440b4587246c14', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 04:59:53 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:53.171 161842 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f9f417b1-97ff-4889-a004-ebc80afbc25a in datapath c73ae202-1b92-44f9-b55c-a6eba0a348b0 updated#033[00m Feb 23 04:59:53 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:53.174 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c73ae202-1b92-44f9-b55c-a6eba0a348b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 04:59:53 localhost ovn_metadata_agent[161837]: 2026-02-23 09:59:53.175 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[3c22056b-9c98-4d9e-bae8-b9e0233d8f81]: (4, 
False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 04:59:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v345: 177 pgs: 177 active+clean; 615 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 126 KiB/s rd, 22 MiB/s wr, 173 op/s Feb 23 04:59:53 localhost nova_compute[280321]: 2026-02-23 09:59:53.464 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:53 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:53.709 2 INFO neutron.agent.securitygroups_rpc [None req-e07daaf7-bd51-4e1d-9776-48aaf7c6d99d 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:54 localhost nova_compute[280321]: 2026-02-23 09:59:54.090 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:54 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:54.178 2 INFO neutron.agent.securitygroups_rpc [None req-0c26025a-3ebd-4293-969f-53f7049344a7 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 04:59:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 23 04:59:55 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3080484500' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 23 04:59:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v346: 177 pgs: 177 active+clean; 615 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 97 KiB/s rd, 17 MiB/s wr, 133 op/s Feb 23 04:59:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e165 e165: 6 total, 6 up, 6 in Feb 23 04:59:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 04:59:56 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e166 e166: 6 total, 6 up, 6 in Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0. Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:57.187100) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43 Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840797187159, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 335, "num_deletes": 250, "total_data_size": 147711, "memory_usage": 153648, "flush_reason": "Manual Compaction"} Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840797190257, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 96107, "file_checksum": "", "file_checksum_func_name": "Unknown", 
"smallest_seqno": 26115, "largest_seqno": 26445, "table_properties": {"data_size": 94000, "index_size": 282, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 6006, "raw_average_key_size": 20, "raw_value_size": 89689, "raw_average_value_size": 307, "num_data_blocks": 12, "num_entries": 292, "num_filter_entries": 292, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840792, "oldest_key_time": 1771840792, "file_creation_time": 1771840797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}} Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 3206 microseconds, and 1199 cpu microseconds. Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:57.190306) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 96107 bytes OK Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:57.190335) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:57.192581) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:57.192614) EVENT_LOG_v1 {"time_micros": 1771840797192605, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:57.192643) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 145368, prev total WAL file size 145692, number of live WAL files 2. Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:57.194542) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303033' seq:72057594037927935, type:22 .. 
'6D6772737461740034323534' seq:0, type:0; will stop at (end) Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(93KB)], [42(16MB)] Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840797194602, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 17518831, "oldest_snapshot_seqno": -1} Feb 23 04:59:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v349: 177 pgs: 177 active+clean; 751 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 198 KiB/s rd, 35 MiB/s wr, 263 op/s Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 12613 keys, 15414046 bytes, temperature: kUnknown Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840797270769, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 15414046, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15344941, "index_size": 36492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31557, "raw_key_size": 339949, "raw_average_key_size": 26, "raw_value_size": 15132743, "raw_average_value_size": 1199, "num_data_blocks": 1365, "num_entries": 12613, "num_filter_entries": 12613, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771840797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}} Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:57.271091) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 15414046 bytes Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:57.272543) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 229.7 rd, 202.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 16.6 +0.0 blob) out(14.7 +0.0 blob), read-write-amplify(342.7) write-amplify(160.4) OK, records in: 13125, records dropped: 512 output_compression: NoCompression Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:57.272571) EVENT_LOG_v1 {"time_micros": 1771840797272559, "job": 24, "event": "compaction_finished", "compaction_time_micros": 76277, "compaction_time_cpu_micros": 47770, "output_level": 6, "num_output_files": 1, "total_output_size": 15414046, "num_input_records": 13125, "num_output_records": 12613, "num_subcompactions": 1, "output_compression": "NoCompression", 
"num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840797272743, "job": 24, "event": "table_file_deletion", "file_number": 44} Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840797275299, "job": 24, "event": "table_file_deletion", "file_number": 42} Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:57.194374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:57.275372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:57.275381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:57.275384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:57.275387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 04:59:57 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-09:59:57.275391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting 
Feb 23 04:59:57 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e167 e167: 6 total, 6 up, 6 in Feb 23 04:59:57 localhost neutron_sriov_agent[256355]: 2026-02-23 09:59:57.936 2 INFO neutron.agent.securitygroups_rpc [None req-d119b339-8a48-47e8-bd68-4b4fb753cf46 92730c8dc08c46ec9f30a1ded731d654 b7501fe3a8904b43b875ec99452354a0 - - default default] Security group rule updated ['9373b311-126d-4dcc-ae54-c4b5d87c2dd5']#033[00m Feb 23 04:59:58 localhost nova_compute[280321]: 2026-02-23 09:59:58.469 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:59 localhost nova_compute[280321]: 2026-02-23 09:59:59.117 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 04:59:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v351: 177 pgs: 177 active+clean; 751 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 131 KiB/s rd, 23 MiB/s wr, 178 op/s Feb 23 04:59:59 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 23 04:59:59 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1283440830' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 23 05:00:00 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:00.075 2 INFO neutron.agent.securitygroups_rpc [None req-d445cf08-2b11-449c-8097-70ad611d5c5d 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 05:00:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 05:00:00 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:00.807 2 INFO neutron.agent.securitygroups_rpc [None req-f716d86f-e353-494e-a028-e9ccf623762c 982b83c89c37422a910f5359ef7b6ea5 5bed3d4ae9fd4fe7b9440b4587246c14 - - default default] Security group member updated ['3f91b09d-b6ac-403f-adaf-7a684ee36fe5']#033[00m Feb 23 05:00:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v352: 177 pgs: 177 active+clean; 807 MiB data, 2.6 GiB used, 39 GiB / 42 GiB avail; 160 KiB/s rd, 32 MiB/s wr, 225 op/s Feb 23 05:00:01 localhost ceph-mon[296755]: overall HEALTH_OK Feb 23 05:00:01 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e168 e168: 6 total, 6 up, 6 in Feb 23 05:00:01 localhost openstack_network_exporter[243519]: ERROR 10:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 05:00:01 localhost openstack_network_exporter[243519]: Feb 23 05:00:01 localhost openstack_network_exporter[243519]: ERROR 10:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 05:00:01 localhost openstack_network_exporter[243519]: Feb 23 05:00:02 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e169 e169: 6 total, 6 up, 6 in Feb 23 05:00:02 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:02.383 263679 INFO neutron.agent.linux.ip_lib [None 
req-bbac4efc-cf3e-4bf9-9ef4-c64da920307a - - - - - -] Device tap3acae74f-cc cannot be used as it has no MAC address#033[00m Feb 23 05:00:02 localhost nova_compute[280321]: 2026-02-23 10:00:02.404 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:02 localhost kernel: device tap3acae74f-cc entered promiscuous mode Feb 23 05:00:02 localhost NetworkManager[5987]: [1771840802.4134] manager: (tap3acae74f-cc): new Generic device (/org/freedesktop/NetworkManager/Devices/56) Feb 23 05:00:02 localhost nova_compute[280321]: 2026-02-23 10:00:02.413 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:02 localhost ovn_controller[155966]: 2026-02-23T10:00:02Z|00303|binding|INFO|Claiming lport 3acae74f-ccfd-432f-9154-5c957a1f9bdc for this chassis. Feb 23 05:00:02 localhost ovn_controller[155966]: 2026-02-23T10:00:02Z|00304|binding|INFO|3acae74f-ccfd-432f-9154-5c957a1f9bdc: Claiming unknown Feb 23 05:00:02 localhost systemd-udevd[319607]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 05:00:02 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:02.428 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-8537eb5c-c116-4a5f-a640-fe4bf0d7b378', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8537eb5c-c116-4a5f-a640-fe4bf0d7b378', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51dc993993124a3f926af711e8b0f088', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c2b012-5bd6-4f71-a34b-40d65965fb9b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3acae74f-ccfd-432f-9154-5c957a1f9bdc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:00:02 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:02.431 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 3acae74f-ccfd-432f-9154-5c957a1f9bdc in datapath 8537eb5c-c116-4a5f-a640-fe4bf0d7b378 bound to our chassis#033[00m Feb 23 05:00:02 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:02.433 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8537eb5c-c116-4a5f-a640-fe4bf0d7b378 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:00:02 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:02.434 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[ba3ce83d-f7c6-4c8d-8a74-3fd48b0e4d2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:00:02 localhost journal[229268]: ethtool ioctl error on tap3acae74f-cc: No such device Feb 23 05:00:02 localhost ovn_controller[155966]: 2026-02-23T10:00:02Z|00305|binding|INFO|Setting lport 3acae74f-ccfd-432f-9154-5c957a1f9bdc ovn-installed in OVS Feb 23 05:00:02 localhost ovn_controller[155966]: 2026-02-23T10:00:02Z|00306|binding|INFO|Setting lport 3acae74f-ccfd-432f-9154-5c957a1f9bdc up in Southbound Feb 23 05:00:02 localhost journal[229268]: ethtool ioctl error on tap3acae74f-cc: No such device Feb 23 05:00:02 localhost nova_compute[280321]: 2026-02-23 10:00:02.453 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:02 localhost journal[229268]: ethtool ioctl error on tap3acae74f-cc: No such device Feb 23 05:00:02 localhost journal[229268]: ethtool ioctl error on tap3acae74f-cc: No such device Feb 23 05:00:02 localhost journal[229268]: ethtool ioctl error on tap3acae74f-cc: No such device Feb 23 05:00:02 localhost journal[229268]: ethtool ioctl error on tap3acae74f-cc: No such device Feb 23 05:00:02 localhost journal[229268]: ethtool ioctl error on tap3acae74f-cc: No such device Feb 23 05:00:02 localhost journal[229268]: ethtool ioctl error on tap3acae74f-cc: No such device Feb 23 05:00:02 localhost nova_compute[280321]: 2026-02-23 10:00:02.502 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:02 localhost nova_compute[280321]: 2026-02-23 10:00:02.536 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:02 localhost nova_compute[280321]: 2026-02-23 10:00:02.676 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:00:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v355: 177 pgs: 177 active+clean; 855 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 74 KiB/s rd, 17 MiB/s wr, 103 op/s Feb 23 05:00:03 localhost nova_compute[280321]: 2026-02-23 10:00:03.496 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:03 localhost podman[319678]: Feb 23 05:00:03 localhost podman[319678]: 2026-02-23 10:00:03.549262875 +0000 UTC m=+0.123162142 container create eae95c5bc21071dc7f142c0b241e44fad4744556f40e1a420a062d0edd69a47d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8537eb5c-c116-4a5f-a640-fe4bf0d7b378, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0) Feb 23 05:00:03 localhost systemd[1]: Started libpod-conmon-eae95c5bc21071dc7f142c0b241e44fad4744556f40e1a420a062d0edd69a47d.scope. Feb 23 05:00:03 localhost podman[319678]: 2026-02-23 10:00:03.502846467 +0000 UTC m=+0.076745774 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:00:03 localhost systemd[1]: Started libcrun container. 
Feb 23 05:00:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddb79d246db357053601e9a12a7b7879b028b79e125bdd803644ba9c0924ee3e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 23 05:00:03 localhost podman[319678]: 2026-02-23 10:00:03.633562829 +0000 UTC m=+0.207462126 container init eae95c5bc21071dc7f142c0b241e44fad4744556f40e1a420a062d0edd69a47d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8537eb5c-c116-4a5f-a640-fe4bf0d7b378, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 23 05:00:03 localhost podman[319678]: 2026-02-23 10:00:03.643457672 +0000 UTC m=+0.217356949 container start eae95c5bc21071dc7f142c0b241e44fad4744556f40e1a420a062d0edd69a47d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8537eb5c-c116-4a5f-a640-fe4bf0d7b378, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216)
Feb 23 05:00:03 localhost dnsmasq[319696]: started, version 2.85 cachesize 150
Feb 23 05:00:03 localhost dnsmasq[319696]: DNS service limited to local subnets
Feb 23 05:00:03 localhost dnsmasq[319696]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 23 05:00:03 localhost dnsmasq[319696]: warning: no upstream servers configured
Feb 23 05:00:03 localhost dnsmasq-dhcp[319696]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 23 05:00:03 localhost dnsmasq[319696]: read /var/lib/neutron/dhcp/8537eb5c-c116-4a5f-a640-fe4bf0d7b378/addn_hosts - 0 addresses
Feb 23 05:00:03 localhost dnsmasq-dhcp[319696]: read /var/lib/neutron/dhcp/8537eb5c-c116-4a5f-a640-fe4bf0d7b378/host
Feb 23 05:00:03 localhost dnsmasq-dhcp[319696]: read /var/lib/neutron/dhcp/8537eb5c-c116-4a5f-a640-fe4bf0d7b378/opts
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.106 263679 INFO neutron.agent.dhcp.agent [None req-ac55045b-44c3-4815-9ba0-3410ba0d6233 - - - - - -] DHCP configuration for ports {'972a6e29-5c14-4877-b019-fa007f221cc9'} is completed#033[00m
Feb 23 05:00:04 localhost nova_compute[280321]: 2026-02-23 10:00:04.118 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 23 05:00:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/252796869' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 23 05:00:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 23 05:00:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/252796869' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 23 05:00:04 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2daba99f-25c4-4b16-a8ea-ce269d15600b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:00:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2daba99f-25c4-4b16-a8ea-ce269d15600b, vol_name:cephfs) < ""
Feb 23 05:00:04 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:04.196+0000 7fc3ba4ad640 -1 client.0 error registering admin socket command: (17) File exists
Feb 23 05:00:04 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists
Feb 23 05:00:04 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:04.196+0000 7fc3ba4ad640 -1 client.0 error registering admin socket command: (17) File exists
Feb 23 05:00:04 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists
Feb 23 05:00:04 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:04.196+0000 7fc3ba4ad640 -1 client.0 error registering admin socket command: (17) File exists
Feb 23 05:00:04 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists
Feb 23 05:00:04 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:04.196+0000 7fc3ba4ad640 -1 client.0 error registering admin socket command: (17) File exists
Feb 23 05:00:04 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists
Feb 23 05:00:04 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:04.196+0000 7fc3ba4ad640 -1 client.0 error registering admin socket command: (17) File exists
Feb 23 05:00:04 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists
Feb 23 05:00:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e170 e170: 6 total, 6 up, 6 in
Feb 23 05:00:04 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2daba99f-25c4-4b16-a8ea-ce269d15600b/.meta.tmp'
Feb 23 05:00:04 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2daba99f-25c4-4b16-a8ea-ce269d15600b/.meta.tmp' to config b'/volumes/_nogroup/2daba99f-25c4-4b16-a8ea-ce269d15600b/.meta'
Feb 23 05:00:04 localhost ovn_controller[155966]: 2026-02-23T10:00:04Z|00307|binding|INFO|Removing iface tap3acae74f-cc ovn-installed in OVS
Feb 23 05:00:04 localhost ovn_controller[155966]: 2026-02-23T10:00:04Z|00308|binding|INFO|Removing lport 3acae74f-ccfd-432f-9154-5c957a1f9bdc ovn-installed in OVS
Feb 23 05:00:04 localhost nova_compute[280321]: 2026-02-23 10:00:04.369 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:04 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:04.369 161842 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 9fa6a72c-9595-4f85-8af8-e1f0b544f0e8 with type ""#033[00m
Feb 23 05:00:04 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:04.371 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-8537eb5c-c116-4a5f-a640-fe4bf0d7b378', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8537eb5c-c116-4a5f-a640-fe4bf0d7b378', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51dc993993124a3f926af711e8b0f088', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c2b012-5bd6-4f71-a34b-40d65965fb9b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3acae74f-ccfd-432f-9154-5c957a1f9bdc) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 23 05:00:04 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:04.373 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 3acae74f-ccfd-432f-9154-5c957a1f9bdc in datapath 8537eb5c-c116-4a5f-a640-fe4bf0d7b378 unbound from our chassis#033[00m
Feb 23 05:00:04 localhost kernel: device tap3acae74f-cc left promiscuous mode
Feb 23 05:00:04 localhost nova_compute[280321]: 2026-02-23 10:00:04.374 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2daba99f-25c4-4b16-a8ea-ce269d15600b, vol_name:cephfs) < ""
Feb 23 05:00:04 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:04.377 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8537eb5c-c116-4a5f-a640-fe4bf0d7b378, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 23 05:00:04 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:04.378 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[6a831020-0fee-4d0f-88bd-f348bae6c41e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 23 05:00:04 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2daba99f-25c4-4b16-a8ea-ce269d15600b", "format": "json"}]: dispatch
Feb 23 05:00:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2daba99f-25c4-4b16-a8ea-ce269d15600b, vol_name:cephfs) < ""
Feb 23 05:00:04 localhost nova_compute[280321]: 2026-02-23 10:00:04.387 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2daba99f-25c4-4b16-a8ea-ce269d15600b, vol_name:cephfs) < ""
Feb 23 05:00:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.
Feb 23 05:00:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.
Feb 23 05:00:04 localhost podman[319713]: 2026-02-23 10:00:04.523232057 +0000 UTC m=+0.089991219 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 23 05:00:04 localhost podman[319713]: 2026-02-23 10:00:04.530650673 +0000 UTC m=+0.097409805 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 23 05:00:04 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully.
Feb 23 05:00:04 localhost podman[319714]: 2026-02-23 10:00:04.583569469 +0000 UTC m=+0.143954996 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_id=openstack_network_exporter, architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 23 05:00:04 localhost podman[319714]: 2026-02-23 10:00:04.597779533 +0000 UTC m=+0.158165020 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.7, release=1770267347)
Feb 23 05:00:04 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully.
Feb 23 05:00:04 localhost dnsmasq[319696]: read /var/lib/neutron/dhcp/8537eb5c-c116-4a5f-a640-fe4bf0d7b378/addn_hosts - 0 addresses
Feb 23 05:00:04 localhost dnsmasq-dhcp[319696]: read /var/lib/neutron/dhcp/8537eb5c-c116-4a5f-a640-fe4bf0d7b378/host
Feb 23 05:00:04 localhost dnsmasq-dhcp[319696]: read /var/lib/neutron/dhcp/8537eb5c-c116-4a5f-a640-fe4bf0d7b378/opts
Feb 23 05:00:04 localhost podman[319772]: 2026-02-23 10:00:04.821541576 +0000 UTC m=+0.055824336 container kill eae95c5bc21071dc7f142c0b241e44fad4744556f40e1a420a062d0edd69a47d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8537eb5c-c116-4a5f-a640-fe4bf0d7b378, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent [None req-8b455bfb-5bee-4a70-94f0-410f628132b2 - - - - - -] Unable to reload_allocations dhcp for 8537eb5c-c116-4a5f-a640-fe4bf0d7b378.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap3acae74f-cc not found in namespace qdhcp-8537eb5c-c116-4a5f-a640-fe4bf0d7b378.
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent     return fut.result()
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent     raise self._exception
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap3acae74f-cc not found in namespace qdhcp-8537eb5c-c116-4a5f-a640-fe4bf0d7b378.
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.842 263679 ERROR neutron.agent.dhcp.agent #033[00m
Feb 23 05:00:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:04.845 263679 INFO neutron.agent.dhcp.agent [None req-cd9c4f35-90b9-4ddf-ba1f-75ccd84f0f6c - - - - - -] Synchronizing state#033[00m
Feb 23 05:00:05 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:05.006 263679 INFO neutron.agent.dhcp.agent [None req-ab5c4ec5-b088-4ecc-9aa6-03fcdc1f7725 - - - - - -] All active networks have been fetched through RPC.#033[00m
Feb 23 05:00:05 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:05.007 263679 INFO neutron.agent.dhcp.agent [-] Starting network 8537eb5c-c116-4a5f-a640-fe4bf0d7b378 dhcp configuration#033[00m
Feb 23 05:00:05 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:05.007 263679 INFO neutron.agent.dhcp.agent [-] Finished network 8537eb5c-c116-4a5f-a640-fe4bf0d7b378 dhcp configuration#033[00m
Feb 23 05:00:05 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:05.008 263679 INFO neutron.agent.dhcp.agent [None req-ab5c4ec5-b088-4ecc-9aa6-03fcdc1f7725 - - - - - -] Synchronizing state complete#033[00m
Feb 23 05:00:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_10:00:05
Feb 23 05:00:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 23 05:00:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap
Feb 23 05:00:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['backups', '.mgr', 'manila_data', 'volumes', 'vms', 'manila_metadata', 'images']
Feb 23 05:00:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes
Feb 23 05:00:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections..
Feb 23 05:00:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: []
Feb 23 05:00:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections..
Feb 23 05:00:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: []
Feb 23 05:00:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections..
Feb 23 05:00:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: []
Feb 23 05:00:05 localhost nova_compute[280321]: 2026-02-23 10:00:05.189 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v357: 177 pgs: 177 active+clean; 855 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 74 KiB/s rd, 17 MiB/s wr, 103 op/s
Feb 23 05:00:05 localhost dnsmasq[319696]: exiting on receipt of SIGTERM
Feb 23 05:00:05 localhost podman[319801]: 2026-02-23 10:00:05.277998635 +0000 UTC m=+0.066003027 container kill eae95c5bc21071dc7f142c0b241e44fad4744556f40e1a420a062d0edd69a47d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8537eb5c-c116-4a5f-a640-fe4bf0d7b378, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0)
Feb 23 05:00:05 localhost systemd[1]: libpod-eae95c5bc21071dc7f142c0b241e44fad4744556f40e1a420a062d0edd69a47d.scope: Deactivated successfully.
Feb 23 05:00:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust
Feb 23 05:00:05 localhost podman[319813]: 2026-02-23 10:00:05.336966875 +0000 UTC m=+0.047889473 container died eae95c5bc21071dc7f142c0b241e44fad4744556f40e1a420a062d0edd69a47d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8537eb5c-c116-4a5f-a640-fe4bf0d7b378, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 23 05:00:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:00:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Feb 23 05:00:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:00:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006590297407174763 of space, bias 1.0, pg target 1.3180594814349524 quantized to 32 (current 32)
Feb 23 05:00:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:00:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014867450679322538 of space, bias 1.0, pg target 0.29586226851851855 quantized to 32 (current 32)
Feb 23 05:00:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:00:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.044500933486878835 of space, bias 1.0, pg target 8.855685763888888 quantized to 32 (current 32)
Feb 23 05:00:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:00:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.00025945337218499906 quantized to 32 (current 32)
Feb 23 05:00:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:00:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 23 05:00:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:00:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0018680642797319934 quantized to 16 (current 16)
Feb 23 05:00:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 23 05:00:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 23 05:00:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 23 05:00:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 23 05:00:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 23 05:00:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 23 05:00:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 23 05:00:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 23 05:00:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 23 05:00:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 23 05:00:05 localhost podman[319813]: 2026-02-23 10:00:05.371892662 +0000 UTC m=+0.082815230 container cleanup eae95c5bc21071dc7f142c0b241e44fad4744556f40e1a420a062d0edd69a47d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8537eb5c-c116-4a5f-a640-fe4bf0d7b378, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216)
Feb 23 05:00:05 localhost systemd[1]: libpod-conmon-eae95c5bc21071dc7f142c0b241e44fad4744556f40e1a420a062d0edd69a47d.scope: Deactivated successfully.
Feb 23 05:00:05 localhost podman[319820]: 2026-02-23 10:00:05.392193412 +0000 UTC m=+0.089344009 container remove eae95c5bc21071dc7f142c0b241e44fad4744556f40e1a420a062d0edd69a47d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8537eb5c-c116-4a5f-a640-fe4bf0d7b378, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216)
Feb 23 05:00:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 23 05:00:05 localhost systemd[1]: var-lib-containers-storage-overlay-ddb79d246db357053601e9a12a7b7879b028b79e125bdd803644ba9c0924ee3e-merged.mount: Deactivated successfully.
Feb 23 05:00:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eae95c5bc21071dc7f142c0b241e44fad4744556f40e1a420a062d0edd69a47d-userdata-shm.mount: Deactivated successfully. Feb 23 05:00:05 localhost systemd[1]: run-netns-qdhcp\x2d8537eb5c\x2dc116\x2d4a5f\x2da640\x2dfe4bf0d7b378.mount: Deactivated successfully. Feb 23 05:00:06 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:06.868 263679 INFO neutron.agent.linux.ip_lib [None req-7e0c6ffd-4257-4808-9e66-3dd048dc9552 - - - - - -] Device tap3ad609ea-74 cannot be used as it has no MAC address#033[00m Feb 23 05:00:06 localhost nova_compute[280321]: 2026-02-23 10:00:06.895 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:06 localhost kernel: device tap3ad609ea-74 entered promiscuous mode Feb 23 05:00:06 localhost NetworkManager[5987]: [1771840806.9038] manager: (tap3ad609ea-74): new Generic device (/org/freedesktop/NetworkManager/Devices/57) Feb 23 05:00:06 localhost nova_compute[280321]: 2026-02-23 10:00:06.903 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:06 localhost ovn_controller[155966]: 2026-02-23T10:00:06Z|00309|binding|INFO|Claiming lport 3ad609ea-7485-4004-89d7-c1e1046731e6 for this chassis. Feb 23 05:00:06 localhost ovn_controller[155966]: 2026-02-23T10:00:06Z|00310|binding|INFO|3ad609ea-7485-4004-89d7-c1e1046731e6: Claiming unknown Feb 23 05:00:06 localhost systemd-udevd[319851]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 05:00:06 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:06.917 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-fdca5942-453c-4af4-a169-02aced21c508', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fdca5942-453c-4af4-a169-02aced21c508', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51dc993993124a3f926af711e8b0f088', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f13bbd5-87be-417e-803e-932b2fe133a1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3ad609ea-7485-4004-89d7-c1e1046731e6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:00:06 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:06.919 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 3ad609ea-7485-4004-89d7-c1e1046731e6 in datapath fdca5942-453c-4af4-a169-02aced21c508 bound to our chassis#033[00m Feb 23 05:00:06 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:06.920 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fdca5942-453c-4af4-a169-02aced21c508 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:00:06 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:06.921 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[850ef8a8-7b95-44aa-b885-68bec762736c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:00:06 localhost journal[229268]: ethtool ioctl error on tap3ad609ea-74: No such device Feb 23 05:00:06 localhost journal[229268]: ethtool ioctl error on tap3ad609ea-74: No such device Feb 23 05:00:06 localhost ovn_controller[155966]: 2026-02-23T10:00:06Z|00311|binding|INFO|Setting lport 3ad609ea-7485-4004-89d7-c1e1046731e6 ovn-installed in OVS Feb 23 05:00:06 localhost ovn_controller[155966]: 2026-02-23T10:00:06Z|00312|binding|INFO|Setting lport 3ad609ea-7485-4004-89d7-c1e1046731e6 up in Southbound Feb 23 05:00:06 localhost journal[229268]: ethtool ioctl error on tap3ad609ea-74: No such device Feb 23 05:00:06 localhost nova_compute[280321]: 2026-02-23 10:00:06.949 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:06 localhost journal[229268]: ethtool ioctl error on tap3ad609ea-74: No such device Feb 23 05:00:06 localhost journal[229268]: ethtool ioctl error on tap3ad609ea-74: No such device Feb 23 05:00:06 localhost journal[229268]: ethtool ioctl error on tap3ad609ea-74: No such device Feb 23 05:00:06 localhost journal[229268]: ethtool ioctl error on tap3ad609ea-74: No such device Feb 23 05:00:06 localhost journal[229268]: ethtool ioctl error on tap3ad609ea-74: No such device Feb 23 05:00:06 localhost nova_compute[280321]: 2026-02-23 10:00:06.983 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:07 localhost nova_compute[280321]: 2026-02-23 10:00:07.017 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 991 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 112 KiB/s rd, 31 MiB/s wr, 155 op/s Feb 23 05:00:07 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e171 e171: 6 total, 6 up, 6 in Feb 23 05:00:07 localhost podman[319922]: Feb 23 05:00:07 localhost podman[319922]: 2026-02-23 10:00:07.989130453 +0000 UTC m=+0.091377142 container create e91bbf5b2002b9f2ff8e1ba170efb05428456db243c5939ffaa60ce65ce211f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca5942-453c-4af4-a169-02aced21c508, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 05:00:08 localhost systemd[1]: Started libpod-conmon-e91bbf5b2002b9f2ff8e1ba170efb05428456db243c5939ffaa60ce65ce211f3.scope. Feb 23 05:00:08 localhost podman[319922]: 2026-02-23 10:00:07.946319936 +0000 UTC m=+0.048566685 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:00:08 localhost systemd[1]: Started libcrun container. 
Feb 23 05:00:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ae9a05c4013c16c9f754f006ccfd61484430f1bdbd547368221101d31ce272/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:00:08 localhost podman[319922]: 2026-02-23 10:00:08.063184974 +0000 UTC m=+0.165431663 container init e91bbf5b2002b9f2ff8e1ba170efb05428456db243c5939ffaa60ce65ce211f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca5942-453c-4af4-a169-02aced21c508, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:00:08 localhost podman[319922]: 2026-02-23 10:00:08.072776687 +0000 UTC m=+0.175023376 container start e91bbf5b2002b9f2ff8e1ba170efb05428456db243c5939ffaa60ce65ce211f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca5942-453c-4af4-a169-02aced21c508, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:00:08 localhost dnsmasq[319941]: started, version 2.85 cachesize 150 Feb 23 05:00:08 localhost dnsmasq[319941]: DNS service limited to local subnets Feb 23 05:00:08 localhost dnsmasq[319941]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:00:08 localhost dnsmasq[319941]: warning: no upstream servers 
configured Feb 23 05:00:08 localhost dnsmasq-dhcp[319941]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 05:00:08 localhost dnsmasq[319941]: read /var/lib/neutron/dhcp/fdca5942-453c-4af4-a169-02aced21c508/addn_hosts - 0 addresses Feb 23 05:00:08 localhost dnsmasq-dhcp[319941]: read /var/lib/neutron/dhcp/fdca5942-453c-4af4-a169-02aced21c508/host Feb 23 05:00:08 localhost dnsmasq-dhcp[319941]: read /var/lib/neutron/dhcp/fdca5942-453c-4af4-a169-02aced21c508/opts Feb 23 05:00:08 localhost ovn_controller[155966]: 2026-02-23T10:00:08Z|00313|binding|INFO|Removing iface tap3ad609ea-74 ovn-installed in OVS Feb 23 05:00:08 localhost ovn_controller[155966]: 2026-02-23T10:00:08Z|00314|binding|INFO|Removing lport 3ad609ea-7485-4004-89d7-c1e1046731e6 ovn-installed in OVS Feb 23 05:00:08 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:08.150 161842 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 3f5823f2-661d-48c3-9128-a69d272dda02 with type ""#033[00m Feb 23 05:00:08 localhost nova_compute[280321]: 2026-02-23 10:00:08.152 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:08 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:08.153 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-fdca5942-453c-4af4-a169-02aced21c508', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fdca5942-453c-4af4-a169-02aced21c508', 
'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51dc993993124a3f926af711e8b0f088', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f13bbd5-87be-417e-803e-932b2fe133a1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3ad609ea-7485-4004-89d7-c1e1046731e6) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:00:08 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:08.156 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 3ad609ea-7485-4004-89d7-c1e1046731e6 in datapath fdca5942-453c-4af4-a169-02aced21c508 unbound from our chassis#033[00m Feb 23 05:00:08 localhost nova_compute[280321]: 2026-02-23 10:00:08.160 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:08 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:08.160 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fdca5942-453c-4af4-a169-02aced21c508, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 05:00:08 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:08.161 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f185bb-6d8d-4f0e-b725-febb9bea1839]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:00:08 localhost nova_compute[280321]: 2026-02-23 10:00:08.166 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:08 localhost kernel: device 
tap3ad609ea-74 left promiscuous mode Feb 23 05:00:08 localhost nova_compute[280321]: 2026-02-23 10:00:08.183 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.210 263679 INFO neutron.agent.dhcp.agent [None req-61c1084c-c61a-4e51-afd9-13961611c996 - - - - - -] DHCP configuration for ports {'95b1393c-49b0-470a-8035-9f9501dfbfd0'} is completed#033[00m Feb 23 05:00:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e172 e172: 6 total, 6 up, 6 in Feb 23 05:00:08 localhost nova_compute[280321]: 2026-02-23 10:00:08.499 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:08 localhost dnsmasq[319941]: read /var/lib/neutron/dhcp/fdca5942-453c-4af4-a169-02aced21c508/addn_hosts - 0 addresses Feb 23 05:00:08 localhost dnsmasq-dhcp[319941]: read /var/lib/neutron/dhcp/fdca5942-453c-4af4-a169-02aced21c508/host Feb 23 05:00:08 localhost podman[319961]: 2026-02-23 10:00:08.57653976 +0000 UTC m=+0.061356784 container kill e91bbf5b2002b9f2ff8e1ba170efb05428456db243c5939ffaa60ce65ce211f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca5942-453c-4af4-a169-02aced21c508, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:00:08 localhost dnsmasq-dhcp[319941]: read /var/lib/neutron/dhcp/fdca5942-453c-4af4-a169-02aced21c508/opts Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent [None 
req-476f968d-4a98-47bf-a97e-27e62ebb00d1 - - - - - -] Unable to reload_allocations dhcp for fdca5942-453c-4af4-a169-02aced21c508.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap3ad609ea-74 not found in namespace qdhcp-fdca5942-453c-4af4-a169-02aced21c508. Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Feb 23 05:00:08 localhost 
neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 
263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent return fut.result() Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent return self.__get_result() Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent raise self._exception Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, 
kwargs, Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap3ad609ea-74 not found in namespace qdhcp-fdca5942-453c-4af4-a169-02aced21c508. Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.603 263679 ERROR neutron.agent.dhcp.agent #033[00m Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.607 263679 INFO neutron.agent.dhcp.agent [None req-ab5c4ec5-b088-4ecc-9aa6-03fcdc1f7725 - - - - - -] Synchronizing state#033[00m Feb 23 05:00:08 localhost nova_compute[280321]: 2026-02-23 10:00:08.766 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 05:00:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:08.954 263679 INFO neutron.agent.dhcp.agent [None req-f001210c-c4ad-4de7-86f3-4785a0ca4ce8 - - - - - -] All active networks have been fetched through RPC.#033[00m Feb 23 05:00:09 localhost podman[319974]: 2026-02-23 10:00:09.024373565 +0000 UTC m=+0.089744582 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0) Feb 23 05:00:09 localhost podman[319974]: 2026-02-23 10:00:09.093202127 +0000 UTC m=+0.158573184 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.43.0, config_id=ovn_controller, container_name=ovn_controller) Feb 23 05:00:09 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 05:00:09 localhost nova_compute[280321]: 2026-02-23 10:00:09.121 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:09 localhost dnsmasq[319941]: exiting on receipt of SIGTERM Feb 23 05:00:09 localhost podman[320013]: 2026-02-23 10:00:09.162742991 +0000 UTC m=+0.067456671 container kill e91bbf5b2002b9f2ff8e1ba170efb05428456db243c5939ffaa60ce65ce211f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca5942-453c-4af4-a169-02aced21c508, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:00:09 localhost systemd[1]: libpod-e91bbf5b2002b9f2ff8e1ba170efb05428456db243c5939ffaa60ce65ce211f3.scope: Deactivated successfully. 
Feb 23 05:00:09 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2daba99f-25c4-4b16-a8ea-ce269d15600b", "format": "json"}]: dispatch Feb 23 05:00:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2daba99f-25c4-4b16-a8ea-ce269d15600b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:00:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2daba99f-25c4-4b16-a8ea-ce269d15600b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:00:09 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2daba99f-25c4-4b16-a8ea-ce269d15600b' of type subvolume Feb 23 05:00:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:09.210+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2daba99f-25c4-4b16-a8ea-ce269d15600b' of type subvolume Feb 23 05:00:09 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2daba99f-25c4-4b16-a8ea-ce269d15600b", "force": true, "format": "json"}]: dispatch Feb 23 05:00:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2daba99f-25c4-4b16-a8ea-ce269d15600b, vol_name:cephfs) < "" Feb 23 05:00:09 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2daba99f-25c4-4b16-a8ea-ce269d15600b'' moved to trashcan Feb 23 05:00:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v361: 177 pgs: 177 
active+clean; 991 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 66 KiB/s rd, 23 MiB/s wr, 98 op/s Feb 23 05:00:09 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:00:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2daba99f-25c4-4b16-a8ea-ce269d15600b, vol_name:cephfs) < "" Feb 23 05:00:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:09.242+0000 7fc3bbcb0640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:09.242+0000 7fc3bbcb0640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:09.242+0000 7fc3bbcb0640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:09.242+0000 7fc3bbcb0640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:09.242+0000 7fc3bbcb0640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-mgr[285904]: client.0 error 
registering admin socket command: (17) File exists Feb 23 05:00:09 localhost podman[320026]: 2026-02-23 10:00:09.250238842 +0000 UTC m=+0.074452975 container died e91bbf5b2002b9f2ff8e1ba170efb05428456db243c5939ffaa60ce65ce211f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca5942-453c-4af4-a169-02aced21c508, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:00:09 localhost podman[320026]: 2026-02-23 10:00:09.290382718 +0000 UTC m=+0.114596791 container cleanup e91bbf5b2002b9f2ff8e1ba170efb05428456db243c5939ffaa60ce65ce211f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca5942-453c-4af4-a169-02aced21c508, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 05:00:09 localhost systemd[1]: libpod-conmon-e91bbf5b2002b9f2ff8e1ba170efb05428456db243c5939ffaa60ce65ce211f3.scope: Deactivated successfully. 
Feb 23 05:00:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:09.300+0000 7fc3bc4b1640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:09.300+0000 7fc3bc4b1640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:09.300+0000 7fc3bc4b1640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:09.300+0000 7fc3bc4b1640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:09.300+0000 7fc3bc4b1640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:00:09 localhost podman[320028]: 2026-02-23 10:00:09.351926057 +0000 UTC m=+0.166683190 container remove e91bbf5b2002b9f2ff8e1ba170efb05428456db243c5939ffaa60ce65ce211f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca5942-453c-4af4-a169-02aced21c508, org.label-schema.vendor=CentOS, 
io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 23 05:00:09 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:09.512 263679 INFO neutron.agent.dhcp.agent [None req-fe50001b-f928-47a9-ab3f-e62a316b0967 - - - - - -] Synchronizing state complete#033[00m Feb 23 05:00:09 localhost systemd[1]: var-lib-containers-storage-overlay-44ae9a05c4013c16c9f754f006ccfd61484430f1bdbd547368221101d31ce272-merged.mount: Deactivated successfully. Feb 23 05:00:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e91bbf5b2002b9f2ff8e1ba170efb05428456db243c5939ffaa60ce65ce211f3-userdata-shm.mount: Deactivated successfully. Feb 23 05:00:09 localhost systemd[1]: run-netns-qdhcp\x2dfdca5942\x2d453c\x2d4af4\x2da169\x2d02aced21c508.mount: Deactivated successfully. 
Feb 23 05:00:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 23 05:00:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e173 e173: 6 total, 6 up, 6 in Feb 23 05:00:11 localhost nova_compute[280321]: 2026-02-23 10:00:11.170 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:11 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:11.170 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:00:11 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:11.172 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 05:00:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v363: 177 pgs: 177 active+clean; 1.0 GiB data, 3.4 GiB used, 39 GiB / 42 GiB avail; 82 KiB/s rd, 35 MiB/s wr, 130 op/s Feb 23 05:00:11 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e174 e174: 6 total, 6 up, 6 in Feb 23 05:00:12 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e175 e175: 6 total, 6 up, 6 in Feb 23 05:00:12 localhost podman[241086]: time="2026-02-23T10:00:12Z" level=info msg="List containers: received `last` parameter 
- overwriting `limit`" Feb 23 05:00:12 localhost podman[241086]: @ - - [23/Feb/2026:10:00:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1" Feb 23 05:00:12 localhost podman[241086]: @ - - [23/Feb/2026:10:00:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18295 "" "Go-http-client/1.1" Feb 23 05:00:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v366: 177 pgs: 177 active+clean; 1.1 GiB data, 3.5 GiB used, 38 GiB / 42 GiB avail; 108 KiB/s rd, 25 MiB/s wr, 153 op/s Feb 23 05:00:13 localhost nova_compute[280321]: 2026-02-23 10:00:13.501 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:13 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:00:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:00:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' Feb 23 05:00:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta' Feb 23 05:00:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:00:13 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "format": "json"}]: dispatch Feb 23 05:00:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:00:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:00:14 localhost nova_compute[280321]: 2026-02-23 10:00:14.161 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:14 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:14.274 263679 INFO neutron.agent.linux.ip_lib [None req-ccdcaddd-1b42-4a08-a0c4-72d7446ac00c - - - - - -] Device tap9c772b01-65 cannot be used as it has no MAC address#033[00m Feb 23 05:00:14 localhost nova_compute[280321]: 2026-02-23 10:00:14.298 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:14 localhost kernel: device tap9c772b01-65 entered promiscuous mode Feb 23 05:00:14 localhost ovn_controller[155966]: 2026-02-23T10:00:14Z|00315|binding|INFO|Claiming lport 9c772b01-651d-4d4f-85b4-45f4d5f3294e for this chassis. 
Feb 23 05:00:14 localhost ovn_controller[155966]: 2026-02-23T10:00:14Z|00316|binding|INFO|9c772b01-651d-4d4f-85b4-45f4d5f3294e: Claiming unknown Feb 23 05:00:14 localhost nova_compute[280321]: 2026-02-23 10:00:14.305 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:14 localhost NetworkManager[5987]: [1771840814.3084] manager: (tap9c772b01-65): new Generic device (/org/freedesktop/NetworkManager/Devices/58) Feb 23 05:00:14 localhost systemd-udevd[320090]: Network interface NamePolicy= disabled on kernel command line. Feb 23 05:00:14 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:14.319 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-59df2e79-780a-4c1a-b58d-b81964586353', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-59df2e79-780a-4c1a-b58d-b81964586353', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51dc993993124a3f926af711e8b0f088', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0f1a107-5df6-443a-837c-f1a0dff45380, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9c772b01-651d-4d4f-85b4-45f4d5f3294e) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:00:14 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:14.321 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 9c772b01-651d-4d4f-85b4-45f4d5f3294e in datapath 59df2e79-780a-4c1a-b58d-b81964586353 bound to our chassis#033[00m Feb 23 05:00:14 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:14.325 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 59df2e79-780a-4c1a-b58d-b81964586353 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:00:14 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:14.326 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[a806a68d-923e-4b70-bf55-6784db91aea9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:00:14 localhost ovn_controller[155966]: 2026-02-23T10:00:14Z|00317|binding|INFO|Setting lport 9c772b01-651d-4d4f-85b4-45f4d5f3294e ovn-installed in OVS Feb 23 05:00:14 localhost ovn_controller[155966]: 2026-02-23T10:00:14Z|00318|binding|INFO|Setting lport 9c772b01-651d-4d4f-85b4-45f4d5f3294e up in Southbound Feb 23 05:00:14 localhost nova_compute[280321]: 2026-02-23 10:00:14.354 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:14 localhost nova_compute[280321]: 2026-02-23 10:00:14.399 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:14 localhost nova_compute[280321]: 2026-02-23 10:00:14.436 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:14 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd 
e176 e176: 6 total, 6 up, 6 in Feb 23 05:00:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v368: 177 pgs: 177 active+clean; 1.1 GiB data, 3.5 GiB used, 38 GiB / 42 GiB avail; 115 KiB/s rd, 26 MiB/s wr, 163 op/s Feb 23 05:00:15 localhost podman[320143]: Feb 23 05:00:15 localhost podman[320143]: 2026-02-23 10:00:15.372130402 +0000 UTC m=+0.103118730 container create 8a122e6d05eeefe8aa6a49b9d260bf13867ce1ca3eb7a58b0e886894577031b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-59df2e79-780a-4c1a-b58d-b81964586353, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:00:15 localhost systemd[1]: Started libpod-conmon-8a122e6d05eeefe8aa6a49b9d260bf13867ce1ca3eb7a58b0e886894577031b1.scope. Feb 23 05:00:15 localhost systemd[1]: Started libcrun container. 
Feb 23 05:00:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4259b9c485ec536a5728e50632c28765de5ddbd34a287eeb5dd043111308ff1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:00:15 localhost podman[320143]: 2026-02-23 10:00:15.326130648 +0000 UTC m=+0.057119056 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:00:15 localhost podman[320143]: 2026-02-23 10:00:15.434618811 +0000 UTC m=+0.165607189 container init 8a122e6d05eeefe8aa6a49b9d260bf13867ce1ca3eb7a58b0e886894577031b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-59df2e79-780a-4c1a-b58d-b81964586353, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true) Feb 23 05:00:15 localhost podman[320143]: 2026-02-23 10:00:15.441715257 +0000 UTC m=+0.172703645 container start 8a122e6d05eeefe8aa6a49b9d260bf13867ce1ca3eb7a58b0e886894577031b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-59df2e79-780a-4c1a-b58d-b81964586353, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216) Feb 23 05:00:15 localhost dnsmasq[320161]: started, version 2.85 cachesize 150 Feb 23 05:00:15 localhost dnsmasq[320161]: DNS service limited to local subnets Feb 23 05:00:15 localhost dnsmasq[320161]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:00:15 localhost dnsmasq[320161]: warning: no upstream servers configured Feb 23 05:00:15 localhost dnsmasq-dhcp[320161]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 05:00:15 localhost dnsmasq[320161]: read /var/lib/neutron/dhcp/59df2e79-780a-4c1a-b58d-b81964586353/addn_hosts - 0 addresses Feb 23 05:00:15 localhost dnsmasq-dhcp[320161]: read /var/lib/neutron/dhcp/59df2e79-780a-4c1a-b58d-b81964586353/host Feb 23 05:00:15 localhost dnsmasq-dhcp[320161]: read /var/lib/neutron/dhcp/59df2e79-780a-4c1a-b58d-b81964586353/opts Feb 23 05:00:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:00:15 localhost ovn_controller[155966]: 2026-02-23T10:00:15Z|00319|binding|INFO|Removing iface tap9c772b01-65 ovn-installed in OVS Feb 23 05:00:15 localhost ovn_controller[155966]: 2026-02-23T10:00:15Z|00320|binding|INFO|Removing lport 9c772b01-651d-4d4f-85b4-45f4d5f3294e ovn-installed in OVS Feb 23 05:00:15 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:15.659 161842 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port ff4d5c8b-77bc-4180-afa5-d603f87d9833 with type ""#033[00m Feb 23 05:00:15 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:15.661 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 
'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-59df2e79-780a-4c1a-b58d-b81964586353', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-59df2e79-780a-4c1a-b58d-b81964586353', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51dc993993124a3f926af711e8b0f088', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0f1a107-5df6-443a-837c-f1a0dff45380, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9c772b01-651d-4d4f-85b4-45f4d5f3294e) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:00:15 localhost nova_compute[280321]: 2026-02-23 10:00:15.660 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:15 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:15.662 263679 INFO neutron.agent.dhcp.agent [None req-447af384-8692-46ab-9c94-cf7dfbb5a7bd - - - - - -] DHCP configuration for ports {'4c5586d6-c64e-4072-a936-3d36d5148db7'} is completed#033[00m Feb 23 05:00:15 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:15.665 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 9c772b01-651d-4d4f-85b4-45f4d5f3294e in datapath 59df2e79-780a-4c1a-b58d-b81964586353 unbound from our chassis#033[00m Feb 23 05:00:15 localhost nova_compute[280321]: 2026-02-23 10:00:15.666 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:15 localhost kernel: device tap9c772b01-65 left promiscuous mode Feb 23 05:00:15 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:15.668 161842 DEBUG 
neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 59df2e79-780a-4c1a-b58d-b81964586353, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 05:00:15 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:15.669 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[32b9e0e0-53ff-404c-bfaf-ce1567c5f53c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:00:15 localhost nova_compute[280321]: 2026-02-23 10:00:15.684 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e177 e177: 6 total, 6 up, 6 in Feb 23 05:00:16 localhost dnsmasq[320161]: read /var/lib/neutron/dhcp/59df2e79-780a-4c1a-b58d-b81964586353/addn_hosts - 0 addresses Feb 23 05:00:16 localhost dnsmasq-dhcp[320161]: read /var/lib/neutron/dhcp/59df2e79-780a-4c1a-b58d-b81964586353/host Feb 23 05:00:16 localhost dnsmasq-dhcp[320161]: read /var/lib/neutron/dhcp/59df2e79-780a-4c1a-b58d-b81964586353/opts Feb 23 05:00:16 localhost podman[320179]: 2026-02-23 10:00:16.16619204 +0000 UTC m=+0.064533621 container kill 8a122e6d05eeefe8aa6a49b9d260bf13867ce1ca3eb7a58b0e886894577031b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-59df2e79-780a-4c1a-b58d-b81964586353, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0) Feb 23 05:00:16 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:16.174 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running 
txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent [None req-f84b15af-bac5-4365-a306-6b9282225592 - - - - - -] Unable to reload_allocations dhcp for 59df2e79-780a-4c1a-b58d-b81964586353.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap9c772b01-65 not found in namespace qdhcp-59df2e79-780a-4c1a-b58d-b81964586353. Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent self._set_default_route(network, 
device_name) Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, 
device=device, Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent return fut.result() Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent return self.__get_result() Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent raise self._exception Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 
263679 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap9c772b01-65 not found in namespace qdhcp-59df2e79-780a-4c1a-b58d-b81964586353. 
Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.194 263679 ERROR neutron.agent.dhcp.agent #033[00m Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.199 263679 INFO neutron.agent.dhcp.agent [None req-fe50001b-f928-47a9-ab3f-e62a316b0967 - - - - - -] Synchronizing state#033[00m Feb 23 05:00:16 localhost nova_compute[280321]: 2026-02-23 10:00:16.309 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.345 263679 INFO neutron.agent.dhcp.agent [None req-46d932af-02af-4f29-b038-7ab32bbeb506 - - - - - -] All active networks have been fetched through RPC.#033[00m Feb 23 05:00:16 localhost dnsmasq[320161]: exiting on receipt of SIGTERM Feb 23 05:00:16 localhost podman[320209]: 2026-02-23 10:00:16.517062484 +0000 UTC m=+0.051922457 container kill 8a122e6d05eeefe8aa6a49b9d260bf13867ce1ca3eb7a58b0e886894577031b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-59df2e79-780a-4c1a-b58d-b81964586353, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2) Feb 23 05:00:16 localhost systemd[1]: libpod-8a122e6d05eeefe8aa6a49b9d260bf13867ce1ca3eb7a58b0e886894577031b1.scope: Deactivated successfully. 
Feb 23 05:00:16 localhost podman[320222]: 2026-02-23 10:00:16.592038084 +0000 UTC m=+0.059805528 container died 8a122e6d05eeefe8aa6a49b9d260bf13867ce1ca3eb7a58b0e886894577031b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-59df2e79-780a-4c1a-b58d-b81964586353, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true) Feb 23 05:00:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a122e6d05eeefe8aa6a49b9d260bf13867ce1ca3eb7a58b0e886894577031b1-userdata-shm.mount: Deactivated successfully. Feb 23 05:00:16 localhost podman[320222]: 2026-02-23 10:00:16.652959395 +0000 UTC m=+0.120726759 container cleanup 8a122e6d05eeefe8aa6a49b9d260bf13867ce1ca3eb7a58b0e886894577031b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-59df2e79-780a-4c1a-b58d-b81964586353, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS) Feb 23 05:00:16 localhost systemd[1]: libpod-conmon-8a122e6d05eeefe8aa6a49b9d260bf13867ce1ca3eb7a58b0e886894577031b1.scope: Deactivated successfully. 
Feb 23 05:00:16 localhost podman[320228]: 2026-02-23 10:00:16.733145673 +0000 UTC m=+0.188811757 container remove 8a122e6d05eeefe8aa6a49b9d260bf13867ce1ca3eb7a58b0e886894577031b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-59df2e79-780a-4c1a-b58d-b81964586353, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 05:00:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:16.764 263679 INFO neutron.agent.dhcp.agent [None req-a0cbd40c-2813-4e06-8b27-950ce67716f4 - - - - - -] Synchronizing state complete#033[00m Feb 23 05:00:17 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fcbab7a3-8c51-4c59-a723-d901625bf91c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:00:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fcbab7a3-8c51-4c59-a723-d901625bf91c, vol_name:cephfs) < "" Feb 23 05:00:17 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e178 e178: 6 total, 6 up, 6 in Feb 23 05:00:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 05:00:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v371: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 94 KiB/s rd, 24 MiB/s wr, 141 op/s Feb 23 05:00:17 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fcbab7a3-8c51-4c59-a723-d901625bf91c/.meta.tmp' Feb 23 05:00:17 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fcbab7a3-8c51-4c59-a723-d901625bf91c/.meta.tmp' to config b'/volumes/_nogroup/fcbab7a3-8c51-4c59-a723-d901625bf91c/.meta' Feb 23 05:00:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fcbab7a3-8c51-4c59-a723-d901625bf91c, vol_name:cephfs) < "" Feb 23 05:00:17 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fcbab7a3-8c51-4c59-a723-d901625bf91c", "format": "json"}]: dispatch Feb 23 05:00:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fcbab7a3-8c51-4c59-a723-d901625bf91c, vol_name:cephfs) < "" Feb 23 05:00:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fcbab7a3-8c51-4c59-a723-d901625bf91c, vol_name:cephfs) < "" Feb 23 05:00:17 localhost podman[320246]: 2026-02-23 10:00:17.265391686 +0000 UTC m=+0.083898103 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, 
org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 23 05:00:17 localhost podman[320246]: 2026-02-23 10:00:17.283076465 +0000 UTC m=+0.101582852 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, 
io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:00:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 05:00:17 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. 
Feb 23 05:00:17 localhost systemd[1]: var-lib-containers-storage-overlay-e4259b9c485ec536a5728e50632c28765de5ddbd34a287eeb5dd043111308ff1-merged.mount: Deactivated successfully. Feb 23 05:00:17 localhost systemd[1]: run-netns-qdhcp\x2d59df2e79\x2d780a\x2d4c1a\x2db58d\x2db81964586353.mount: Deactivated successfully. Feb 23 05:00:17 localhost systemd[1]: tmp-crun.l1BB0m.mount: Deactivated successfully. Feb 23 05:00:17 localhost podman[320266]: 2026-02-23 10:00:17.416359996 +0000 UTC m=+0.102289924 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 23 05:00:17 localhost podman[320266]: 2026-02-23 10:00:17.455061198 +0000 UTC m=+0.140991126 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:00:17 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 05:00:18 localhost nova_compute[280321]: 2026-02-23 10:00:18.553 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:19 localhost nova_compute[280321]: 2026-02-23 10:00:19.162 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:19 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e179 e179: 6 total, 6 up, 6 in Feb 23 05:00:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v373: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 109 KiB/s rd, 27 MiB/s wr, 163 op/s Feb 23 05:00:19 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:19.947 2 INFO neutron.agent.securitygroups_rpc [req-eeab04f9-52db-422e-9728-ee390003483c req-72446ef4-1fda-4ad3-9026-f16aa7abb40d f49fd8b6937445efab40892d03b375d7 0421515e6bb54dea8db3ed218999e195 - - default default] Security group member updated ['c46df023-9a3e-4c54-a0bb-44b675220af4']#033[00m Feb 23 05:00:20 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:00:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] 
Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098, vol_name:cephfs) < "" Feb 23 05:00:20 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098/.meta.tmp' Feb 23 05:00:20 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098/.meta.tmp' to config b'/volumes/_nogroup/ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098/.meta' Feb 23 05:00:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098, vol_name:cephfs) < "" Feb 23 05:00:20 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098", "format": "json"}]: dispatch Feb 23 05:00:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098, vol_name:cephfs) < "" Feb 23 05:00:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098, vol_name:cephfs) < "" Feb 23 05:00:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:00:20 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": 
"cephfs", "sub_name": "fcbab7a3-8c51-4c59-a723-d901625bf91c", "new_size": 2147483648, "format": "json"}]: dispatch Feb 23 05:00:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:fcbab7a3-8c51-4c59-a723-d901625bf91c, vol_name:cephfs) < "" Feb 23 05:00:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:fcbab7a3-8c51-4c59-a723-d901625bf91c, vol_name:cephfs) < "" Feb 23 05:00:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v374: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 114 KiB/s rd, 27 MiB/s wr, 175 op/s Feb 23 05:00:21 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e180 e180: 6 total, 6 up, 6 in Feb 23 05:00:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 05:00:22 localhost podman[320284]: 2026-02-23 10:00:22.001296035 +0000 UTC m=+0.073410653 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 05:00:22 localhost podman[320284]: 2026-02-23 10:00:22.010001661 +0000 UTC m=+0.082116299 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 05:00:22 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 05:00:22 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e181 e181: 6 total, 6 up, 6 in Feb 23 05:00:22 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:22.437 263679 INFO neutron.agent.linux.ip_lib [None req-a227a534-8766-482e-a08e-dfe937fc7069 - - - - - -] Device tapd0f4560b-25 cannot be used as it has no MAC address#033[00m Feb 23 05:00:22 localhost nova_compute[280321]: 2026-02-23 10:00:22.500 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:22 localhost kernel: device tapd0f4560b-25 entered promiscuous mode Feb 23 05:00:22 localhost NetworkManager[5987]: [1771840822.5108] manager: (tapd0f4560b-25): new Generic device (/org/freedesktop/NetworkManager/Devices/59) Feb 23 05:00:22 localhost systemd-udevd[320317]: Network interface NamePolicy= disabled on kernel command line. Feb 23 05:00:22 localhost ovn_controller[155966]: 2026-02-23T10:00:22Z|00321|binding|INFO|Claiming lport d0f4560b-25bf-480a-ae13-0d272b3e40b7 for this chassis. 
Feb 23 05:00:22 localhost ovn_controller[155966]: 2026-02-23T10:00:22Z|00322|binding|INFO|d0f4560b-25bf-480a-ae13-0d272b3e40b7: Claiming unknown Feb 23 05:00:22 localhost nova_compute[280321]: 2026-02-23 10:00:22.516 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:22 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:22.527 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-e8ea045c-a16c-464d-96bc-2be2a05ee7cc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8ea045c-a16c-464d-96bc-2be2a05ee7cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51dc993993124a3f926af711e8b0f088', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b594cd25-c249-43a7-a2e2-d4efb5cabef9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d0f4560b-25bf-480a-ae13-0d272b3e40b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:00:22 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:22.533 161842 INFO neutron.agent.ovn.metadata.agent [-] Port d0f4560b-25bf-480a-ae13-0d272b3e40b7 in datapath 
e8ea045c-a16c-464d-96bc-2be2a05ee7cc bound to our chassis#033[00m Feb 23 05:00:22 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:22.535 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e8ea045c-a16c-464d-96bc-2be2a05ee7cc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:00:22 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:22.536 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[0895addc-9bc5-4636-a7c6-fcd2be39fe85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:00:22 localhost ovn_controller[155966]: 2026-02-23T10:00:22Z|00323|binding|INFO|Setting lport d0f4560b-25bf-480a-ae13-0d272b3e40b7 ovn-installed in OVS Feb 23 05:00:22 localhost ovn_controller[155966]: 2026-02-23T10:00:22Z|00324|binding|INFO|Setting lport d0f4560b-25bf-480a-ae13-0d272b3e40b7 up in Southbound Feb 23 05:00:22 localhost nova_compute[280321]: 2026-02-23 10:00:22.555 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:22 localhost nova_compute[280321]: 2026-02-23 10:00:22.587 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:22 localhost nova_compute[280321]: 2026-02-23 10:00:22.615 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:22 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 05:00:22 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/869925231' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 05:00:22 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 05:00:22 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/869925231' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 05:00:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v377: 177 pgs: 177 active+clean; 640 MiB data, 3.5 GiB used, 38 GiB / 42 GiB avail; 124 KiB/s rd, 11 MiB/s wr, 191 op/s Feb 23 05:00:23 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e182 e182: 6 total, 6 up, 6 in Feb 23 05:00:23 localhost podman[320409]: Feb 23 05:00:23 localhost podman[320409]: 2026-02-23 10:00:23.461933457 +0000 UTC m=+0.098791227 container create 5ce49e1200661f95e72124c42b9ae508fc9050018409ea7ee5c4c76891a02b52 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e8ea045c-a16c-464d-96bc-2be2a05ee7cc, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 23 05:00:23 localhost systemd[1]: Started libpod-conmon-5ce49e1200661f95e72124c42b9ae508fc9050018409ea7ee5c4c76891a02b52.scope. Feb 23 05:00:23 localhost podman[320409]: 2026-02-23 10:00:23.417132139 +0000 UTC m=+0.053989969 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:00:23 localhost systemd[1]: Started libcrun container. 
Feb 23 05:00:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f07b2c224da54135c9c2f749293ccb29e3fb18092b407e9af888c9af5d4ca809/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:00:23 localhost podman[320409]: 2026-02-23 10:00:23.548382578 +0000 UTC m=+0.185240378 container init 5ce49e1200661f95e72124c42b9ae508fc9050018409ea7ee5c4c76891a02b52 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e8ea045c-a16c-464d-96bc-2be2a05ee7cc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:00:23 localhost podman[320409]: 2026-02-23 10:00:23.558470196 +0000 UTC m=+0.195327996 container start 5ce49e1200661f95e72124c42b9ae508fc9050018409ea7ee5c4c76891a02b52 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e8ea045c-a16c-464d-96bc-2be2a05ee7cc, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 23 05:00:23 localhost dnsmasq[320444]: started, version 2.85 cachesize 150 Feb 23 05:00:23 localhost dnsmasq[320444]: DNS service limited to local subnets Feb 23 05:00:23 localhost dnsmasq[320444]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:00:23 localhost dnsmasq[320444]: warning: no upstream servers 
configured
Feb 23 05:00:23 localhost dnsmasq-dhcp[320444]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 23 05:00:23 localhost dnsmasq[320444]: read /var/lib/neutron/dhcp/e8ea045c-a16c-464d-96bc-2be2a05ee7cc/addn_hosts - 0 addresses
Feb 23 05:00:23 localhost dnsmasq-dhcp[320444]: read /var/lib/neutron/dhcp/e8ea045c-a16c-464d-96bc-2be2a05ee7cc/host
Feb 23 05:00:23 localhost dnsmasq-dhcp[320444]: read /var/lib/neutron/dhcp/e8ea045c-a16c-464d-96bc-2be2a05ee7cc/opts
Feb 23 05:00:23 localhost nova_compute[280321]: 2026-02-23 10:00:23.587 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:23 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee848905-323b-4447-944b-9bd735c2e380", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:00:23 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee848905-323b-4447-944b-9bd735c2e380, vol_name:cephfs) < ""
Feb 23 05:00:23 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:23.772 263679 INFO neutron.agent.dhcp.agent [None req-f7a4eed9-6197-4beb-8f5e-c50aea28ae8a - - - - - -] DHCP configuration for ports {'1c2a776a-5f16-4d62-a02f-1546325c1e41'} is completed#033[00m
Feb 23 05:00:23 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee848905-323b-4447-944b-9bd735c2e380/.meta.tmp'
Feb 23 05:00:23 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee848905-323b-4447-944b-9bd735c2e380/.meta.tmp' to config b'/volumes/_nogroup/ee848905-323b-4447-944b-9bd735c2e380/.meta'
Feb 23 05:00:23 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee848905-323b-4447-944b-9bd735c2e380, vol_name:cephfs) < ""
Feb 23 05:00:23 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee848905-323b-4447-944b-9bd735c2e380", "format": "json"}]: dispatch
Feb 23 05:00:23 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee848905-323b-4447-944b-9bd735c2e380, vol_name:cephfs) < ""
Feb 23 05:00:23 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee848905-323b-4447-944b-9bd735c2e380, vol_name:cephfs) < ""
Feb 23 05:00:23 localhost nova_compute[280321]: 2026-02-23 10:00:23.913 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:00:23 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 05:00:23 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 05:00:23 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 23 05:00:23 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 05:00:23 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 23 05:00:23 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev 59cf342d-cbe9-4bce-93e4-ec09fd7496cc (Updating node-proxy deployment (+3 -> 3))
Feb 23 05:00:23 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev 59cf342d-cbe9-4bce-93e4-ec09fd7496cc (Updating node-proxy deployment (+3 -> 3))
Feb 23 05:00:23 localhost ceph-mgr[285904]: [progress INFO root] Completed event 59cf342d-cbe9-4bce-93e4-ec09fd7496cc (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Feb 23 05:00:23 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 23 05:00:23 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 23 05:00:23 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fcbab7a3-8c51-4c59-a723-d901625bf91c", "format": "json"}]: dispatch
Feb 23 05:00:23 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fcbab7a3-8c51-4c59-a723-d901625bf91c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:00:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fcbab7a3-8c51-4c59-a723-d901625bf91c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:00:24 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:24.005+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fcbab7a3-8c51-4c59-a723-d901625bf91c' of type subvolume
Feb 23 05:00:24 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fcbab7a3-8c51-4c59-a723-d901625bf91c' of type subvolume
Feb 23 05:00:24 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fcbab7a3-8c51-4c59-a723-d901625bf91c", "force": true, "format": "json"}]: dispatch
Feb 23 05:00:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fcbab7a3-8c51-4c59-a723-d901625bf91c, vol_name:cephfs) < ""
Feb 23 05:00:24 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fcbab7a3-8c51-4c59-a723-d901625bf91c'' moved to trashcan
Feb 23 05:00:24 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 23 05:00:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fcbab7a3-8c51-4c59-a723-d901625bf91c, vol_name:cephfs) < ""
Feb 23 05:00:24 localhost nova_compute[280321]: 2026-02-23 10:00:24.163 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:24 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 05:00:24 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 05:00:24 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e183 e183: 6 total, 6 up, 6 in
Feb 23 05:00:24 localhost ovn_controller[155966]: 2026-02-23T10:00:24Z|00325|ovn_bfd|INFO|Disabled BFD on interface ovn-5b0126-0
Feb 23 05:00:24 localhost ovn_controller[155966]: 2026-02-23T10:00:24Z|00326|ovn_bfd|INFO|Disabled BFD on interface ovn-585d62-0
Feb 23 05:00:24 localhost ovn_controller[155966]: 2026-02-23T10:00:24Z|00327|ovn_bfd|INFO|Disabled BFD on interface ovn-b9c72d-0
Feb 23 05:00:24 localhost nova_compute[280321]: 2026-02-23 10:00:24.341 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:24 localhost nova_compute[280321]: 2026-02-23 10:00:24.343 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:24 localhost nova_compute[280321]: 2026-02-23 10:00:24.365 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:24 localhost ovn_controller[155966]: 2026-02-23T10:00:24Z|00328|binding|INFO|Removing iface tapd0f4560b-25 ovn-installed in OVS
Feb 23 05:00:24 localhost ovn_controller[155966]: 2026-02-23T10:00:24Z|00329|binding|INFO|Removing lport d0f4560b-25bf-480a-ae13-0d272b3e40b7 ovn-installed in OVS
Feb 23 05:00:24 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:24.459 161842 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 8c1ad7f5-4473-486b-bc65-095ab834d3ad with type ""#033[00m
Feb 23 05:00:24 localhost nova_compute[280321]: 2026-02-23 10:00:24.460 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:24 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:24.461 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-e8ea045c-a16c-464d-96bc-2be2a05ee7cc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8ea045c-a16c-464d-96bc-2be2a05ee7cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51dc993993124a3f926af711e8b0f088', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b594cd25-c249-43a7-a2e2-d4efb5cabef9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d0f4560b-25bf-480a-ae13-0d272b3e40b7) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 23 05:00:24 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:24.463 161842 INFO neutron.agent.ovn.metadata.agent [-] Port d0f4560b-25bf-480a-ae13-0d272b3e40b7 in datapath e8ea045c-a16c-464d-96bc-2be2a05ee7cc unbound from our chassis#033[00m
Feb 23 05:00:24 localhost nova_compute[280321]: 2026-02-23 10:00:24.466 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:24 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:24.466 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e8ea045c-a16c-464d-96bc-2be2a05ee7cc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 23 05:00:24 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:24.467 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9f30fc-7855-4812-b35b-78c85675ba82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 23 05:00:24 localhost nova_compute[280321]: 2026-02-23 10:00:24.471 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:24 localhost kernel: device tapd0f4560b-25 left promiscuous mode
Feb 23 05:00:24 localhost nova_compute[280321]: 2026-02-23 10:00:24.483 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:24 localhost dnsmasq[318520]: read /var/lib/neutron/dhcp/78d7d8f3-7640-449e-aad8-f8bfcbb5961c/addn_hosts - 0 addresses
Feb 23 05:00:24 localhost dnsmasq-dhcp[318520]: read /var/lib/neutron/dhcp/78d7d8f3-7640-449e-aad8-f8bfcbb5961c/host
Feb 23 05:00:24 localhost dnsmasq-dhcp[318520]: read /var/lib/neutron/dhcp/78d7d8f3-7640-449e-aad8-f8bfcbb5961c/opts
Feb 23 05:00:24 localhost podman[320496]: 2026-02-23 10:00:24.491497217 +0000 UTC m=+0.085652077 container kill f2267b213c723c8700666973d0d9b5c11c512722f5874bf4638f0e302fbce817 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78d7d8f3-7640-449e-aad8-f8bfcbb5961c, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 23 05:00:24 localhost nova_compute[280321]: 2026-02-23 10:00:24.697 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:24 localhost ovn_controller[155966]: 2026-02-23T10:00:24Z|00330|binding|INFO|Releasing lport d5d59ef1-4c90-48c3-ad6a-ce85e8cec53f from this chassis (sb_readonly=0)
Feb 23 05:00:24 localhost ovn_controller[155966]: 2026-02-23T10:00:24Z|00331|binding|INFO|Setting lport d5d59ef1-4c90-48c3-ad6a-ce85e8cec53f down in Southbound
Feb 23 05:00:24 localhost kernel: device tapd5d59ef1-4c left promiscuous mode
Feb 23 05:00:24 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:24.707 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-78d7d8f3-7640-449e-aad8-f8bfcbb5961c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-78d7d8f3-7640-449e-aad8-f8bfcbb5961c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba877496ef70493683c3a5d3962fd41b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92e880b0-a8f3-47c1-a2b9-7c522c761379, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d5d59ef1-4c90-48c3-ad6a-ce85e8cec53f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 23 05:00:24 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:24.709 161842 INFO neutron.agent.ovn.metadata.agent [-] Port d5d59ef1-4c90-48c3-ad6a-ce85e8cec53f in datapath 78d7d8f3-7640-449e-aad8-f8bfcbb5961c unbound from our chassis#033[00m
Feb 23 05:00:24 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:24.712 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 78d7d8f3-7640-449e-aad8-f8bfcbb5961c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 23 05:00:24 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:24.713 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[68bccab9-74cb-4bd2-a4d2-cf9be1e4805f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 23 05:00:24 localhost nova_compute[280321]: 2026-02-23 10:00:24.724 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:24 localhost dnsmasq[320444]: read /var/lib/neutron/dhcp/e8ea045c-a16c-464d-96bc-2be2a05ee7cc/addn_hosts - 0 addresses
Feb 23 05:00:24 localhost podman[320535]: 2026-02-23 10:00:24.985497152 +0000 UTC m=+0.046497281 container kill 5ce49e1200661f95e72124c42b9ae508fc9050018409ea7ee5c4c76891a02b52 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e8ea045c-a16c-464d-96bc-2be2a05ee7cc, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 23 05:00:24 localhost dnsmasq-dhcp[320444]: read /var/lib/neutron/dhcp/e8ea045c-a16c-464d-96bc-2be2a05ee7cc/host
Feb 23 05:00:24 localhost dnsmasq-dhcp[320444]: read /var/lib/neutron/dhcp/e8ea045c-a16c-464d-96bc-2be2a05ee7cc/opts
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent [None req-1cc85c64-c483-428b-a310-419312964dce - - - - - -] Unable to reload_allocations dhcp for e8ea045c-a16c-464d-96bc-2be2a05ee7cc.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapd0f4560b-25 not found in namespace qdhcp-e8ea045c-a16c-464d-96bc-2be2a05ee7cc.
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent     return fut.result()
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent     raise self._exception
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapd0f4560b-25 not found in namespace qdhcp-e8ea045c-a16c-464d-96bc-2be2a05ee7cc.
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.020 263679 ERROR neutron.agent.dhcp.agent #033[00m
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.023 263679 INFO neutron.agent.dhcp.agent [None req-a0cbd40c-2813-4e06-8b27-950ce67716f4 - - - - - -] Synchronizing state#033[00m
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.146 263679 INFO neutron.agent.dhcp.agent [None req-4680003d-8046-4efb-839f-b5eb936a1629 - - - - - -] All active networks have been fetched through RPC.#033[00m
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.147 263679 INFO neutron.agent.dhcp.agent [-] Starting network e8ea045c-a16c-464d-96bc-2be2a05ee7cc dhcp configuration#033[00m
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.148 263679 INFO neutron.agent.dhcp.agent [-] Finished network e8ea045c-a16c-464d-96bc-2be2a05ee7cc dhcp configuration#033[00m
Feb 23 05:00:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:25.148 263679 INFO neutron.agent.dhcp.agent [None req-4680003d-8046-4efb-839f-b5eb936a1629 - - - - - -] Synchronizing state complete#033[00m
Feb 23 05:00:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v380: 177 pgs: 177 active+clean; 640 MiB data, 3.5 GiB used, 38 GiB / 42 GiB avail; 134 KiB/s rd, 6.0 MiB/s wr, 202 op/s
Feb 23 05:00:25 localhost nova_compute[280321]: 2026-02-23 10:00:25.275 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:00:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e184 e184: 6 total, 6 up, 6 in
Feb 23 05:00:25 localhost dnsmasq[320444]: exiting on receipt of SIGTERM
Feb 23 05:00:25 localhost podman[320567]: 2026-02-23 10:00:25.373366216 +0000 UTC m=+0.061326423 container kill 5ce49e1200661f95e72124c42b9ae508fc9050018409ea7ee5c4c76891a02b52 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e8ea045c-a16c-464d-96bc-2be2a05ee7cc, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 23 05:00:25 localhost systemd[1]: libpod-5ce49e1200661f95e72124c42b9ae508fc9050018409ea7ee5c4c76891a02b52.scope: Deactivated successfully.
Feb 23 05:00:25 localhost podman[320579]: 2026-02-23 10:00:25.435681109 +0000 UTC m=+0.052676080 container died 5ce49e1200661f95e72124c42b9ae508fc9050018409ea7ee5c4c76891a02b52 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e8ea045c-a16c-464d-96bc-2be2a05ee7cc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 23 05:00:25 localhost sshd[320605]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 05:00:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:00:25 localhost podman[320579]: 2026-02-23 10:00:25.47598734 +0000 UTC m=+0.092982271 container cleanup 5ce49e1200661f95e72124c42b9ae508fc9050018409ea7ee5c4c76891a02b52 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e8ea045c-a16c-464d-96bc-2be2a05ee7cc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 23 05:00:25 localhost systemd[1]: libpod-conmon-5ce49e1200661f95e72124c42b9ae508fc9050018409ea7ee5c4c76891a02b52.scope: Deactivated successfully.
Feb 23 05:00:25 localhost systemd[1]: var-lib-containers-storage-overlay-f07b2c224da54135c9c2f749293ccb29e3fb18092b407e9af888c9af5d4ca809-merged.mount: Deactivated successfully.
Feb 23 05:00:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ce49e1200661f95e72124c42b9ae508fc9050018409ea7ee5c4c76891a02b52-userdata-shm.mount: Deactivated successfully.
Feb 23 05:00:25 localhost podman[320588]: 2026-02-23 10:00:25.501115117 +0000 UTC m=+0.102024286 container remove 5ce49e1200661f95e72124c42b9ae508fc9050018409ea7ee5c4c76891a02b52 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e8ea045c-a16c-464d-96bc-2be2a05ee7cc, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 23 05:00:25 localhost systemd[1]: run-netns-qdhcp\x2de8ea045c\x2da16c\x2d464d\x2d96bc\x2d2be2a05ee7cc.mount: Deactivated successfully.
Feb 23 05:00:25 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events
Feb 23 05:00:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 23 05:00:25 localhost nova_compute[280321]: 2026-02-23 10:00:25.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:00:25 localhost nova_compute[280321]: 2026-02-23 10:00:25.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 23 05:00:25 localhost sshd[320610]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 05:00:26 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 05:00:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e185 e185: 6 total, 6 up, 6 in
Feb 23 05:00:26 localhost podman[320629]: 2026-02-23 10:00:26.716746588 +0000 UTC m=+0.064821670 container kill f2267b213c723c8700666973d0d9b5c11c512722f5874bf4638f0e302fbce817 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78d7d8f3-7640-449e-aad8-f8bfcbb5961c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 23 05:00:26 localhost dnsmasq[318520]: exiting on receipt of SIGTERM
Feb 23 05:00:26 localhost systemd[1]: libpod-f2267b213c723c8700666973d0d9b5c11c512722f5874bf4638f0e302fbce817.scope: Deactivated successfully.
Feb 23 05:00:26 localhost podman[320645]: 2026-02-23 10:00:26.76985598 +0000 UTC m=+0.035922438 container died f2267b213c723c8700666973d0d9b5c11c512722f5874bf4638f0e302fbce817 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78d7d8f3-7640-449e-aad8-f8bfcbb5961c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 23 05:00:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f2267b213c723c8700666973d0d9b5c11c512722f5874bf4638f0e302fbce817-userdata-shm.mount: Deactivated successfully.
Feb 23 05:00:26 localhost systemd[1]: var-lib-containers-storage-overlay-5139cbb9da269422310239bb4b5543eb28661b00564296292bfa2e92fa0bc869-merged.mount: Deactivated successfully.
Feb 23 05:00:26 localhost podman[320645]: 2026-02-23 10:00:26.806177869 +0000 UTC m=+0.072244317 container remove f2267b213c723c8700666973d0d9b5c11c512722f5874bf4638f0e302fbce817 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78d7d8f3-7640-449e-aad8-f8bfcbb5961c, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 05:00:26 localhost systemd[1]: libpod-conmon-f2267b213c723c8700666973d0d9b5c11c512722f5874bf4638f0e302fbce817.scope: Deactivated successfully.
Feb 23 05:00:26 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ca4f151e-2925-40b3-9d6e-b36295484ff5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:00:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ca4f151e-2925-40b3-9d6e-b36295484ff5, vol_name:cephfs) < ""
Feb 23 05:00:26 localhost systemd[1]: run-netns-qdhcp\x2d78d7d8f3\x2d7640\x2d449e\x2daad8\x2df8bfcbb5961c.mount: Deactivated successfully.
Feb 23 05:00:26 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:26.860 263679 INFO neutron.agent.dhcp.agent [None req-45c6bd60-8708-44a5-bdf0-478e3bb0dd51 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Feb 23 05:00:26 localhost nova_compute[280321]: 2026-02-23 10:00:26.888 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:00:26 localhost nova_compute[280321]: 2026-02-23 10:00:26.890 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:00:26 localhost nova_compute[280321]: 2026-02-23 10:00:26.891 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 23 05:00:26 localhost nova_compute[280321]: 2026-02-23 10:00:26.891 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 23 05:00:26 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ca4f151e-2925-40b3-9d6e-b36295484ff5/.meta.tmp'
Feb 23 05:00:26 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ca4f151e-2925-40b3-9d6e-b36295484ff5/.meta.tmp' to config b'/volumes/_nogroup/ca4f151e-2925-40b3-9d6e-b36295484ff5/.meta'
Feb 23 05:00:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ca4f151e-2925-40b3-9d6e-b36295484ff5, vol_name:cephfs) < ""
Feb 23 05:00:26 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ca4f151e-2925-40b3-9d6e-b36295484ff5", "format": "json"}]: dispatch
Feb 23 05:00:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ca4f151e-2925-40b3-9d6e-b36295484ff5, vol_name:cephfs) < ""
Feb 23 05:00:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ca4f151e-2925-40b3-9d6e-b36295484ff5, vol_name:cephfs) < ""
Feb 23 05:00:26 localhost nova_compute[280321]: 2026-02-23 10:00:26.912 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 23 05:00:26 localhost nova_compute[280321]: 2026-02-23 10:00:26.913 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:00:26 localhost nova_compute[280321]: 2026-02-23 10:00:26.914 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:00:26 localhost nova_compute[280321]: 2026-02-23 10:00:26.934 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 23 05:00:26 localhost nova_compute[280321]: 2026-02-23 10:00:26.935 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 23 05:00:26 localhost nova_compute[280321]: 2026-02-23 10:00:26.935 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 23 05:00:26 localhost nova_compute[280321]: 2026-02-23 10:00:26.936 280325 DEBUG nova.compute.resource_tracker [None
req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 05:00:26 localhost nova_compute[280321]: 2026-02-23 10:00:26.936 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 05:00:27 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:27.104 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:00:27 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "40524a29-3861-4339-95ff-304a6d8eab80", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:00:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:40524a29-3861-4339-95ff-304a6d8eab80, vol_name:cephfs) < "" Feb 23 05:00:27 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/40524a29-3861-4339-95ff-304a6d8eab80/.meta.tmp' Feb 23 05:00:27 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/40524a29-3861-4339-95ff-304a6d8eab80/.meta.tmp' to config b'/volumes/_nogroup/40524a29-3861-4339-95ff-304a6d8eab80/.meta' Feb 23 05:00:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, 
namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:40524a29-3861-4339-95ff-304a6d8eab80, vol_name:cephfs) < "" Feb 23 05:00:27 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "40524a29-3861-4339-95ff-304a6d8eab80", "format": "json"}]: dispatch Feb 23 05:00:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:40524a29-3861-4339-95ff-304a6d8eab80, vol_name:cephfs) < "" Feb 23 05:00:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:40524a29-3861-4339-95ff-304a6d8eab80, vol_name:cephfs) < "" Feb 23 05:00:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v383: 177 pgs: 177 active+clean; 192 MiB data, 935 MiB used, 41 GiB / 42 GiB avail; 229 KiB/s rd, 23 KiB/s wr, 330 op/s Feb 23 05:00:27 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 05:00:27 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1851345063' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 05:00:27 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 05:00:27 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1851345063' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 05:00:27 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ee848905-323b-4447-944b-9bd735c2e380", "snap_name": "89eb0a17-32bc-40f6-a334-3962941ec616", "format": "json"}]: dispatch Feb 23 05:00:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:89eb0a17-32bc-40f6-a334-3962941ec616, sub_name:ee848905-323b-4447-944b-9bd735c2e380, vol_name:cephfs) < "" Feb 23 05:00:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:89eb0a17-32bc-40f6-a334-3962941ec616, sub_name:ee848905-323b-4447-944b-9bd735c2e380, vol_name:cephfs) < "" Feb 23 05:00:27 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 05:00:27 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2503795885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 05:00:27 localhost nova_compute[280321]: 2026-02-23 10:00:27.398 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 05:00:27 localhost nova_compute[280321]: 2026-02-23 10:00:27.593 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 05:00:27 localhost nova_compute[280321]: 2026-02-23 10:00:27.595 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=11624MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 05:00:27 localhost nova_compute[280321]: 2026-02-23 10:00:27.596 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:00:27 localhost nova_compute[280321]: 2026-02-23 10:00:27.596 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:00:27 localhost nova_compute[280321]: 2026-02-23 10:00:27.674 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 05:00:27 localhost nova_compute[280321]: 2026-02-23 10:00:27.675 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 05:00:27 localhost nova_compute[280321]: 2026-02-23 10:00:27.694 280325 DEBUG 
oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 05:00:28 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 05:00:28 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1243287233' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 05:00:28 localhost nova_compute[280321]: 2026-02-23 10:00:28.127 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 05:00:28 localhost nova_compute[280321]: 2026-02-23 10:00:28.134 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 05:00:28 localhost nova_compute[280321]: 2026-02-23 10:00:28.151 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 05:00:28 localhost nova_compute[280321]: 2026-02-23 10:00:28.176 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 05:00:28 localhost nova_compute[280321]: 2026-02-23 10:00:28.177 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:00:28 localhost nova_compute[280321]: 2026-02-23 10:00:28.590 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:29 localhost nova_compute[280321]: 2026-02-23 10:00:29.156 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:00:29 localhost nova_compute[280321]: 2026-02-23 10:00:29.157 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:00:29 localhost nova_compute[280321]: 2026-02-23 10:00:29.164 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v384: 177 
pgs: 177 active+clean; 192 MiB data, 935 MiB used, 41 GiB / 42 GiB avail; 155 KiB/s rd, 16 KiB/s wr, 222 op/s Feb 23 05:00:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:00:30 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ca4f151e-2925-40b3-9d6e-b36295484ff5", "format": "json"}]: dispatch Feb 23 05:00:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ca4f151e-2925-40b3-9d6e-b36295484ff5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:00:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ca4f151e-2925-40b3-9d6e-b36295484ff5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:00:30 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:30.566+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ca4f151e-2925-40b3-9d6e-b36295484ff5' of type subvolume Feb 23 05:00:30 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ca4f151e-2925-40b3-9d6e-b36295484ff5' of type subvolume Feb 23 05:00:30 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "40524a29-3861-4339-95ff-304a6d8eab80", "new_size": 2147483648, "format": "json"}]: dispatch Feb 23 05:00:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, 
sub_name:40524a29-3861-4339-95ff-304a6d8eab80, vol_name:cephfs) < "" Feb 23 05:00:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:40524a29-3861-4339-95ff-304a6d8eab80, vol_name:cephfs) < "" Feb 23 05:00:30 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ca4f151e-2925-40b3-9d6e-b36295484ff5", "force": true, "format": "json"}]: dispatch Feb 23 05:00:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ca4f151e-2925-40b3-9d6e-b36295484ff5, vol_name:cephfs) < "" Feb 23 05:00:30 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ca4f151e-2925-40b3-9d6e-b36295484ff5'' moved to trashcan Feb 23 05:00:30 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:00:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ca4f151e-2925-40b3-9d6e-b36295484ff5, vol_name:cephfs) < "" Feb 23 05:00:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v385: 177 pgs: 177 active+clean; 192 MiB data, 935 MiB used, 41 GiB / 42 GiB avail; 168 KiB/s rd, 30 KiB/s wr, 242 op/s Feb 23 05:00:31 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e186 e186: 6 total, 6 up, 6 in Feb 23 05:00:32 localhost openstack_network_exporter[243519]: ERROR 10:00:32 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 05:00:32 localhost openstack_network_exporter[243519]: Feb 23 05:00:32 localhost openstack_network_exporter[243519]: ERROR 10:00:32 appctl.go:174: call(dpif-netdev/pmd-rxq-show): 
please specify an existing datapath Feb 23 05:00:32 localhost openstack_network_exporter[243519]: Feb 23 05:00:32 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e187 e187: 6 total, 6 up, 6 in Feb 23 05:00:32 localhost nova_compute[280321]: 2026-02-23 10:00:32.895 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:00:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v388: 177 pgs: 177 active+clean; 192 MiB data, 936 MiB used, 41 GiB / 42 GiB avail; 132 KiB/s rd, 22 KiB/s wr, 179 op/s Feb 23 05:00:33 localhost nova_compute[280321]: 2026-02-23 10:00:33.631 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:33 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e188 e188: 6 total, 6 up, 6 in Feb 23 05:00:33 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "40524a29-3861-4339-95ff-304a6d8eab80", "format": "json"}]: dispatch Feb 23 05:00:33 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:40524a29-3861-4339-95ff-304a6d8eab80, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:00:33 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:40524a29-3861-4339-95ff-304a6d8eab80, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:00:33 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:33.993+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'40524a29-3861-4339-95ff-304a6d8eab80' of type subvolume Feb 23 05:00:33 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '40524a29-3861-4339-95ff-304a6d8eab80' of type subvolume Feb 23 05:00:34 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "40524a29-3861-4339-95ff-304a6d8eab80", "force": true, "format": "json"}]: dispatch Feb 23 05:00:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:40524a29-3861-4339-95ff-304a6d8eab80, vol_name:cephfs) < "" Feb 23 05:00:34 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/40524a29-3861-4339-95ff-304a6d8eab80'' moved to trashcan Feb 23 05:00:34 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:00:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:40524a29-3861-4339-95ff-304a6d8eab80, vol_name:cephfs) < "" Feb 23 05:00:34 localhost nova_compute[280321]: 2026-02-23 10:00:34.166 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 05:00:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 05:00:35 localhost podman[320715]: 2026-02-23 10:00:35.013901284 +0000 UTC m=+0.083422048 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 05:00:35 localhost podman[320715]: 2026-02-23 10:00:35.022136576 +0000 UTC m=+0.091657350 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, 
config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 05:00:35 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 05:00:35 localhost podman[320716]: 2026-02-23 10:00:35.069045787 +0000 UTC m=+0.134786716 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, version=9.7, managed_by=edpm_ansible, vcs-type=git, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1770267347, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 23 05:00:35 localhost podman[320716]: 2026-02-23 10:00:35.080579841 +0000 UTC m=+0.146320800 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, config_id=openstack_network_exporter, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7) Feb 23 05:00:35 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 05:00:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:00:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:00:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:00:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:00:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 05:00:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:00:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v390: 177 pgs: 177 active+clean; 192 MiB data, 936 MiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 21 KiB/s wr, 72 op/s Feb 23 05:00:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:00:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v391: 177 pgs: 177 active+clean; 223 MiB data, 954 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 2.7 MiB/s wr, 143 op/s Feb 23 05:00:38 localhost nova_compute[280321]: 2026-02-23 10:00:38.659 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:39 localhost nova_compute[280321]: 2026-02-23 10:00:39.169 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v392: 177 pgs: 177 active+clean; 223 MiB data, 954 MiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 113 op/s Feb 23 05:00:39 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ee848905-323b-4447-944b-9bd735c2e380", "snap_name": "89eb0a17-32bc-40f6-a334-3962941ec616_e9d2bf1c-54a0-4805-9a00-108fd2939de1", "force": true, "format": "json"}]: dispatch Feb 23 05:00:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:89eb0a17-32bc-40f6-a334-3962941ec616_e9d2bf1c-54a0-4805-9a00-108fd2939de1, sub_name:ee848905-323b-4447-944b-9bd735c2e380, 
vol_name:cephfs) < "" Feb 23 05:00:39 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee848905-323b-4447-944b-9bd735c2e380/.meta.tmp' Feb 23 05:00:39 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee848905-323b-4447-944b-9bd735c2e380/.meta.tmp' to config b'/volumes/_nogroup/ee848905-323b-4447-944b-9bd735c2e380/.meta' Feb 23 05:00:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:89eb0a17-32bc-40f6-a334-3962941ec616_e9d2bf1c-54a0-4805-9a00-108fd2939de1, sub_name:ee848905-323b-4447-944b-9bd735c2e380, vol_name:cephfs) < "" Feb 23 05:00:39 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ee848905-323b-4447-944b-9bd735c2e380", "snap_name": "89eb0a17-32bc-40f6-a334-3962941ec616", "force": true, "format": "json"}]: dispatch Feb 23 05:00:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:89eb0a17-32bc-40f6-a334-3962941ec616, sub_name:ee848905-323b-4447-944b-9bd735c2e380, vol_name:cephfs) < "" Feb 23 05:00:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 05:00:39 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee848905-323b-4447-944b-9bd735c2e380/.meta.tmp' Feb 23 05:00:39 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee848905-323b-4447-944b-9bd735c2e380/.meta.tmp' to config b'/volumes/_nogroup/ee848905-323b-4447-944b-9bd735c2e380/.meta' Feb 23 05:00:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:89eb0a17-32bc-40f6-a334-3962941ec616, sub_name:ee848905-323b-4447-944b-9bd735c2e380, vol_name:cephfs) < "" Feb 23 05:00:39 localhost podman[320758]: 2026-02-23 10:00:39.982211337 +0000 UTC m=+0.061107457 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:00:40 localhost podman[320758]: 2026-02-23 10:00:40.043095776 +0000 UTC m=+0.121991856 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260216, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:00:40 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated 
successfully. Feb 23 05:00:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:00:40 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "99b5759b-6bd3-4969-8e77-f18612d84802", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:00:40 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:99b5759b-6bd3-4969-8e77-f18612d84802, vol_name:cephfs) < "" Feb 23 05:00:40 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/99b5759b-6bd3-4969-8e77-f18612d84802/.meta.tmp' Feb 23 05:00:40 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/99b5759b-6bd3-4969-8e77-f18612d84802/.meta.tmp' to config b'/volumes/_nogroup/99b5759b-6bd3-4969-8e77-f18612d84802/.meta' Feb 23 05:00:40 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:99b5759b-6bd3-4969-8e77-f18612d84802, vol_name:cephfs) < "" Feb 23 05:00:40 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "99b5759b-6bd3-4969-8e77-f18612d84802", "format": "json"}]: dispatch Feb 23 05:00:40 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:99b5759b-6bd3-4969-8e77-f18612d84802, 
vol_name:cephfs) < "" Feb 23 05:00:40 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:99b5759b-6bd3-4969-8e77-f18612d84802, vol_name:cephfs) < "" Feb 23 05:00:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v393: 177 pgs: 177 active+clean; 238 MiB data, 985 MiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 101 op/s Feb 23 05:00:41 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e189 e189: 6 total, 6 up, 6 in Feb 23 05:00:42 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e190 e190: 6 total, 6 up, 6 in Feb 23 05:00:42 localhost podman[241086]: time="2026-02-23T10:00:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 05:00:42 localhost podman[241086]: @ - - [23/Feb/2026:10:00:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 05:00:42 localhost podman[241086]: @ - - [23/Feb/2026:10:00:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17824 "" "Go-http-client/1.1" Feb 23 05:00:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v396: 177 pgs: 177 active+clean; 238 MiB data, 1000 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 131 op/s Feb 23 05:00:43 localhost nova_compute[280321]: 2026-02-23 10:00:43.687 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:44 localhost nova_compute[280321]: 2026-02-23 10:00:44.171 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:44 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs 
clone status", "vol_name": "cephfs", "clone_name": "ee848905-323b-4447-944b-9bd735c2e380", "format": "json"}]: dispatch Feb 23 05:00:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ee848905-323b-4447-944b-9bd735c2e380, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:00:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ee848905-323b-4447-944b-9bd735c2e380, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:00:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:44.482+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee848905-323b-4447-944b-9bd735c2e380' of type subvolume Feb 23 05:00:44 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee848905-323b-4447-944b-9bd735c2e380' of type subvolume Feb 23 05:00:44 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee848905-323b-4447-944b-9bd735c2e380", "force": true, "format": "json"}]: dispatch Feb 23 05:00:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee848905-323b-4447-944b-9bd735c2e380, vol_name:cephfs) < "" Feb 23 05:00:44 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e191 e191: 6 total, 6 up, 6 in Feb 23 05:00:44 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ee848905-323b-4447-944b-9bd735c2e380'' moved to trashcan Feb 23 05:00:44 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:00:44 localhost 
ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee848905-323b-4447-944b-9bd735c2e380, vol_name:cephfs) < "" Feb 23 05:00:44 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 05:00:44 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3423787226' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 05:00:44 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 05:00:44 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3423787226' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 05:00:44 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "99b5759b-6bd3-4969-8e77-f18612d84802", "format": "json"}]: dispatch Feb 23 05:00:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:99b5759b-6bd3-4969-8e77-f18612d84802, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:00:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:99b5759b-6bd3-4969-8e77-f18612d84802, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:00:44 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '99b5759b-6bd3-4969-8e77-f18612d84802' of type subvolume Feb 23 05:00:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:44.730+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) 
Operation not supported operation 'clone-status' is not allowed on subvolume '99b5759b-6bd3-4969-8e77-f18612d84802' of type subvolume Feb 23 05:00:44 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "99b5759b-6bd3-4969-8e77-f18612d84802", "force": true, "format": "json"}]: dispatch Feb 23 05:00:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:99b5759b-6bd3-4969-8e77-f18612d84802, vol_name:cephfs) < "" Feb 23 05:00:44 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/99b5759b-6bd3-4969-8e77-f18612d84802'' moved to trashcan Feb 23 05:00:44 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:00:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:99b5759b-6bd3-4969-8e77-f18612d84802, vol_name:cephfs) < "" Feb 23 05:00:45 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:45.223 2 INFO neutron.agent.securitygroups_rpc [None req-07956afb-903c-4340-9a87-60ef781dc496 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:00:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v398: 177 pgs: 177 active+clean; 238 MiB data, 1000 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 889 KiB/s wr, 44 op/s Feb 23 05:00:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:00:45 localhost sshd[320783]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:00:46 localhost 
ceph-mon[296755]: mon.np0005626465@1(peon).osd e192 e192: 6 total, 6 up, 6 in Feb 23 05:00:46 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:46.612 2 INFO neutron.agent.securitygroups_rpc [None req-8a14fab2-7a8a-4c43-8296-30b056f5b06d 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:00:46 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:46.805 2 INFO neutron.agent.securitygroups_rpc [None req-8a14fab2-7a8a-4c43-8296-30b056f5b06d 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:00:47 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:47.061 2 INFO neutron.agent.securitygroups_rpc [None req-9c6cef5b-9665-4a72-8732-114020aa83fb 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:00:47 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:47.078 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:00:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v400: 177 pgs: 177 active+clean; 192 MiB data, 937 MiB used, 41 GiB / 42 GiB avail; 138 KiB/s rd, 22 KiB/s wr, 193 op/s Feb 23 05:00:47 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:47.378 2 INFO neutron.agent.securitygroups_rpc [None req-0ce85d6b-73e9-4380-afd1-5f90a1fc10e8 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:00:47 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:47.395 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 
05:00:47 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098", "format": "json"}]: dispatch Feb 23 05:00:47 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:00:47 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:47.784 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:00:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 05:00:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 05:00:48 localhost podman[320785]: 2026-02-23 10:00:48.005293063 +0000 UTC m=+0.082942153 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 05:00:48 localhost podman[320786]: 2026-02-23 10:00:48.067229074 +0000 UTC m=+0.142426520 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 
'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute) Feb 23 05:00:48 localhost podman[320786]: 2026-02-23 10:00:48.077597911 +0000 UTC m=+0.152795357 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Feb 23 05:00:48 localhost podman[320785]: 2026-02-23 10:00:48.087928527 +0000 UTC m=+0.165577657 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260216) Feb 23 05:00:48 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 05:00:48 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. 
Feb 23 05:00:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:48.316 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:00:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:48.316 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:00:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:48.316 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:00:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:00:48 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:48.412+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098' of type subvolume Feb 23 05:00:48 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098' of type subvolume Feb 23 05:00:48 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098", "force": true, "format": "json"}]: 
dispatch Feb 23 05:00:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098, vol_name:cephfs) < "" Feb 23 05:00:48 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e193 e193: 6 total, 6 up, 6 in Feb 23 05:00:48 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098'' moved to trashcan Feb 23 05:00:48 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:00:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee02e5c3-d4ec-4cf3-86f8-0c8d48f0b098, vol_name:cephfs) < "" Feb 23 05:00:48 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "066cedc7-9ad0-458f-a1e2-21940628c140", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:00:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:066cedc7-9ad0-458f-a1e2-21940628c140, vol_name:cephfs) < "" Feb 23 05:00:48 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/066cedc7-9ad0-458f-a1e2-21940628c140/.meta.tmp' Feb 23 05:00:48 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/066cedc7-9ad0-458f-a1e2-21940628c140/.meta.tmp' to config b'/volumes/_nogroup/066cedc7-9ad0-458f-a1e2-21940628c140/.meta' Feb 23 05:00:48 localhost nova_compute[280321]: 2026-02-23 10:00:48.738 
280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:066cedc7-9ad0-458f-a1e2-21940628c140, vol_name:cephfs) < "" Feb 23 05:00:48 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "066cedc7-9ad0-458f-a1e2-21940628c140", "format": "json"}]: dispatch Feb 23 05:00:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:066cedc7-9ad0-458f-a1e2-21940628c140, vol_name:cephfs) < "" Feb 23 05:00:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:066cedc7-9ad0-458f-a1e2-21940628c140, vol_name:cephfs) < "" Feb 23 05:00:49 localhost nova_compute[280321]: 2026-02-23 10:00:49.173 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v402: 177 pgs: 177 active+clean; 192 MiB data, 937 MiB used, 41 GiB / 42 GiB avail; 116 KiB/s rd, 19 KiB/s wr, 164 op/s Feb 23 05:00:49 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:49.480 2 INFO neutron.agent.securitygroups_rpc [None req-81c2d0d8-1692-4b29-b62c-eb471980e5c6 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:00:49 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e194 e194: 6 total, 6 up, 6 in Feb 23 05:00:49 localhost 
neutron_dhcp_agent[263675]: 2026-02-23 10:00:49.708 263679 INFO neutron.agent.linux.ip_lib [None req-06fc6622-8fae-4703-9dd1-5a5b65b1b070 - - - - - -] Device tapc7960e12-bf cannot be used as it has no MAC address#033[00m Feb 23 05:00:49 localhost nova_compute[280321]: 2026-02-23 10:00:49.728 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:49 localhost kernel: device tapc7960e12-bf entered promiscuous mode Feb 23 05:00:49 localhost nova_compute[280321]: 2026-02-23 10:00:49.737 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:49 localhost ovn_controller[155966]: 2026-02-23T10:00:49Z|00332|binding|INFO|Claiming lport c7960e12-bf06-4d38-9e6e-03d6bae3a6da for this chassis. Feb 23 05:00:49 localhost ovn_controller[155966]: 2026-02-23T10:00:49Z|00333|binding|INFO|c7960e12-bf06-4d38-9e6e-03d6bae3a6da: Claiming unknown Feb 23 05:00:49 localhost NetworkManager[5987]: [1771840849.7404] manager: (tapc7960e12-bf): new Generic device (/org/freedesktop/NetworkManager/Devices/60) Feb 23 05:00:49 localhost systemd-udevd[320835]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 05:00:49 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:49.756 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-143735a7-cb56-4a5e-aea5-85aa0c16f2df', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-143735a7-cb56-4a5e-aea5-85aa0c16f2df', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90343b3c0ce240adab2c21e5c92b6952', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96c6d18f-1393-421d-b6c0-0858ba2ab473, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c7960e12-bf06-4d38-9e6e-03d6bae3a6da) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:00:49 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:49.758 161842 INFO neutron.agent.ovn.metadata.agent [-] Port c7960e12-bf06-4d38-9e6e-03d6bae3a6da in datapath 143735a7-cb56-4a5e-aea5-85aa0c16f2df bound to our chassis#033[00m Feb 23 05:00:49 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:49.760 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 143735a7-cb56-4a5e-aea5-85aa0c16f2df or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:00:49 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:49.762 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[46b43a6e-584a-47c2-88a4-e7ea67c759f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:00:49 localhost journal[229268]: ethtool ioctl error on tapc7960e12-bf: No such device Feb 23 05:00:49 localhost journal[229268]: ethtool ioctl error on tapc7960e12-bf: No such device Feb 23 05:00:49 localhost journal[229268]: ethtool ioctl error on tapc7960e12-bf: No such device Feb 23 05:00:49 localhost journal[229268]: ethtool ioctl error on tapc7960e12-bf: No such device Feb 23 05:00:49 localhost ovn_controller[155966]: 2026-02-23T10:00:49Z|00334|binding|INFO|Setting lport c7960e12-bf06-4d38-9e6e-03d6bae3a6da ovn-installed in OVS Feb 23 05:00:49 localhost ovn_controller[155966]: 2026-02-23T10:00:49Z|00335|binding|INFO|Setting lport c7960e12-bf06-4d38-9e6e-03d6bae3a6da up in Southbound Feb 23 05:00:49 localhost journal[229268]: ethtool ioctl error on tapc7960e12-bf: No such device Feb 23 05:00:49 localhost journal[229268]: ethtool ioctl error on tapc7960e12-bf: No such device Feb 23 05:00:49 localhost nova_compute[280321]: 2026-02-23 10:00:49.844 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:49 localhost journal[229268]: ethtool ioctl error on tapc7960e12-bf: No such device Feb 23 05:00:49 localhost journal[229268]: ethtool ioctl error on tapc7960e12-bf: No such device Feb 23 05:00:49 localhost nova_compute[280321]: 2026-02-23 10:00:49.865 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:49 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:49.968 2 INFO neutron.agent.securitygroups_rpc [None 
req-502b98ac-7cd1-4314-8dc8-56f7a95bf1c0 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:00:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:00:50 localhost podman[320905]: Feb 23 05:00:50 localhost podman[320905]: 2026-02-23 10:00:50.577695545 +0000 UTC m=+0.078571020 container create 1e03c9976321b71af4dbba45105bdac3efcbeadc5bd96e63b7a0fc910c9cdaa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-143735a7-cb56-4a5e-aea5-85aa0c16f2df, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:00:50 localhost systemd[1]: Started libpod-conmon-1e03c9976321b71af4dbba45105bdac3efcbeadc5bd96e63b7a0fc910c9cdaa4.scope. 
Feb 23 05:00:50 localhost ovn_controller[155966]: 2026-02-23T10:00:50Z|00336|binding|INFO|Removing iface tapc7960e12-bf ovn-installed in OVS Feb 23 05:00:50 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:50.617 161842 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 1e2f358e-911d-46a9-b73c-d0d60c2497b7 with type ""#033[00m Feb 23 05:00:50 localhost ovn_controller[155966]: 2026-02-23T10:00:50Z|00337|binding|INFO|Removing lport c7960e12-bf06-4d38-9e6e-03d6bae3a6da ovn-installed in OVS Feb 23 05:00:50 localhost nova_compute[280321]: 2026-02-23 10:00:50.620 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:50 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:50.619 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-143735a7-cb56-4a5e-aea5-85aa0c16f2df', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-143735a7-cb56-4a5e-aea5-85aa0c16f2df', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90343b3c0ce240adab2c21e5c92b6952', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96c6d18f-1393-421d-b6c0-0858ba2ab473, chassis=[], 
tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c7960e12-bf06-4d38-9e6e-03d6bae3a6da) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:00:50 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:50.626 161842 INFO neutron.agent.ovn.metadata.agent [-] Port c7960e12-bf06-4d38-9e6e-03d6bae3a6da in datapath 143735a7-cb56-4a5e-aea5-85aa0c16f2df unbound from our chassis#033[00m Feb 23 05:00:50 localhost systemd[1]: tmp-crun.jx0FRY.mount: Deactivated successfully. Feb 23 05:00:50 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:50.628 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 143735a7-cb56-4a5e-aea5-85aa0c16f2df or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:00:50 localhost nova_compute[280321]: 2026-02-23 10:00:50.630 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:50 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:50.630 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[93311e1a-9791-43b5-a035-1850612779fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:00:50 localhost podman[320905]: 2026-02-23 10:00:50.53561016 +0000 UTC m=+0.036485665 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:00:50 localhost systemd[1]: Started libcrun container. 
Feb 23 05:00:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86cd7231d06a397948eec853b516c765ea546346964802d4496c354a10bdebd7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:00:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e195 e195: 6 total, 6 up, 6 in Feb 23 05:00:50 localhost podman[320905]: 2026-02-23 10:00:50.668444547 +0000 UTC m=+0.169320022 container init 1e03c9976321b71af4dbba45105bdac3efcbeadc5bd96e63b7a0fc910c9cdaa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-143735a7-cb56-4a5e-aea5-85aa0c16f2df, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0) Feb 23 05:00:50 localhost podman[320905]: 2026-02-23 10:00:50.683095604 +0000 UTC m=+0.183971089 container start 1e03c9976321b71af4dbba45105bdac3efcbeadc5bd96e63b7a0fc910c9cdaa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-143735a7-cb56-4a5e-aea5-85aa0c16f2df, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 05:00:50 localhost dnsmasq[320924]: started, version 2.85 cachesize 150 Feb 23 05:00:50 localhost dnsmasq[320924]: DNS service limited to local subnets Feb 23 05:00:50 localhost dnsmasq[320924]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC 
loop-detect inotify dumpfile Feb 23 05:00:50 localhost dnsmasq[320924]: warning: no upstream servers configured Feb 23 05:00:50 localhost dnsmasq-dhcp[320924]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 23 05:00:50 localhost dnsmasq[320924]: read /var/lib/neutron/dhcp/143735a7-cb56-4a5e-aea5-85aa0c16f2df/addn_hosts - 0 addresses Feb 23 05:00:50 localhost dnsmasq-dhcp[320924]: read /var/lib/neutron/dhcp/143735a7-cb56-4a5e-aea5-85aa0c16f2df/host Feb 23 05:00:50 localhost dnsmasq-dhcp[320924]: read /var/lib/neutron/dhcp/143735a7-cb56-4a5e-aea5-85aa0c16f2df/opts Feb 23 05:00:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:50.857 263679 INFO neutron.agent.dhcp.agent [None req-65610428-a725-4426-a2ae-ef078bb260ba - - - - - -] DHCP configuration for ports {'7629dc0e-ab77-4b1c-a561-728339468ffb'} is completed#033[00m Feb 23 05:00:51 localhost podman[320942]: 2026-02-23 10:00:51.003153317 +0000 UTC m=+0.064034256 container kill 1e03c9976321b71af4dbba45105bdac3efcbeadc5bd96e63b7a0fc910c9cdaa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-143735a7-cb56-4a5e-aea5-85aa0c16f2df, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_managed=true) Feb 23 05:00:51 localhost dnsmasq[320924]: exiting on receipt of SIGTERM Feb 23 05:00:51 localhost systemd[1]: libpod-1e03c9976321b71af4dbba45105bdac3efcbeadc5bd96e63b7a0fc910c9cdaa4.scope: Deactivated successfully. 
Feb 23 05:00:51 localhost nova_compute[280321]: 2026-02-23 10:00:51.074 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:51 localhost podman[320955]: 2026-02-23 10:00:51.084000766 +0000 UTC m=+0.060301662 container died 1e03c9976321b71af4dbba45105bdac3efcbeadc5bd96e63b7a0fc910c9cdaa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-143735a7-cb56-4a5e-aea5-85aa0c16f2df, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_managed=true) Feb 23 05:00:51 localhost podman[320955]: 2026-02-23 10:00:51.125965808 +0000 UTC m=+0.102266614 container cleanup 1e03c9976321b71af4dbba45105bdac3efcbeadc5bd96e63b7a0fc910c9cdaa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-143735a7-cb56-4a5e-aea5-85aa0c16f2df, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216) Feb 23 05:00:51 localhost systemd[1]: libpod-conmon-1e03c9976321b71af4dbba45105bdac3efcbeadc5bd96e63b7a0fc910c9cdaa4.scope: Deactivated successfully. 
Feb 23 05:00:51 localhost podman[320956]: 2026-02-23 10:00:51.14698315 +0000 UTC m=+0.119823180 container remove 1e03c9976321b71af4dbba45105bdac3efcbeadc5bd96e63b7a0fc910c9cdaa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-143735a7-cb56-4a5e-aea5-85aa0c16f2df, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true) Feb 23 05:00:51 localhost nova_compute[280321]: 2026-02-23 10:00:51.159 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:51 localhost kernel: device tapc7960e12-bf left promiscuous mode Feb 23 05:00:51 localhost nova_compute[280321]: 2026-02-23 10:00:51.178 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:51 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:51.193 263679 INFO neutron.agent.dhcp.agent [None req-771b8802-6d52-45b6-ae93-efe21d49a1d2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:00:51 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:51.193 263679 INFO neutron.agent.dhcp.agent [None req-771b8802-6d52-45b6-ae93-efe21d49a1d2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:00:51 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:51.194 263679 INFO neutron.agent.dhcp.agent [None req-771b8802-6d52-45b6-ae93-efe21d49a1d2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:00:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v405: 177 pgs: 177 
active+clean; 192 MiB data, 937 MiB used, 41 GiB / 42 GiB avail; 118 KiB/s rd, 26 KiB/s wr, 167 op/s Feb 23 05:00:51 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "066cedc7-9ad0-458f-a1e2-21940628c140", "format": "json"}]: dispatch Feb 23 05:00:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:066cedc7-9ad0-458f-a1e2-21940628c140, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:00:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:066cedc7-9ad0-458f-a1e2-21940628c140, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:00:51 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '066cedc7-9ad0-458f-a1e2-21940628c140' of type subvolume Feb 23 05:00:51 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:00:51.378+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '066cedc7-9ad0-458f-a1e2-21940628c140' of type subvolume Feb 23 05:00:51 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "066cedc7-9ad0-458f-a1e2-21940628c140", "force": true, "format": "json"}]: dispatch Feb 23 05:00:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:066cedc7-9ad0-458f-a1e2-21940628c140, vol_name:cephfs) < "" Feb 23 05:00:51 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/066cedc7-9ad0-458f-a1e2-21940628c140'' moved to trashcan 
Feb 23 05:00:51 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:00:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:066cedc7-9ad0-458f-a1e2-21940628c140, vol_name:cephfs) < "" Feb 23 05:00:51 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 05:00:51 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3338744072' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 05:00:51 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 05:00:51 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3338744072' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 05:00:51 localhost systemd[1]: tmp-crun.3UnT1L.mount: Deactivated successfully. Feb 23 05:00:51 localhost systemd[1]: var-lib-containers-storage-overlay-86cd7231d06a397948eec853b516c765ea546346964802d4496c354a10bdebd7-merged.mount: Deactivated successfully. Feb 23 05:00:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1e03c9976321b71af4dbba45105bdac3efcbeadc5bd96e63b7a0fc910c9cdaa4-userdata-shm.mount: Deactivated successfully. Feb 23 05:00:51 localhost systemd[1]: run-netns-qdhcp\x2d143735a7\x2dcb56\x2d4a5e\x2daea5\x2d85aa0c16f2df.mount: Deactivated successfully. 
Feb 23 05:00:51 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e196 e196: 6 total, 6 up, 6 in Feb 23 05:00:52 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e197 e197: 6 total, 6 up, 6 in Feb 23 05:00:52 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 05:00:52 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2943719624' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 05:00:52 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 05:00:52 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2943719624' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 05:00:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 05:00:53 localhost podman[320984]: 2026-02-23 10:00:53.004817562 +0000 UTC m=+0.071849035 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 05:00:53 localhost podman[320984]: 2026-02-23 10:00:53.042021938 +0000 UTC m=+0.109053421 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 05:00:53 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 05:00:53 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:53.061 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:00:53 localhost ovn_metadata_agent[161837]: 2026-02-23 10:00:53.062 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 05:00:53 localhost nova_compute[280321]: 2026-02-23 10:00:53.094 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v408: 177 pgs: 177 active+clean; 192 MiB data, 941 MiB used, 41 GiB / 42 GiB avail; 181 KiB/s rd, 33 KiB/s wr, 252 op/s Feb 23 05:00:53 localhost nova_compute[280321]: 2026-02-23 10:00:53.739 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:53 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e198 e198: 6 total, 
6 up, 6 in Feb 23 05:00:54 localhost nova_compute[280321]: 2026-02-23 10:00:54.202 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:54 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "0999e527-f1ec-432e-b9da-01ea97a863f0", "format": "json"}]: dispatch Feb 23 05:00:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0999e527-f1ec-432e-b9da-01ea97a863f0, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:00:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0999e527-f1ec-432e-b9da-01ea97a863f0, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:00:55 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:55.203 2 INFO neutron.agent.securitygroups_rpc [None req-572fb810-ab54-44db-ac0d-47376cf59f66 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:00:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v410: 177 pgs: 177 active+clean; 192 MiB data, 941 MiB used, 41 GiB / 42 GiB avail; 158 KiB/s rd, 29 KiB/s wr, 220 op/s Feb 23 05:00:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:00:55 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:55.535 2 INFO neutron.agent.securitygroups_rpc [None 
req-3cc00f74-b029-4176-993c-a46e50d01263 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:00:55 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:55.563 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:00:55 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:55.807 2 INFO neutron.agent.securitygroups_rpc [None req-252a7af6-1118-4c58-9e08-ff23cd8dfa0e 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:00:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e199 e199: 6 total, 6 up, 6 in Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.091 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost 
ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.094 12 
DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:00:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:00:56 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:56.228 2 INFO neutron.agent.securitygroups_rpc [None req-a3f5d0f4-f2a7-4483-8e6e-6de10e55779f 4ce76e2a79a849e8a6b3b31c05f9bc96 
90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:00:56 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:56.248 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:00:56 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:56.569 2 INFO neutron.agent.securitygroups_rpc [None req-84c168ac-c1cc-4ad3-a429-c07c41b0ce10 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:00:57 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:57.207 2 INFO neutron.agent.securitygroups_rpc [None req-076cf072-6b56-42a9-a049-cb8eb1290757 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:00:57 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:00:57.219 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:00:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v412: 177 pgs: 177 active+clean; 192 MiB data, 962 MiB used, 41 GiB / 42 GiB avail; 4.0 MiB/s rd, 20 KiB/s wr, 311 op/s Feb 23 05:00:57 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e200 e200: 6 total, 6 up, 6 in Feb 23 05:00:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "0999e527-f1ec-432e-b9da-01ea97a863f0_a37ec8d1-9603-4bc2-b3c2-056d8fe1be15", "force": true, "format": "json"}]: dispatch Feb 23 05:00:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, 
prefix:fs subvolume snapshot rm, snap_name:0999e527-f1ec-432e-b9da-01ea97a863f0_a37ec8d1-9603-4bc2-b3c2-056d8fe1be15, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:00:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' Feb 23 05:00:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta' Feb 23 05:00:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0999e527-f1ec-432e-b9da-01ea97a863f0_a37ec8d1-9603-4bc2-b3c2-056d8fe1be15, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:00:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "0999e527-f1ec-432e-b9da-01ea97a863f0", "force": true, "format": "json"}]: dispatch Feb 23 05:00:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0999e527-f1ec-432e-b9da-01ea97a863f0, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:00:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' Feb 23 05:00:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' to config 
b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta' Feb 23 05:00:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0999e527-f1ec-432e-b9da-01ea97a863f0, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:00:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "eaea33d2-5529-46ec-832a-9d7cb7e5b233", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:00:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eaea33d2-5529-46ec-832a-9d7cb7e5b233, vol_name:cephfs) < "" Feb 23 05:00:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eaea33d2-5529-46ec-832a-9d7cb7e5b233/.meta.tmp' Feb 23 05:00:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eaea33d2-5529-46ec-832a-9d7cb7e5b233/.meta.tmp' to config b'/volumes/_nogroup/eaea33d2-5529-46ec-832a-9d7cb7e5b233/.meta' Feb 23 05:00:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eaea33d2-5529-46ec-832a-9d7cb7e5b233, vol_name:cephfs) < "" Feb 23 05:00:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "eaea33d2-5529-46ec-832a-9d7cb7e5b233", "format": "json"}]: dispatch Feb 23 05:00:58 localhost ceph-mgr[285904]: 
[volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eaea33d2-5529-46ec-832a-9d7cb7e5b233, vol_name:cephfs) < "" Feb 23 05:00:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eaea33d2-5529-46ec-832a-9d7cb7e5b233, vol_name:cephfs) < "" Feb 23 05:00:58 localhost nova_compute[280321]: 2026-02-23 10:00:58.778 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:58 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e201 e201: 6 total, 6 up, 6 in Feb 23 05:00:59 localhost nova_compute[280321]: 2026-02-23 10:00:59.203 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:00:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v415: 177 pgs: 177 active+clean; 192 MiB data, 962 MiB used, 41 GiB / 42 GiB avail; 4.0 MiB/s rd, 17 KiB/s wr, 225 op/s Feb 23 05:00:59 localhost neutron_sriov_agent[256355]: 2026-02-23 10:00:59.723 2 INFO neutron.agent.securitygroups_rpc [None req-ecc0bc70-4d61-4d71-86a4-16b51953c44e 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:01:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e202 e202: 6 total, 6 up, 6 in Feb 23 05:01:00 localhost neutron_sriov_agent[256355]: 2026-02-23 10:01:00.325 2 INFO neutron.agent.securitygroups_rpc [None req-0f7cdb10-6b15-41bf-967b-b0f718bce643 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:01:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e202 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:01:00 localhost neutron_sriov_agent[256355]: 2026-02-23 10:01:00.889 2 INFO neutron.agent.securitygroups_rpc [None req-9f100d5f-55ff-4c02-bab1-7dd2c773e5e4 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:01:01 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:01.064 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 05:01:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v417: 177 pgs: 177 active+clean; 218 MiB data, 994 MiB used, 41 GiB / 42 GiB avail; 4.1 MiB/s rd, 2.9 MiB/s wr, 372 op/s Feb 23 05:01:01 localhost neutron_sriov_agent[256355]: 2026-02-23 10:01:01.281 2 INFO neutron.agent.securitygroups_rpc [None req-3d8beedf-5b31-48e0-a2b2-815a1e0fad78 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:01:01 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 05:01:01 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/551893517' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 05:01:01 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 05:01:01 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/551893517' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 05:01:01 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eaea33d2-5529-46ec-832a-9d7cb7e5b233", "snap_name": "7b0236fc-1095-4ed0-a7af-e81e5f884e6a", "format": "json"}]: dispatch Feb 23 05:01:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7b0236fc-1095-4ed0-a7af-e81e5f884e6a, sub_name:eaea33d2-5529-46ec-832a-9d7cb7e5b233, vol_name:cephfs) < "" Feb 23 05:01:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7b0236fc-1095-4ed0-a7af-e81e5f884e6a, sub_name:eaea33d2-5529-46ec-832a-9d7cb7e5b233, vol_name:cephfs) < "" Feb 23 05:01:01 localhost openstack_network_exporter[243519]: ERROR 10:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 05:01:01 localhost openstack_network_exporter[243519]: Feb 23 05:01:01 localhost openstack_network_exporter[243519]: ERROR 10:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 05:01:01 localhost openstack_network_exporter[243519]: Feb 23 05:01:02 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e203 e203: 6 total, 6 up, 6 in Feb 23 05:01:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v419: 177 pgs: 177 active+clean; 239 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 148 KiB/s rd, 4.0 MiB/s wr, 216 op/s Feb 23 05:01:03 localhost sshd[321019]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:01:03 localhost nova_compute[280321]: 2026-02-23 10:01:03.782 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 
25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:04 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "a6576274-fdc3-4c38-8214-f23c6386f2cf", "format": "json"}]: dispatch Feb 23 05:01:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a6576274-fdc3-4c38-8214-f23c6386f2cf, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a6576274-fdc3-4c38-8214-f23c6386f2cf, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 05:01:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3818338506' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 05:01:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 05:01:04 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3818338506' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 05:01:04 localhost nova_compute[280321]: 2026-02-23 10:01:04.205 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:04 localhost neutron_sriov_agent[256355]: 2026-02-23 10:01:04.407 2 INFO neutron.agent.securitygroups_rpc [None req-8dbf60ea-0a4f-4b6f-8f3b-cd15a4e6bbb5 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:01:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:04.756 263679 INFO neutron.agent.linux.ip_lib [None req-c900068f-031e-4bea-9e93-7718c212a97e - - - - - -] Device tapf59f0d43-58 cannot be used as it has no MAC address#033[00m Feb 23 05:01:04 localhost nova_compute[280321]: 2026-02-23 10:01:04.825 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:04 localhost kernel: device tapf59f0d43-58 entered promiscuous mode Feb 23 05:01:04 localhost NetworkManager[5987]: [1771840864.8370] manager: (tapf59f0d43-58): new Generic device (/org/freedesktop/NetworkManager/Devices/61) Feb 23 05:01:04 localhost ovn_controller[155966]: 2026-02-23T10:01:04Z|00338|binding|INFO|Claiming lport f59f0d43-58b5-4a6b-a75b-838aeb42eb9b for this chassis. 
Feb 23 05:01:04 localhost ovn_controller[155966]: 2026-02-23T10:01:04Z|00339|binding|INFO|f59f0d43-58b5-4a6b-a75b-838aeb42eb9b: Claiming unknown Feb 23 05:01:04 localhost nova_compute[280321]: 2026-02-23 10:01:04.840 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:04 localhost systemd-udevd[321031]: Network interface NamePolicy= disabled on kernel command line. Feb 23 05:01:04 localhost nova_compute[280321]: 2026-02-23 10:01:04.845 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:04 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:04.852 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-5e6186cf-b9fd-470d-bf12-9d317641eb20', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e6186cf-b9fd-470d-bf12-9d317641eb20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90343b3c0ce240adab2c21e5c92b6952', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f744fad7-fba1-42c5-8ee5-3063b421eafb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f59f0d43-58b5-4a6b-a75b-838aeb42eb9b) 
old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:01:04 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:04.854 161842 INFO neutron.agent.ovn.metadata.agent [-] Port f59f0d43-58b5-4a6b-a75b-838aeb42eb9b in datapath 5e6186cf-b9fd-470d-bf12-9d317641eb20 bound to our chassis#033[00m Feb 23 05:01:04 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:04.856 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5e6186cf-b9fd-470d-bf12-9d317641eb20 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:01:04 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:04.858 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[be70be3f-b2fc-49af-a479-9ab7260cd440]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:01:04 localhost journal[229268]: ethtool ioctl error on tapf59f0d43-58: No such device Feb 23 05:01:04 localhost journal[229268]: ethtool ioctl error on tapf59f0d43-58: No such device Feb 23 05:01:04 localhost journal[229268]: ethtool ioctl error on tapf59f0d43-58: No such device Feb 23 05:01:04 localhost journal[229268]: ethtool ioctl error on tapf59f0d43-58: No such device Feb 23 05:01:04 localhost journal[229268]: ethtool ioctl error on tapf59f0d43-58: No such device Feb 23 05:01:04 localhost ovn_controller[155966]: 2026-02-23T10:01:04Z|00340|binding|INFO|Setting lport f59f0d43-58b5-4a6b-a75b-838aeb42eb9b ovn-installed in OVS Feb 23 05:01:04 localhost ovn_controller[155966]: 2026-02-23T10:01:04Z|00341|binding|INFO|Setting lport f59f0d43-58b5-4a6b-a75b-838aeb42eb9b up in Southbound Feb 23 05:01:04 localhost nova_compute[280321]: 2026-02-23 10:01:04.895 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:04 localhost journal[229268]: ethtool ioctl error on tapf59f0d43-58: No such device Feb 23 05:01:04 localhost nova_compute[280321]: 2026-02-23 10:01:04.897 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:04 localhost journal[229268]: ethtool ioctl error on tapf59f0d43-58: No such device Feb 23 05:01:04 localhost journal[229268]: ethtool ioctl error on tapf59f0d43-58: No such device Feb 23 05:01:04 localhost nova_compute[280321]: 2026-02-23 10:01:04.915 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:04 localhost nova_compute[280321]: 2026-02-23 10:01:04.940 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_10:01:05 Feb 23 05:01:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 05:01:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap Feb 23 05:01:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['manila_metadata', 'manila_data', 'volumes', 'backups', 'images', '.mgr', 'vms'] Feb 23 05:01:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes Feb 23 05:01:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:01:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:01:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:01:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:01:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 05:01:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:01:05 localhost neutron_sriov_agent[256355]: 2026-02-23 10:01:05.237 2 INFO neutron.agent.securitygroups_rpc [None req-03b9fc8c-f7be-48ab-90b7-4895d3ad0871 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:01:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v420: 177 pgs: 177 active+clean; 239 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 125 KiB/s rd, 3.4 MiB/s wr, 182 op/s Feb 23 05:01:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 05:01:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:01:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 23 05:01:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:01:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32) Feb 23 05:01:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:01:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002969855062348781 of space, bias 1.0, pg target 0.5929810607823066 quantized to 32 (current 32) Feb 23 05:01:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:01:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8570103846780196 quantized to 32 (current 32) Feb 23 05:01:05 localhost 
ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:01:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.416259538432906e-05 quantized to 32 (current 32) Feb 23 05:01:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:01:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00010832519076865812 quantized to 32 (current 32) Feb 23 05:01:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:01:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.598367997394379e-05 of space, bias 4.0, pg target 0.03654169768596067 quantized to 16 (current 16) Feb 23 05:01:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 05:01:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 05:01:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 05:01:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 05:01:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 05:01:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 05:01:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 05:01:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 05:01:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 05:01:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, 
start_after= Feb 23 05:01:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:01:05 localhost podman[321102]: Feb 23 05:01:05 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eaea33d2-5529-46ec-832a-9d7cb7e5b233", "snap_name": "7b0236fc-1095-4ed0-a7af-e81e5f884e6a_d3ad30ba-4758-4125-8037-7dc5cb3e89aa", "force": true, "format": "json"}]: dispatch Feb 23 05:01:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7b0236fc-1095-4ed0-a7af-e81e5f884e6a_d3ad30ba-4758-4125-8037-7dc5cb3e89aa, sub_name:eaea33d2-5529-46ec-832a-9d7cb7e5b233, vol_name:cephfs) < "" Feb 23 05:01:05 localhost podman[321102]: 2026-02-23 10:01:05.745770906 +0000 UTC m=+0.088926267 container create fed117e8651e3508b5eca8b40bb0208b5b78361f4adb66710a676722b8221359 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e6186cf-b9fd-470d-bf12-9d317641eb20, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:01:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 05:01:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 05:01:05 localhost systemd[1]: Started libpod-conmon-fed117e8651e3508b5eca8b40bb0208b5b78361f4adb66710a676722b8221359.scope. Feb 23 05:01:05 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eaea33d2-5529-46ec-832a-9d7cb7e5b233/.meta.tmp' Feb 23 05:01:05 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eaea33d2-5529-46ec-832a-9d7cb7e5b233/.meta.tmp' to config b'/volumes/_nogroup/eaea33d2-5529-46ec-832a-9d7cb7e5b233/.meta' Feb 23 05:01:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7b0236fc-1095-4ed0-a7af-e81e5f884e6a_d3ad30ba-4758-4125-8037-7dc5cb3e89aa, sub_name:eaea33d2-5529-46ec-832a-9d7cb7e5b233, vol_name:cephfs) < "" Feb 23 05:01:05 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eaea33d2-5529-46ec-832a-9d7cb7e5b233", "snap_name": "7b0236fc-1095-4ed0-a7af-e81e5f884e6a", "force": true, "format": "json"}]: dispatch Feb 23 05:01:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7b0236fc-1095-4ed0-a7af-e81e5f884e6a, sub_name:eaea33d2-5529-46ec-832a-9d7cb7e5b233, vol_name:cephfs) < "" Feb 23 05:01:05 localhost podman[321102]: 2026-02-23 10:01:05.701399481 +0000 UTC m=+0.044554882 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:01:05 localhost systemd[1]: Started libcrun container. 
Feb 23 05:01:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac1a6112974a145d8b8f64c794aac9d03dd462a4bba3a74f4fa25dcf2783300/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:01:05 localhost podman[321102]: 2026-02-23 10:01:05.821007853 +0000 UTC m=+0.164163254 container init fed117e8651e3508b5eca8b40bb0208b5b78361f4adb66710a676722b8221359 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e6186cf-b9fd-470d-bf12-9d317641eb20, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0) Feb 23 05:01:05 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eaea33d2-5529-46ec-832a-9d7cb7e5b233/.meta.tmp' Feb 23 05:01:05 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eaea33d2-5529-46ec-832a-9d7cb7e5b233/.meta.tmp' to config b'/volumes/_nogroup/eaea33d2-5529-46ec-832a-9d7cb7e5b233/.meta' Feb 23 05:01:05 localhost podman[321102]: 2026-02-23 10:01:05.835143465 +0000 UTC m=+0.178298786 container start fed117e8651e3508b5eca8b40bb0208b5b78361f4adb66710a676722b8221359 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e6186cf-b9fd-470d-bf12-9d317641eb20, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:01:05 localhost neutron_sriov_agent[256355]: 2026-02-23 10:01:05.839 2 INFO neutron.agent.securitygroups_rpc [None req-33f0d874-7a76-4789-bd46-4efe27d23136 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:01:05 localhost dnsmasq[321140]: started, version 2.85 cachesize 150 Feb 23 05:01:05 localhost dnsmasq[321140]: DNS service limited to local subnets Feb 23 05:01:05 localhost dnsmasq[321140]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:01:05 localhost dnsmasq[321140]: warning: no upstream servers configured Feb 23 05:01:05 localhost dnsmasq-dhcp[321140]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 23 05:01:05 localhost dnsmasq[321140]: read /var/lib/neutron/dhcp/5e6186cf-b9fd-470d-bf12-9d317641eb20/addn_hosts - 0 addresses Feb 23 05:01:05 localhost dnsmasq-dhcp[321140]: read /var/lib/neutron/dhcp/5e6186cf-b9fd-470d-bf12-9d317641eb20/host Feb 23 05:01:05 localhost dnsmasq-dhcp[321140]: read /var/lib/neutron/dhcp/5e6186cf-b9fd-470d-bf12-9d317641eb20/opts Feb 23 05:01:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7b0236fc-1095-4ed0-a7af-e81e5f884e6a, sub_name:eaea33d2-5529-46ec-832a-9d7cb7e5b233, vol_name:cephfs) < "" Feb 23 05:01:05 localhost podman[321115]: 2026-02-23 10:01:05.88539653 +0000 UTC m=+0.106832914 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, 
config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 05:01:05 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:05.887 263679 INFO neutron.agent.dhcp.agent [None req-c900068f-031e-4bea-9e93-7718c212a97e - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:01:04Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=10cd06a9-7168-4bda-92e1-bd4b6152fc5f, ip_allocation=immediate, mac_address=fa:16:3e:bb:af:38, name=tempest-PortsIpV6TestJSON-12398363, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:01:03Z, description=, dns_domain=, 
id=5e6186cf-b9fd-470d-bf12-9d317641eb20, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-431679212, port_security_enabled=True, project_id=90343b3c0ce240adab2c21e5c92b6952, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=417, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2804, status=ACTIVE, subnets=['6669f945-bea0-401f-a9fe-5e78e73dd6b6'], tags=[], tenant_id=90343b3c0ce240adab2c21e5c92b6952, updated_at=2026-02-23T10:01:03Z, vlan_transparent=None, network_id=5e6186cf-b9fd-470d-bf12-9d317641eb20, port_security_enabled=True, project_id=90343b3c0ce240adab2c21e5c92b6952, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9a671aa5-4d76-4c1e-8de2-506f29ad907b'], standard_attr_id=2809, status=DOWN, tags=[], tenant_id=90343b3c0ce240adab2c21e5c92b6952, updated_at=2026-02-23T10:01:04Z on network 5e6186cf-b9fd-470d-bf12-9d317641eb20#033[00m Feb 23 05:01:05 localhost podman[321115]: 2026-02-23 10:01:05.923972068 +0000 UTC m=+0.145408402 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', 
'--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 05:01:05 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 05:01:05 localhost podman[321116]: 2026-02-23 10:01:05.969424196 +0000 UTC m=+0.189431726 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vcs-type=git, maintainer=Red Hat, Inc., release=1770267347, vendor=Red Hat, Inc., distribution-scope=public, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, version=9.7, io.openshift.expose-services=, architecture=x86_64) Feb 23 05:01:05 localhost podman[321116]: 2026-02-23 10:01:05.98069877 +0000 UTC m=+0.200706260 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, config_id=openstack_network_exporter, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2026-02-05T04:57:10Z, release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9) Feb 23 05:01:05 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 05:01:05 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:05.998 263679 INFO neutron.agent.dhcp.agent [None req-2a97ab3f-71d8-4449-a129-53cf0ebba843 - - - - - -] DHCP configuration for ports {'612f7ad4-3d88-4996-8277-e800a72303e4'} is completed#033[00m Feb 23 05:01:06 localhost dnsmasq[321140]: read /var/lib/neutron/dhcp/5e6186cf-b9fd-470d-bf12-9d317641eb20/addn_hosts - 1 addresses Feb 23 05:01:06 localhost dnsmasq-dhcp[321140]: read /var/lib/neutron/dhcp/5e6186cf-b9fd-470d-bf12-9d317641eb20/host Feb 23 05:01:06 localhost dnsmasq-dhcp[321140]: read /var/lib/neutron/dhcp/5e6186cf-b9fd-470d-bf12-9d317641eb20/opts Feb 23 05:01:06 localhost podman[321176]: 2026-02-23 10:01:06.031182031 +0000 UTC m=+0.046996876 container kill fed117e8651e3508b5eca8b40bb0208b5b78361f4adb66710a676722b8221359 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e6186cf-b9fd-470d-bf12-9d317641eb20, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:01:06 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:06.213 263679 INFO neutron.agent.dhcp.agent [None req-c900068f-031e-4bea-9e93-7718c212a97e - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:01:04Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=67de0b91-aac4-4403-a89f-1fedf1e17047, ip_allocation=immediate, mac_address=fa:16:3e:40:10:76, name=tempest-PortsIpV6TestJSON-484918517, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:01:03Z, description=, dns_domain=, id=5e6186cf-b9fd-470d-bf12-9d317641eb20, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-431679212, port_security_enabled=True, project_id=90343b3c0ce240adab2c21e5c92b6952, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=417, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2804, status=ACTIVE, subnets=['6669f945-bea0-401f-a9fe-5e78e73dd6b6'], tags=[], tenant_id=90343b3c0ce240adab2c21e5c92b6952, updated_at=2026-02-23T10:01:03Z, vlan_transparent=None, network_id=5e6186cf-b9fd-470d-bf12-9d317641eb20, port_security_enabled=True, project_id=90343b3c0ce240adab2c21e5c92b6952, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9a671aa5-4d76-4c1e-8de2-506f29ad907b'], standard_attr_id=2810, status=DOWN, tags=[], tenant_id=90343b3c0ce240adab2c21e5c92b6952, 
updated_at=2026-02-23T10:01:04Z on network 5e6186cf-b9fd-470d-bf12-9d317641eb20#033[00m Feb 23 05:01:06 localhost neutron_sriov_agent[256355]: 2026-02-23 10:01:06.256 2 INFO neutron.agent.securitygroups_rpc [None req-57a0e794-cdb7-43ae-ad96-b51298fae526 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:01:06 localhost dnsmasq[321140]: read /var/lib/neutron/dhcp/5e6186cf-b9fd-470d-bf12-9d317641eb20/addn_hosts - 2 addresses Feb 23 05:01:06 localhost dnsmasq-dhcp[321140]: read /var/lib/neutron/dhcp/5e6186cf-b9fd-470d-bf12-9d317641eb20/host Feb 23 05:01:06 localhost dnsmasq-dhcp[321140]: read /var/lib/neutron/dhcp/5e6186cf-b9fd-470d-bf12-9d317641eb20/opts Feb 23 05:01:06 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:06.405 263679 INFO neutron.agent.dhcp.agent [None req-c310d637-cdc8-4c35-b1f5-25d97667a314 - - - - - -] DHCP configuration for ports {'10cd06a9-7168-4bda-92e1-bd4b6152fc5f'} is completed#033[00m Feb 23 05:01:06 localhost podman[321216]: 2026-02-23 10:01:06.406179733 +0000 UTC m=+0.058394265 container kill fed117e8651e3508b5eca8b40bb0208b5b78361f4adb66710a676722b8221359 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e6186cf-b9fd-470d-bf12-9d317641eb20, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 05:01:06 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4", "size": 1073741824, "namespace_isolated": true, 
"mode": "0755", "format": "json"}]: dispatch Feb 23 05:01:06 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, vol_name:cephfs) < "" Feb 23 05:01:06 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:06.620 263679 INFO neutron.agent.dhcp.agent [None req-c7e24faa-decf-4718-a252-73bcbce3e91f - - - - - -] DHCP configuration for ports {'67de0b91-aac4-4403-a89f-1fedf1e17047'} is completed#033[00m Feb 23 05:01:06 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4/.meta.tmp' Feb 23 05:01:06 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4/.meta.tmp' to config b'/volumes/_nogroup/ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4/.meta' Feb 23 05:01:06 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, vol_name:cephfs) < "" Feb 23 05:01:06 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4", "format": "json"}]: dispatch Feb 23 05:01:06 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, vol_name:cephfs) < "" Feb 23 05:01:06 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, 
vol_name:cephfs) < "" Feb 23 05:01:06 localhost dnsmasq[321140]: read /var/lib/neutron/dhcp/5e6186cf-b9fd-470d-bf12-9d317641eb20/addn_hosts - 1 addresses Feb 23 05:01:06 localhost dnsmasq-dhcp[321140]: read /var/lib/neutron/dhcp/5e6186cf-b9fd-470d-bf12-9d317641eb20/host Feb 23 05:01:06 localhost dnsmasq-dhcp[321140]: read /var/lib/neutron/dhcp/5e6186cf-b9fd-470d-bf12-9d317641eb20/opts Feb 23 05:01:06 localhost podman[321254]: 2026-02-23 10:01:06.719680856 +0000 UTC m=+0.055311780 container kill fed117e8651e3508b5eca8b40bb0208b5b78361f4adb66710a676722b8221359 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e6186cf-b9fd-470d-bf12-9d317641eb20, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0) Feb 23 05:01:06 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:06.996 161842 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 07759bda-913d-46f2-8a67-9e5004d02e86 with type ""#033[00m Feb 23 05:01:06 localhost ovn_controller[155966]: 2026-02-23T10:01:06Z|00342|binding|INFO|Removing iface tapf59f0d43-58 ovn-installed in OVS Feb 23 05:01:07 localhost ovn_controller[155966]: 2026-02-23T10:01:06Z|00343|binding|INFO|Removing lport f59f0d43-58b5-4a6b-a75b-838aeb42eb9b ovn-installed in OVS Feb 23 05:01:07 localhost nova_compute[280321]: 2026-02-23 10:01:06.998 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:07 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:06.998 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', 
conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-5e6186cf-b9fd-470d-bf12-9d317641eb20', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e6186cf-b9fd-470d-bf12-9d317641eb20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90343b3c0ce240adab2c21e5c92b6952', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f744fad7-fba1-42c5-8ee5-3063b421eafb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f59f0d43-58b5-4a6b-a75b-838aeb42eb9b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:01:07 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:07.000 161842 INFO neutron.agent.ovn.metadata.agent [-] Port f59f0d43-58b5-4a6b-a75b-838aeb42eb9b in datapath 5e6186cf-b9fd-470d-bf12-9d317641eb20 unbound from our chassis#033[00m Feb 23 05:01:07 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:07.002 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5e6186cf-b9fd-470d-bf12-9d317641eb20 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:01:07 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:07.003 306186 DEBUG oslo.privsep.daemon 
[-] privsep: reply[83a3f02e-8f55-4c7f-88ec-7b7d2b793bc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:01:07 localhost nova_compute[280321]: 2026-02-23 10:01:07.009 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:07 localhost systemd[1]: tmp-crun.hynCPt.mount: Deactivated successfully. Feb 23 05:01:07 localhost dnsmasq[321140]: exiting on receipt of SIGTERM Feb 23 05:01:07 localhost systemd[1]: libpod-fed117e8651e3508b5eca8b40bb0208b5b78361f4adb66710a676722b8221359.scope: Deactivated successfully. Feb 23 05:01:07 localhost podman[321292]: 2026-02-23 10:01:07.106528759 +0000 UTC m=+0.071668300 container kill fed117e8651e3508b5eca8b40bb0208b5b78361f4adb66710a676722b8221359 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e6186cf-b9fd-470d-bf12-9d317641eb20, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 05:01:07 localhost podman[321310]: 2026-02-23 10:01:07.182260321 +0000 UTC m=+0.051193505 container died fed117e8651e3508b5eca8b40bb0208b5b78361f4adb66710a676722b8221359 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e6186cf-b9fd-470d-bf12-9d317641eb20, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS) Feb 23 05:01:07 
localhost ceph-mgr[285904]: [devicehealth INFO root] Check health Feb 23 05:01:07 localhost podman[321310]: 2026-02-23 10:01:07.233622379 +0000 UTC m=+0.102555523 container remove fed117e8651e3508b5eca8b40bb0208b5b78361f4adb66710a676722b8221359 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e6186cf-b9fd-470d-bf12-9d317641eb20, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS) Feb 23 05:01:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v421: 177 pgs: 177 active+clean; 192 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 132 KiB/s rd, 2.7 MiB/s wr, 193 op/s Feb 23 05:01:07 localhost kernel: device tapf59f0d43-58 left promiscuous mode Feb 23 05:01:07 localhost systemd[1]: libpod-conmon-fed117e8651e3508b5eca8b40bb0208b5b78361f4adb66710a676722b8221359.scope: Deactivated successfully. 
Feb 23 05:01:07 localhost nova_compute[280321]: 2026-02-23 10:01:07.278 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:07 localhost nova_compute[280321]: 2026-02-23 10:01:07.295 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:07 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:07.323 263679 INFO neutron.agent.dhcp.agent [None req-d179c09f-3cbf-4886-878a-249eabc48e0d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:01:07 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:07.323 263679 INFO neutron.agent.dhcp.agent [None req-d179c09f-3cbf-4886-878a-249eabc48e0d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:01:07 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:07.323 263679 INFO neutron.agent.dhcp.agent [None req-d179c09f-3cbf-4886-878a-249eabc48e0d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:01:07 localhost nova_compute[280321]: 2026-02-23 10:01:07.389 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:07 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e204 e204: 6 total, 6 up, 6 in Feb 23 05:01:07 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "5dbe1f82-6e63-4670-abcb-d97d24ea7f3d", "format": "json"}]: dispatch Feb 23 05:01:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, 
snap_name:5dbe1f82-6e63-4670-abcb-d97d24ea7f3d, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5dbe1f82-6e63-4670-abcb-d97d24ea7f3d, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:07 localhost systemd[1]: var-lib-containers-storage-overlay-2ac1a6112974a145d8b8f64c794aac9d03dd462a4bba3a74f4fa25dcf2783300-merged.mount: Deactivated successfully. Feb 23 05:01:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fed117e8651e3508b5eca8b40bb0208b5b78361f4adb66710a676722b8221359-userdata-shm.mount: Deactivated successfully. Feb 23 05:01:07 localhost systemd[1]: run-netns-qdhcp\x2d5e6186cf\x2db9fd\x2d470d\x2dbf12\x2d9d317641eb20.mount: Deactivated successfully. Feb 23 05:01:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e205 e205: 6 total, 6 up, 6 in Feb 23 05:01:08 localhost nova_compute[280321]: 2026-02-23 10:01:08.787 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:09 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "eaea33d2-5529-46ec-832a-9d7cb7e5b233", "format": "json"}]: dispatch Feb 23 05:01:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:eaea33d2-5529-46ec-832a-9d7cb7e5b233, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:01:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:eaea33d2-5529-46ec-832a-9d7cb7e5b233, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:01:09 localhost 
ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:01:09.036+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eaea33d2-5529-46ec-832a-9d7cb7e5b233' of type subvolume Feb 23 05:01:09 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eaea33d2-5529-46ec-832a-9d7cb7e5b233' of type subvolume Feb 23 05:01:09 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "eaea33d2-5529-46ec-832a-9d7cb7e5b233", "force": true, "format": "json"}]: dispatch Feb 23 05:01:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eaea33d2-5529-46ec-832a-9d7cb7e5b233, vol_name:cephfs) < "" Feb 23 05:01:09 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/eaea33d2-5529-46ec-832a-9d7cb7e5b233'' moved to trashcan Feb 23 05:01:09 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:01:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eaea33d2-5529-46ec-832a-9d7cb7e5b233, vol_name:cephfs) < "" Feb 23 05:01:09 localhost nova_compute[280321]: 2026-02-23 10:01:09.206 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:09 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:09.222 263679 INFO neutron.agent.linux.ip_lib [None req-6e569738-ac60-44d2-8a54-dad71049f41a - - - - - -] Device tapdf29d98e-ca cannot be used as it has no MAC address#033[00m Feb 23 05:01:09 
localhost nova_compute[280321]: 2026-02-23 10:01:09.245 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:09 localhost neutron_sriov_agent[256355]: 2026-02-23 10:01:09.250 2 INFO neutron.agent.securitygroups_rpc [None req-8f58242c-3b5b-4ffa-9190-0f405a707baa 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:01:09 localhost kernel: device tapdf29d98e-ca entered promiscuous mode Feb 23 05:01:09 localhost nova_compute[280321]: 2026-02-23 10:01:09.253 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:09 localhost ovn_controller[155966]: 2026-02-23T10:01:09Z|00344|binding|INFO|Claiming lport df29d98e-caea-4f96-b94e-65050e319049 for this chassis. Feb 23 05:01:09 localhost ovn_controller[155966]: 2026-02-23T10:01:09Z|00345|binding|INFO|df29d98e-caea-4f96-b94e-65050e319049: Claiming unknown Feb 23 05:01:09 localhost NetworkManager[5987]: [1771840869.2570] manager: (tapdf29d98e-ca): new Generic device (/org/freedesktop/NetworkManager/Devices/62) Feb 23 05:01:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v424: 177 pgs: 177 active+clean; 192 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 14 KiB/s wr, 58 op/s Feb 23 05:01:09 localhost systemd-udevd[321341]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 05:01:09 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:09.267 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-36f531ba-c261-4e51-a8c0-81e9e0d15498', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36f531ba-c261-4e51-a8c0-81e9e0d15498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90343b3c0ce240adab2c21e5c92b6952', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb448636-e897-455b-831b-7f5fad761430, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=df29d98e-caea-4f96-b94e-65050e319049) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:01:09 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:09.270 161842 INFO neutron.agent.ovn.metadata.agent [-] Port df29d98e-caea-4f96-b94e-65050e319049 in datapath 36f531ba-c261-4e51-a8c0-81e9e0d15498 bound to our chassis#033[00m Feb 23 05:01:09 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:09.273 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 36f531ba-c261-4e51-a8c0-81e9e0d15498 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:01:09 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:09.274 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[118b9ca2-f2b6-4a95-8984-052fae42890f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:01:09 localhost journal[229268]: ethtool ioctl error on tapdf29d98e-ca: No such device Feb 23 05:01:09 localhost nova_compute[280321]: 2026-02-23 10:01:09.305 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:09 localhost journal[229268]: ethtool ioctl error on tapdf29d98e-ca: No such device Feb 23 05:01:09 localhost ovn_controller[155966]: 2026-02-23T10:01:09Z|00346|binding|INFO|Setting lport df29d98e-caea-4f96-b94e-65050e319049 ovn-installed in OVS Feb 23 05:01:09 localhost ovn_controller[155966]: 2026-02-23T10:01:09Z|00347|binding|INFO|Setting lport df29d98e-caea-4f96-b94e-65050e319049 up in Southbound Feb 23 05:01:09 localhost nova_compute[280321]: 2026-02-23 10:01:09.310 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:09 localhost journal[229268]: ethtool ioctl error on tapdf29d98e-ca: No such device Feb 23 05:01:09 localhost journal[229268]: ethtool ioctl error on tapdf29d98e-ca: No such device Feb 23 05:01:09 localhost journal[229268]: ethtool ioctl error on tapdf29d98e-ca: No such device Feb 23 05:01:09 localhost journal[229268]: ethtool ioctl error on tapdf29d98e-ca: No such device Feb 23 05:01:09 localhost journal[229268]: ethtool ioctl error on tapdf29d98e-ca: No such device Feb 23 05:01:09 localhost journal[229268]: ethtool ioctl error on tapdf29d98e-ca: No such device Feb 23 05:01:09 localhost nova_compute[280321]: 2026-02-23 10:01:09.348 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:09 localhost nova_compute[280321]: 2026-02-23 10:01:09.380 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:09 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7a63f65b-263e-4f0a-be43-9aace02f6e45", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:01:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a63f65b-263e-4f0a-be43-9aace02f6e45, vol_name:cephfs) < "" Feb 23 05:01:09 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7a63f65b-263e-4f0a-be43-9aace02f6e45/.meta.tmp' Feb 23 05:01:09 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7a63f65b-263e-4f0a-be43-9aace02f6e45/.meta.tmp' to config b'/volumes/_nogroup/7a63f65b-263e-4f0a-be43-9aace02f6e45/.meta' Feb 23 05:01:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a63f65b-263e-4f0a-be43-9aace02f6e45, vol_name:cephfs) < "" Feb 23 05:01:09 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7a63f65b-263e-4f0a-be43-9aace02f6e45", "format": "json"}]: dispatch Feb 23 05:01:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a63f65b-263e-4f0a-be43-9aace02f6e45, vol_name:cephfs) < "" Feb 23 05:01:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a63f65b-263e-4f0a-be43-9aace02f6e45, vol_name:cephfs) < "" Feb 23 05:01:10 localhost podman[321412]: Feb 23 05:01:10 localhost podman[321412]: 2026-02-23 10:01:10.224651005 +0000 UTC m=+0.084705857 container create e4234a686751e02531a1f73f0e9d122ea876b4862bd527bd77776ad7f3d8407e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36f531ba-c261-4e51-a8c0-81e9e0d15498, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:01:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 05:01:10 localhost systemd[1]: Started libpod-conmon-e4234a686751e02531a1f73f0e9d122ea876b4862bd527bd77776ad7f3d8407e.scope. Feb 23 05:01:10 localhost systemd[1]: Started libcrun container. 
Feb 23 05:01:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6047edd98818e01dd79dd6f51d74417e53be7b0f392d269f3b777d82492d3a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:01:10 localhost podman[321412]: 2026-02-23 10:01:10.185668345 +0000 UTC m=+0.045723197 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:01:10 localhost podman[321412]: 2026-02-23 10:01:10.295071465 +0000 UTC m=+0.155126307 container init e4234a686751e02531a1f73f0e9d122ea876b4862bd527bd77776ad7f3d8407e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36f531ba-c261-4e51-a8c0-81e9e0d15498, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, tcib_managed=true, io.buildah.version=1.43.0) Feb 23 05:01:10 localhost podman[321412]: 2026-02-23 10:01:10.312640082 +0000 UTC m=+0.172694924 container start e4234a686751e02531a1f73f0e9d122ea876b4862bd527bd77776ad7f3d8407e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36f531ba-c261-4e51-a8c0-81e9e0d15498, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 23 05:01:10 localhost dnsmasq[321442]: started, version 2.85 cachesize 150 Feb 23 05:01:10 localhost dnsmasq[321442]: DNS service limited to local subnets Feb 23 05:01:10 localhost dnsmasq[321442]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:01:10 localhost dnsmasq[321442]: warning: no upstream servers configured Feb 23 05:01:10 localhost dnsmasq-dhcp[321442]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 23 05:01:10 localhost dnsmasq[321442]: read /var/lib/neutron/dhcp/36f531ba-c261-4e51-a8c0-81e9e0d15498/addn_hosts - 0 addresses Feb 23 05:01:10 localhost dnsmasq-dhcp[321442]: read /var/lib/neutron/dhcp/36f531ba-c261-4e51-a8c0-81e9e0d15498/host Feb 23 05:01:10 localhost dnsmasq-dhcp[321442]: read /var/lib/neutron/dhcp/36f531ba-c261-4e51-a8c0-81e9e0d15498/opts Feb 23 05:01:10 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:10.360 263679 INFO neutron.agent.dhcp.agent [None req-6e569738-ac60-44d2-8a54-dad71049f41a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:01:08Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=50ba0437-1c70-4e83-b0ef-13dbca64aad6, ip_allocation=immediate, mac_address=fa:16:3e:1e:33:f1, name=tempest-PortsIpV6TestJSON-1763226540, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:01:07Z, description=, dns_domain=, id=36f531ba-c261-4e51-a8c0-81e9e0d15498, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-154702688, port_security_enabled=True, project_id=90343b3c0ce240adab2c21e5c92b6952, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1525, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2831, status=ACTIVE, subnets=['fe82a54d-6783-4fde-bed9-0af003a12ca7'], tags=[], 
tenant_id=90343b3c0ce240adab2c21e5c92b6952, updated_at=2026-02-23T10:01:08Z, vlan_transparent=None, network_id=36f531ba-c261-4e51-a8c0-81e9e0d15498, port_security_enabled=True, project_id=90343b3c0ce240adab2c21e5c92b6952, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9a671aa5-4d76-4c1e-8de2-506f29ad907b'], standard_attr_id=2852, status=DOWN, tags=[], tenant_id=90343b3c0ce240adab2c21e5c92b6952, updated_at=2026-02-23T10:01:09Z on network 36f531ba-c261-4e51-a8c0-81e9e0d15498#033[00m Feb 23 05:01:10 localhost podman[321425]: 2026-02-23 10:01:10.384393903 +0000 UTC m=+0.125284187 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator 
team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:01:10 localhost podman[321425]: 2026-02-23 10:01:10.45404655 +0000 UTC m=+0.194936844 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 23 05:01:10 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 05:01:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:01:10 localhost dnsmasq[321442]: read /var/lib/neutron/dhcp/36f531ba-c261-4e51-a8c0-81e9e0d15498/addn_hosts - 1 addresses Feb 23 05:01:10 localhost dnsmasq-dhcp[321442]: read /var/lib/neutron/dhcp/36f531ba-c261-4e51-a8c0-81e9e0d15498/host Feb 23 05:01:10 localhost podman[321473]: 2026-02-23 10:01:10.51265586 +0000 UTC m=+0.041742426 container kill e4234a686751e02531a1f73f0e9d122ea876b4862bd527bd77776ad7f3d8407e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36f531ba-c261-4e51-a8c0-81e9e0d15498, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216) Feb 23 05:01:10 localhost dnsmasq-dhcp[321442]: read /var/lib/neutron/dhcp/36f531ba-c261-4e51-a8c0-81e9e0d15498/opts Feb 23 05:01:11 localhost systemd[1]: tmp-crun.lJJzaT.mount: Deactivated successfully. 
Feb 23 05:01:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v425: 177 pgs: 177 active+clean; 192 MiB data, 955 MiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 31 KiB/s wr, 53 op/s Feb 23 05:01:12 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e206 e206: 6 total, 6 up, 6 in Feb 23 05:01:12 localhost podman[241086]: time="2026-02-23T10:01:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 05:01:12 localhost podman[241086]: @ - - [23/Feb/2026:10:01:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155887 "" "Go-http-client/1.1" Feb 23 05:01:12 localhost podman[241086]: @ - - [23/Feb/2026:10:01:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18299 "" "Go-http-client/1.1" Feb 23 05:01:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v427: 177 pgs: 177 active+clean; 192 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 25 KiB/s wr, 5 op/s Feb 23 05:01:13 localhost nova_compute[280321]: 2026-02-23 10:01:13.788 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:14 localhost nova_compute[280321]: 2026-02-23 10:01:14.209 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v428: 177 pgs: 177 active+clean; 192 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 521 B/s rd, 19 KiB/s wr, 4 op/s Feb 23 05:01:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:01:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v429: 177 pgs: 177 active+clean; 193 
MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 463 B/s rd, 24 KiB/s wr, 4 op/s Feb 23 05:01:17 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e207 e207: 6 total, 6 up, 6 in Feb 23 05:01:18 localhost nova_compute[280321]: 2026-02-23 10:01:18.791 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 05:01:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 05:01:19 localhost systemd[1]: tmp-crun.JKV1mb.mount: Deactivated successfully. Feb 23 05:01:19 localhost podman[321495]: 2026-02-23 10:01:19.025226265 +0000 UTC m=+0.094359692 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible) Feb 23 05:01:19 localhost podman[321495]: 2026-02-23 10:01:19.037851591 +0000 UTC m=+0.106984988 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute) Feb 23 05:01:19 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. 
Feb 23 05:01:19 localhost podman[321494]: 2026-02-23 10:01:19.000025865 +0000 UTC m=+0.076620870 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS) Feb 23 05:01:19 localhost 
podman[321494]: 2026-02-23 10:01:19.083921047 +0000 UTC m=+0.160516062 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.43.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible) Feb 23 05:01:19 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 05:01:19 localhost nova_compute[280321]: 2026-02-23 10:01:19.210 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v431: 177 pgs: 177 active+clean; 193 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 8.0 KiB/s wr, 1 op/s Feb 23 05:01:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:01:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v432: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 8.4 KiB/s wr, 1 op/s Feb 23 05:01:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v433: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 7.4 KiB/s wr, 1 op/s Feb 23 05:01:23 localhost nova_compute[280321]: 2026-02-23 10:01:23.816 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:23 localhost nova_compute[280321]: 2026-02-23 10:01:23.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:01:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 05:01:24 localhost podman[321529]: 2026-02-23 10:01:24.013487918 +0000 UTC m=+0.085955136 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 05:01:24 localhost podman[321529]: 2026-02-23 10:01:24.022175284 +0000 UTC m=+0.094642512 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 05:01:24 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 05:01:24 localhost nova_compute[280321]: 2026-02-23 10:01:24.212 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v434: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 7.4 KiB/s wr, 1 op/s Feb 23 05:01:25 localhost podman[321662]: 2026-02-23 10:01:25.285303595 +0000 UTC m=+0.107340338 container exec 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, ceph=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, CEPH_POINT_RELEASE=, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main) Feb 23 05:01:25 localhost podman[321662]: 2026-02-23 10:01:25.374164039 +0000 UTC m=+0.196200762 container exec_died 291e3a40c62886663e21538650b4c83cb67b8246053f5d0c48947f7503d6cebd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-crash-np0005626465, description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, vcs-type=git, version=7, RELEASE=main, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, com.redhat.component=rhceph-container, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, maintainer=Guillaume Abrioux , ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 23 05:01:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:01:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 05:01:25 
localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 05:01:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 05:01:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 05:01:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 05:01:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 05:01:26 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:01:26 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:01:26 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:01:26 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:01:26 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:01:26 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:01:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Feb 23 05:01:26 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 23 05:01:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config rm", 
"who": "osd.4", "name": "osd_memory_target"} v 0) Feb 23 05:01:26 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 23 05:01:26 localhost ceph-mgr[285904]: [cephadm INFO root] Adjusting osd_memory_target on np0005626466.localdomain to 836.6M Feb 23 05:01:26 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005626466.localdomain to 836.6M Feb 23 05:01:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 23 05:01:26 localhost ceph-mgr[285904]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005626466.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 23 05:01:26 localhost ceph-mgr[285904]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005626466.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 23 05:01:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Feb 23 05:01:26 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 23 05:01:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Feb 23 05:01:26 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 23 05:01:26 localhost 
ceph-mgr[285904]: [cephadm INFO root] Adjusting osd_memory_target on np0005626463.localdomain to 836.6M Feb 23 05:01:26 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005626463.localdomain to 836.6M Feb 23 05:01:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 23 05:01:26 localhost ceph-mgr[285904]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005626463.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 05:01:26 localhost ceph-mgr[285904]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005626463.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 05:01:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Feb 23 05:01:26 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 23 05:01:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Feb 23 05:01:26 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 23 05:01:26 localhost ceph-mgr[285904]: [cephadm INFO root] Adjusting osd_memory_target on np0005626465.localdomain to 836.6M Feb 23 05:01:26 localhost ceph-mgr[285904]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005626465.localdomain to 836.6M Feb 23 05:01:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 
handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 23 05:01:26 localhost ceph-mgr[285904]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005626465.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 05:01:26 localhost ceph-mgr[285904]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005626465.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 05:01:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 05:01:26 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 05:01:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 05:01:26 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 05:01:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 05:01:26 localhost nova_compute[280321]: 2026-02-23 10:01:26.888 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:01:26 localhost nova_compute[280321]: 2026-02-23 10:01:26.890 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:01:26 localhost nova_compute[280321]: 2026-02-23 10:01:26.891 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 05:01:26 localhost nova_compute[280321]: 2026-02-23 10:01:26.891 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 05:01:26 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev 809fffc8-02a6-4fd9-864f-939f029f9983 (Updating node-proxy deployment (+3 -> 3)) Feb 23 05:01:26 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev 809fffc8-02a6-4fd9-864f-939f029f9983 (Updating node-proxy deployment (+3 -> 3)) Feb 23 05:01:26 localhost ceph-mgr[285904]: [progress INFO root] Completed event 809fffc8-02a6-4fd9-864f-939f029f9983 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 23 05:01:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 05:01:26 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 05:01:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v435: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 1.8 KiB/s wr, 0 op/s Feb 23 05:01:27 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 23 05:01:27 
localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 23 05:01:27 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 23 05:01:27 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 23 05:01:27 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 23 05:01:27 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 23 05:01:27 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 23 05:01:27 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 23 05:01:27 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 23 05:01:27 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 23 05:01:27 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 23 05:01:27 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' 
cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 23 05:01:27 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 05:01:27 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:01:28 localhost ceph-mon[296755]: Adjusting osd_memory_target on np0005626466.localdomain to 836.6M Feb 23 05:01:28 localhost ceph-mon[296755]: Unable to set osd_memory_target on np0005626466.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 23 05:01:28 localhost ceph-mon[296755]: Adjusting osd_memory_target on np0005626463.localdomain to 836.6M Feb 23 05:01:28 localhost ceph-mon[296755]: Unable to set osd_memory_target on np0005626463.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 05:01:28 localhost ceph-mon[296755]: Adjusting osd_memory_target on np0005626465.localdomain to 836.6M Feb 23 05:01:28 localhost ceph-mon[296755]: Unable to set osd_memory_target on np0005626465.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 23 05:01:28 localhost nova_compute[280321]: 2026-02-23 10:01:28.820 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:29 localhost nova_compute[280321]: 2026-02-23 10:01:29.215 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v436: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 1.5 KiB/s wr, 0 op/s Feb 23 05:01:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 
full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:01:30 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 05:01:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 05:01:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v437: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 1.5 KiB/s wr, 0 op/s Feb 23 05:01:31 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:01:31 localhost openstack_network_exporter[243519]: ERROR 10:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 05:01:31 localhost openstack_network_exporter[243519]: Feb 23 05:01:31 localhost openstack_network_exporter[243519]: ERROR 10:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 05:01:31 localhost openstack_network_exporter[243519]: Feb 23 05:01:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v438: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Feb 23 05:01:33 localhost nova_compute[280321]: 2026-02-23 10:01:33.851 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:34 localhost nova_compute[280321]: 2026-02-23 10:01:34.218 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 05:01:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:01:35 localhost nova_compute[280321]: 2026-02-23 10:01:35.138 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 05:01:35 localhost nova_compute[280321]: 2026-02-23 10:01:35.138 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:01:35 localhost nova_compute[280321]: 2026-02-23 10:01:35.139 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:01:35 localhost nova_compute[280321]: 2026-02-23 10:01:35.139 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:01:35 localhost nova_compute[280321]: 2026-02-23 10:01:35.139 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:01:35 localhost nova_compute[280321]: 2026-02-23 10:01:35.140 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:01:35 localhost nova_compute[280321]: 2026-02-23 10:01:35.140 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 05:01:35 localhost nova_compute[280321]: 2026-02-23 10:01:35.140 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:01:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:01:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:01:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 05:01:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:01:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v439: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Feb 23 05:01:35 localhost nova_compute[280321]: 2026-02-23 10:01:35.385 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:01:35 localhost nova_compute[280321]: 2026-02-23 10:01:35.386 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:01:35 localhost nova_compute[280321]: 2026-02-23 10:01:35.386 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:01:35 localhost nova_compute[280321]: 2026-02-23 10:01:35.387 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 05:01:35 localhost nova_compute[280321]: 2026-02-23 10:01:35.388 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph 
df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 05:01:35 localhost nova_compute[280321]: 2026-02-23 10:01:35.410 280325 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 10.70 sec#033[00m Feb 23 05:01:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:01:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 05:01:35 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4257796391' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 05:01:35 localhost nova_compute[280321]: 2026-02-23 10:01:35.837 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 05:01:36 localhost nova_compute[280321]: 2026-02-23 10:01:36.025 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 05:01:36 localhost nova_compute[280321]: 2026-02-23 10:01:36.026 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=11622MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 05:01:36 localhost nova_compute[280321]: 2026-02-23 10:01:36.026 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:01:36 localhost nova_compute[280321]: 2026-02-23 10:01:36.026 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:01:36 localhost nova_compute[280321]: 2026-02-23 10:01:36.319 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 05:01:36 localhost nova_compute[280321]: 2026-02-23 10:01:36.320 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 05:01:36 localhost nova_compute[280321]: 2026-02-23 10:01:36.376 280325 DEBUG 
oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 05:01:36 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:36.401 263679 INFO neutron.agent.dhcp.agent [None req-656202ba-5c8e-461d-b352-86276f292dbf - - - - - -] DHCP configuration for ports {'5f3ae522-c2bc-4c0d-9809-81b52a8df953'} is completed#033[00m Feb 23 05:01:36 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 05:01:36 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2581822653' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 05:01:36 localhost nova_compute[280321]: 2026-02-23 10:01:36.839 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 05:01:36 localhost nova_compute[280321]: 2026-02-23 10:01:36.846 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 05:01:36 localhost nova_compute[280321]: 2026-02-23 10:01:36.888 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 
'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 05:01:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 05:01:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 05:01:36 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:36.955 263679 INFO neutron.agent.dhcp.agent [None req-8ff4af4f-6126-4107-bee7-f92c753589fd - - - - - -] DHCP configuration for ports {'50ba0437-1c70-4e83-b0ef-13dbca64aad6'} is completed#033[00m Feb 23 05:01:36 localhost nova_compute[280321]: 2026-02-23 10:01:36.962 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 05:01:36 localhost nova_compute[280321]: 2026-02-23 10:01:36.963 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:01:37 localhost podman[321913]: 2026-02-23 10:01:37.031641677 +0000 UTC m=+0.098162819 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, 
release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, architecture=x86_64, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, io.openshift.expose-services=) Feb 23 05:01:37 localhost podman[321912]: 2026-02-23 10:01:37.07856918 +0000 UTC m=+0.149465986 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 05:01:37 localhost podman[321913]: 2026-02-23 10:01:37.098993243 +0000 UTC m=+0.165514385 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1770267347, vcs-type=git, io.openshift.expose-services=, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, distribution-scope=public, architecture=x86_64, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.7, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container) Feb 23 05:01:37 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 05:01:37 localhost podman[321912]: 2026-02-23 10:01:37.116616831 +0000 UTC m=+0.187513587 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 05:01:37 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 05:01:37 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "1be5156e-560a-4ea6-aa3b-098d527fc684", "format": "json"}]: dispatch Feb 23 05:01:37 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1be5156e-560a-4ea6-aa3b-098d527fc684, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:37 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1be5156e-560a-4ea6-aa3b-098d527fc684, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v440: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Feb 23 05:01:37 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7a63f65b-263e-4f0a-be43-9aace02f6e45", "auth_id": "Joe", "tenant_id": "1b9d2e21adaa4adab3e6f69b48abf75a", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:01:37 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:7a63f65b-263e-4f0a-be43-9aace02f6e45, tenant_id:1b9d2e21adaa4adab3e6f69b48abf75a, vol_name:cephfs) < "" Feb 23 05:01:37 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Feb 23 05:01:37 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 
172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Feb 23 05:01:37 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID Joe with tenant 1b9d2e21adaa4adab3e6f69b48abf75a Feb 23 05:01:37 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:37.391 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:01:37 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:37.392 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 05:01:37 localhost nova_compute[280321]: 2026-02-23 10:01:37.433 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:37 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a63f65b-263e-4f0a-be43-9aace02f6e45/c5b2858c-2130-4d42-a52c-3132bbd1bfb7", "osd", "allow rw pool=manila_data namespace=fsvolumens_7a63f65b-263e-4f0a-be43-9aace02f6e45", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:01:37 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 
172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a63f65b-263e-4f0a-be43-9aace02f6e45/c5b2858c-2130-4d42-a52c-3132bbd1bfb7", "osd", "allow rw pool=manila_data namespace=fsvolumens_7a63f65b-263e-4f0a-be43-9aace02f6e45", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:01:37 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:7a63f65b-263e-4f0a-be43-9aace02f6e45, tenant_id:1b9d2e21adaa4adab3e6f69b48abf75a, vol_name:cephfs) < "" Feb 23 05:01:38 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Feb 23 05:01:38 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a63f65b-263e-4f0a-be43-9aace02f6e45/c5b2858c-2130-4d42-a52c-3132bbd1bfb7", "osd", "allow rw pool=manila_data namespace=fsvolumens_7a63f65b-263e-4f0a-be43-9aace02f6e45", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:01:38 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a63f65b-263e-4f0a-be43-9aace02f6e45/c5b2858c-2130-4d42-a52c-3132bbd1bfb7", "osd", "allow rw pool=manila_data namespace=fsvolumens_7a63f65b-263e-4f0a-be43-9aace02f6e45", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:01:38 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/7a63f65b-263e-4f0a-be43-9aace02f6e45/c5b2858c-2130-4d42-a52c-3132bbd1bfb7", "osd", "allow rw pool=manila_data namespace=fsvolumens_7a63f65b-263e-4f0a-be43-9aace02f6e45", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:01:38 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:38.569 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:01:08Z, description=, device_id=964f6151-8c72-4a3f-b832-6fc7586e21b4, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=50ba0437-1c70-4e83-b0ef-13dbca64aad6, ip_allocation=immediate, mac_address=fa:16:3e:1e:33:f1, name=tempest-PortsIpV6TestJSON-1763226540, network_id=36f531ba-c261-4e51-a8c0-81e9e0d15498, port_security_enabled=True, project_id=90343b3c0ce240adab2c21e5c92b6952, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['9a671aa5-4d76-4c1e-8de2-506f29ad907b'], standard_attr_id=2852, status=DOWN, tags=[], tenant_id=90343b3c0ce240adab2c21e5c92b6952, updated_at=2026-02-23T10:01:09Z on network 36f531ba-c261-4e51-a8c0-81e9e0d15498#033[00m Feb 23 05:01:38 localhost podman[321973]: 2026-02-23 10:01:38.755842447 +0000 UTC m=+0.057490066 container kill e4234a686751e02531a1f73f0e9d122ea876b4862bd527bd77776ad7f3d8407e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36f531ba-c261-4e51-a8c0-81e9e0d15498, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS) Feb 23 05:01:38 localhost dnsmasq[321442]: read /var/lib/neutron/dhcp/36f531ba-c261-4e51-a8c0-81e9e0d15498/addn_hosts - 1 addresses Feb 23 05:01:38 localhost dnsmasq-dhcp[321442]: read /var/lib/neutron/dhcp/36f531ba-c261-4e51-a8c0-81e9e0d15498/host Feb 23 05:01:38 localhost dnsmasq-dhcp[321442]: read /var/lib/neutron/dhcp/36f531ba-c261-4e51-a8c0-81e9e0d15498/opts Feb 23 05:01:38 localhost nova_compute[280321]: 2026-02-23 10:01:38.890 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:38 localhost nova_compute[280321]: 2026-02-23 10:01:38.961 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:01:39 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:39.069 263679 INFO neutron.agent.dhcp.agent [None req-6658587d-5a73-4028-b9a4-bd3b8f36aca2 - - - - - -] DHCP configuration for ports {'50ba0437-1c70-4e83-b0ef-13dbca64aad6'} is completed#033[00m Feb 23 05:01:39 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "836ea875-40c0-4d41-ac93-96ac1cabc343", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:01:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:836ea875-40c0-4d41-ac93-96ac1cabc343, vol_name:cephfs) < "" Feb 23 05:01:39 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/836ea875-40c0-4d41-ac93-96ac1cabc343/.meta.tmp' Feb 23 05:01:39 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/836ea875-40c0-4d41-ac93-96ac1cabc343/.meta.tmp' to config b'/volumes/_nogroup/836ea875-40c0-4d41-ac93-96ac1cabc343/.meta' Feb 23 05:01:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:836ea875-40c0-4d41-ac93-96ac1cabc343, vol_name:cephfs) < "" Feb 23 05:01:39 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "836ea875-40c0-4d41-ac93-96ac1cabc343", "format": "json"}]: dispatch Feb 23 05:01:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:836ea875-40c0-4d41-ac93-96ac1cabc343, vol_name:cephfs) < "" Feb 23 05:01:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:836ea875-40c0-4d41-ac93-96ac1cabc343, vol_name:cephfs) < "" Feb 23 05:01:39 localhost nova_compute[280321]: 2026-02-23 10:01:39.220 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v441: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail Feb 23 05:01:39 localhost neutron_sriov_agent[256355]: 2026-02-23 10:01:39.677 2 INFO neutron.agent.securitygroups_rpc [None req-0ebe300c-0924-4a3a-86fe-0f106df51381 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated 
['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:01:39 localhost podman[322010]: 2026-02-23 10:01:39.881878033 +0000 UTC m=+0.055617300 container kill e4234a686751e02531a1f73f0e9d122ea876b4862bd527bd77776ad7f3d8407e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36f531ba-c261-4e51-a8c0-81e9e0d15498, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 05:01:39 localhost dnsmasq[321442]: read /var/lib/neutron/dhcp/36f531ba-c261-4e51-a8c0-81e9e0d15498/addn_hosts - 0 addresses Feb 23 05:01:39 localhost dnsmasq-dhcp[321442]: read /var/lib/neutron/dhcp/36f531ba-c261-4e51-a8c0-81e9e0d15498/host Feb 23 05:01:39 localhost dnsmasq-dhcp[321442]: read /var/lib/neutron/dhcp/36f531ba-c261-4e51-a8c0-81e9e0d15498/opts Feb 23 05:01:40 localhost nova_compute[280321]: 2026-02-23 10:01:40.104 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:40 localhost ovn_controller[155966]: 2026-02-23T10:01:40Z|00348|binding|INFO|Releasing lport df29d98e-caea-4f96-b94e-65050e319049 from this chassis (sb_readonly=0) Feb 23 05:01:40 localhost ovn_controller[155966]: 2026-02-23T10:01:40Z|00349|binding|INFO|Setting lport df29d98e-caea-4f96-b94e-65050e319049 down in Southbound Feb 23 05:01:40 localhost kernel: device tapdf29d98e-ca left promiscuous mode Feb 23 05:01:40 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:40.121 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], 
port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-36f531ba-c261-4e51-a8c0-81e9e0d15498', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36f531ba-c261-4e51-a8c0-81e9e0d15498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90343b3c0ce240adab2c21e5c92b6952', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb448636-e897-455b-831b-7f5fad761430, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=df29d98e-caea-4f96-b94e-65050e319049) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:01:40 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:40.123 161842 INFO neutron.agent.ovn.metadata.agent [-] Port df29d98e-caea-4f96-b94e-65050e319049 in datapath 36f531ba-c261-4e51-a8c0-81e9e0d15498 unbound from our chassis#033[00m Feb 23 05:01:40 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:40.125 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 36f531ba-c261-4e51-a8c0-81e9e0d15498 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:01:40 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:40.126 306186 DEBUG oslo.privsep.daemon [-] privsep: 
reply[9e11bdc3-faca-44e6-8c81-4f718d7de0e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:01:40 localhost nova_compute[280321]: 2026-02-23 10:01:40.140 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:01:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 05:01:40 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c3aedd71-b342-4920-afd2-d5c6fd4776d2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:01:40 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, vol_name:cephfs) < "" Feb 23 05:01:40 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c3aedd71-b342-4920-afd2-d5c6fd4776d2/.meta.tmp' Feb 23 05:01:40 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c3aedd71-b342-4920-afd2-d5c6fd4776d2/.meta.tmp' to config b'/volumes/_nogroup/c3aedd71-b342-4920-afd2-d5c6fd4776d2/.meta' Feb 23 05:01:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, vol_name:cephfs) 
< "" Feb 23 05:01:41 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c3aedd71-b342-4920-afd2-d5c6fd4776d2", "format": "json"}]: dispatch Feb 23 05:01:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, vol_name:cephfs) < "" Feb 23 05:01:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, vol_name:cephfs) < "" Feb 23 05:01:41 localhost podman[322033]: 2026-02-23 10:01:41.015654694 +0000 UTC m=+0.090487994 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 05:01:41 localhost podman[322033]: 2026-02-23 10:01:41.047403784 +0000 UTC m=+0.122237034 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 23 05:01:41 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 05:01:41 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "a687b09a-63b9-4132-9be0-38d64393e4b6", "format": "json"}]: dispatch Feb 23 05:01:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a687b09a-63b9-4132-9be0-38d64393e4b6, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v442: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s wr, 2 op/s Feb 23 05:01:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a687b09a-63b9-4132-9be0-38d64393e4b6, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:41 localhost dnsmasq[321442]: exiting on receipt of SIGTERM Feb 23 05:01:41 localhost podman[322075]: 2026-02-23 10:01:41.281630566 +0000 UTC m=+0.061763847 container kill e4234a686751e02531a1f73f0e9d122ea876b4862bd527bd77776ad7f3d8407e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36f531ba-c261-4e51-a8c0-81e9e0d15498, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:01:41 localhost systemd[1]: libpod-e4234a686751e02531a1f73f0e9d122ea876b4862bd527bd77776ad7f3d8407e.scope: Deactivated successfully. 
Feb 23 05:01:41 localhost podman[322091]: 2026-02-23 10:01:41.357281046 +0000 UTC m=+0.054977460 container died e4234a686751e02531a1f73f0e9d122ea876b4862bd527bd77776ad7f3d8407e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36f531ba-c261-4e51-a8c0-81e9e0d15498, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0) Feb 23 05:01:41 localhost systemd[1]: tmp-crun.rencMS.mount: Deactivated successfully. Feb 23 05:01:41 localhost podman[322091]: 2026-02-23 10:01:41.407744947 +0000 UTC m=+0.105441301 container remove e4234a686751e02531a1f73f0e9d122ea876b4862bd527bd77776ad7f3d8407e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36f531ba-c261-4e51-a8c0-81e9e0d15498, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0) Feb 23 05:01:41 localhost systemd[1]: libpod-conmon-e4234a686751e02531a1f73f0e9d122ea876b4862bd527bd77776ad7f3d8407e.scope: Deactivated successfully. 
Feb 23 05:01:41 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:41.444 263679 INFO neutron.agent.dhcp.agent [None req-72b41619-aa9f-4c5a-9551-35e394b6820f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:01:41 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:41.698 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:01:42 localhost systemd[1]: var-lib-containers-storage-overlay-f6047edd98818e01dd79dd6f51d74417e53be7b0f392d269f3b777d82492d3a2-merged.mount: Deactivated successfully. Feb 23 05:01:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e4234a686751e02531a1f73f0e9d122ea876b4862bd527bd77776ad7f3d8407e-userdata-shm.mount: Deactivated successfully. Feb 23 05:01:42 localhost systemd[1]: run-netns-qdhcp\x2d36f531ba\x2dc261\x2d4e51\x2da8c0\x2d81e9e0d15498.mount: Deactivated successfully. Feb 23 05:01:42 localhost nova_compute[280321]: 2026-02-23 10:01:42.044 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:42 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 05:01:42 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1851732089' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 05:01:42 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 05:01:42 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1851732089' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 05:01:42 localhost sshd[322117]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:01:42 localhost podman[241086]: time="2026-02-23T10:01:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 05:01:42 localhost podman[241086]: @ - - [23/Feb/2026:10:01:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 05:01:42 localhost podman[241086]: @ - - [23/Feb/2026:10:01:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17818 "" "Go-http-client/1.1" Feb 23 05:01:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v443: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s wr, 2 op/s Feb 23 05:01:43 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:43.414 263679 INFO neutron.agent.linux.ip_lib [None req-778e6b6c-8277-457a-911d-a619a6d05f6e - - - - - -] Device tapf70d2521-04 cannot be used as it has no MAC address#033[00m Feb 23 05:01:43 localhost nova_compute[280321]: 2026-02-23 10:01:43.438 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:43 localhost kernel: device tapf70d2521-04 entered promiscuous mode Feb 23 05:01:43 localhost ovn_controller[155966]: 2026-02-23T10:01:43Z|00350|binding|INFO|Claiming lport f70d2521-04eb-4485-a990-5f1804cd1885 for this chassis. 
Feb 23 05:01:43 localhost nova_compute[280321]: 2026-02-23 10:01:43.446 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:43 localhost ovn_controller[155966]: 2026-02-23T10:01:43Z|00351|binding|INFO|f70d2521-04eb-4485-a990-5f1804cd1885: Claiming unknown Feb 23 05:01:43 localhost NetworkManager[5987]: [1771840903.4467] manager: (tapf70d2521-04): new Generic device (/org/freedesktop/NetworkManager/Devices/63) Feb 23 05:01:43 localhost systemd-udevd[322129]: Network interface NamePolicy= disabled on kernel command line. Feb 23 05:01:43 localhost journal[229268]: ethtool ioctl error on tapf70d2521-04: No such device Feb 23 05:01:43 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:43.477 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-86383a33-9c91-439d-8612-e7a4dae227e4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86383a33-9c91-439d-8612-e7a4dae227e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90343b3c0ce240adab2c21e5c92b6952', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c37bea0f-7573-4e21-b1b7-36f33c04103d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], 
logical_port=f70d2521-04eb-4485-a990-5f1804cd1885) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:01:43 localhost journal[229268]: ethtool ioctl error on tapf70d2521-04: No such device Feb 23 05:01:43 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:43.480 161842 INFO neutron.agent.ovn.metadata.agent [-] Port f70d2521-04eb-4485-a990-5f1804cd1885 in datapath 86383a33-9c91-439d-8612-e7a4dae227e4 bound to our chassis#033[00m Feb 23 05:01:43 localhost journal[229268]: ethtool ioctl error on tapf70d2521-04: No such device Feb 23 05:01:43 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:43.482 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 86383a33-9c91-439d-8612-e7a4dae227e4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:01:43 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:43.484 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5b144c-217f-4ace-9bfc-9ab757c5e06f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:01:43 localhost journal[229268]: ethtool ioctl error on tapf70d2521-04: No such device Feb 23 05:01:43 localhost nova_compute[280321]: 2026-02-23 10:01:43.486 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:43 localhost journal[229268]: ethtool ioctl error on tapf70d2521-04: No such device Feb 23 05:01:43 localhost ovn_controller[155966]: 2026-02-23T10:01:43Z|00352|binding|INFO|Setting lport f70d2521-04eb-4485-a990-5f1804cd1885 ovn-installed in OVS Feb 23 05:01:43 localhost ovn_controller[155966]: 2026-02-23T10:01:43Z|00353|binding|INFO|Setting lport f70d2521-04eb-4485-a990-5f1804cd1885 up in Southbound Feb 23 05:01:43 
localhost nova_compute[280321]: 2026-02-23 10:01:43.490 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:43 localhost journal[229268]: ethtool ioctl error on tapf70d2521-04: No such device Feb 23 05:01:43 localhost journal[229268]: ethtool ioctl error on tapf70d2521-04: No such device Feb 23 05:01:43 localhost journal[229268]: ethtool ioctl error on tapf70d2521-04: No such device Feb 23 05:01:43 localhost nova_compute[280321]: 2026-02-23 10:01:43.520 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:43 localhost nova_compute[280321]: 2026-02-23 10:01:43.544 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:43 localhost neutron_sriov_agent[256355]: 2026-02-23 10:01:43.814 2 INFO neutron.agent.securitygroups_rpc [None req-72581ce1-6ecd-4794-bdfa-9dea488ea111 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['48663913-ae52-424c-8374-b7539096caba']#033[00m Feb 23 05:01:43 localhost nova_compute[280321]: 2026-02-23 10:01:43.927 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:44 localhost nova_compute[280321]: 2026-02-23 10:01:44.222 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:44 localhost podman[322200]: Feb 23 05:01:44 localhost podman[322200]: 2026-02-23 10:01:44.302707099 +0000 UTC m=+0.083235152 container create 01b82bcc268eb04e79a6a1d96433977aefa9b6a9ddcd567717ea6a34b282931c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 23 05:01:44 localhost systemd[1]: Started libpod-conmon-01b82bcc268eb04e79a6a1d96433977aefa9b6a9ddcd567717ea6a34b282931c.scope. Feb 23 05:01:44 localhost systemd[1]: tmp-crun.YC5nMZ.mount: Deactivated successfully. Feb 23 05:01:44 localhost podman[322200]: 2026-02-23 10:01:44.261304346 +0000 UTC m=+0.041832409 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:01:44 localhost systemd[1]: Started libcrun container. Feb 23 05:01:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/994f93b987c62ebd6e592dbfbcbae6a0811a9953624b120bd7434bf4a0ebd66d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:01:44 localhost podman[322200]: 2026-02-23 10:01:44.393021858 +0000 UTC m=+0.173549901 container init 01b82bcc268eb04e79a6a1d96433977aefa9b6a9ddcd567717ea6a34b282931c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:01:44 localhost podman[322200]: 2026-02-23 10:01:44.403032913 +0000 UTC m=+0.183560956 container start 01b82bcc268eb04e79a6a1d96433977aefa9b6a9ddcd567717ea6a34b282931c 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:01:44 localhost dnsmasq[322218]: started, version 2.85 cachesize 150 Feb 23 05:01:44 localhost dnsmasq[322218]: DNS service limited to local subnets Feb 23 05:01:44 localhost dnsmasq[322218]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:01:44 localhost dnsmasq[322218]: warning: no upstream servers configured Feb 23 05:01:44 localhost dnsmasq-dhcp[322218]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 23 05:01:44 localhost dnsmasq[322218]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/addn_hosts - 0 addresses Feb 23 05:01:44 localhost dnsmasq-dhcp[322218]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/host Feb 23 05:01:44 localhost dnsmasq-dhcp[322218]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/opts Feb 23 05:01:44 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:44.468 263679 INFO neutron.agent.dhcp.agent [None req-778e6b6c-8277-457a-911d-a619a6d05f6e - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:01:43Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a87ea12c-ceec-4d58-912a-341768108c83, ip_allocation=immediate, 
mac_address=fa:16:3e:f6:03:3a, name=tempest-PortsIpV6TestJSON-2017857254, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:00:44Z, description=, dns_domain=, id=86383a33-9c91-439d-8612-e7a4dae227e4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-test-network-1794254641, port_security_enabled=True, project_id=90343b3c0ce240adab2c21e5c92b6952, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7665, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2703, status=ACTIVE, subnets=['e7704f04-c89d-4b5f-a932-0885e8844e40'], tags=[], tenant_id=90343b3c0ce240adab2c21e5c92b6952, updated_at=2026-02-23T10:01:42Z, vlan_transparent=None, network_id=86383a33-9c91-439d-8612-e7a4dae227e4, port_security_enabled=True, project_id=90343b3c0ce240adab2c21e5c92b6952, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['48663913-ae52-424c-8374-b7539096caba'], standard_attr_id=2891, status=DOWN, tags=[], tenant_id=90343b3c0ce240adab2c21e5c92b6952, updated_at=2026-02-23T10:01:43Z on network 86383a33-9c91-439d-8612-e7a4dae227e4#033[00m Feb 23 05:01:44 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c3aedd71-b342-4920-afd2-d5c6fd4776d2", "auth_id": "Joe", "tenant_id": "4d2b2d5862b8427aac5a9c709976e3ff", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:01:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, tenant_id:4d2b2d5862b8427aac5a9c709976e3ff, vol_name:cephfs) < "" Feb 23 05:01:44 localhost ceph-mon[296755]: mon.np0005626465@1(peon) 
e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Feb 23 05:01:44 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Feb 23 05:01:44 localhost ceph-mgr[285904]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use Feb 23 05:01:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, tenant_id:4d2b2d5862b8427aac5a9c709976e3ff, vol_name:cephfs) < "" Feb 23 05:01:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:01:44.502+0000 7fc3ba4ad640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use Feb 23 05:01:44 localhost ceph-mgr[285904]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use Feb 23 05:01:44 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Feb 23 05:01:44 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:44.597 263679 INFO neutron.agent.dhcp.agent [None req-f2a87610-6cb5-46ba-b6c4-b8ffe8c006c9 - - - - - -] DHCP configuration for ports {'286080b7-9e48-4c4f-90a1-6ef92a558211', '1143f39d-979b-4f97-b7d6-2702c447cd01'} is completed#033[00m Feb 23 05:01:44 localhost dnsmasq[322218]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/addn_hosts - 1 addresses Feb 23 05:01:44 localhost podman[322236]: 2026-02-23 10:01:44.644746855 +0000 UTC m=+0.057033564 container kill 01b82bcc268eb04e79a6a1d96433977aefa9b6a9ddcd567717ea6a34b282931c 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0) Feb 23 05:01:44 localhost dnsmasq-dhcp[322218]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/host Feb 23 05:01:44 localhost dnsmasq-dhcp[322218]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/opts Feb 23 05:01:44 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:44.867 263679 INFO neutron.agent.dhcp.agent [None req-ad1e7108-90cb-496c-b78b-7238880e537d - - - - - -] DHCP configuration for ports {'a87ea12c-ceec-4d58-912a-341768108c83'} is completed#033[00m Feb 23 05:01:45 localhost dnsmasq[322218]: exiting on receipt of SIGTERM Feb 23 05:01:45 localhost podman[322274]: 2026-02-23 10:01:45.066627167 +0000 UTC m=+0.062536361 container kill 01b82bcc268eb04e79a6a1d96433977aefa9b6a9ddcd567717ea6a34b282931c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 05:01:45 localhost systemd[1]: libpod-01b82bcc268eb04e79a6a1d96433977aefa9b6a9ddcd567717ea6a34b282931c.scope: Deactivated successfully. 
Feb 23 05:01:45 localhost podman[322287]: 2026-02-23 10:01:45.12830492 +0000 UTC m=+0.051561185 container died 01b82bcc268eb04e79a6a1d96433977aefa9b6a9ddcd567717ea6a34b282931c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0) Feb 23 05:01:45 localhost podman[322287]: 2026-02-23 10:01:45.161078961 +0000 UTC m=+0.084335186 container cleanup 01b82bcc268eb04e79a6a1d96433977aefa9b6a9ddcd567717ea6a34b282931c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 05:01:45 localhost systemd[1]: libpod-conmon-01b82bcc268eb04e79a6a1d96433977aefa9b6a9ddcd567717ea6a34b282931c.scope: Deactivated successfully. 
Feb 23 05:01:45 localhost podman[322289]: 2026-02-23 10:01:45.21376427 +0000 UTC m=+0.127643409 container remove 01b82bcc268eb04e79a6a1d96433977aefa9b6a9ddcd567717ea6a34b282931c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 05:01:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v444: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s wr, 2 op/s Feb 23 05:01:45 localhost systemd[1]: var-lib-containers-storage-overlay-994f93b987c62ebd6e592dbfbcbae6a0811a9953624b120bd7434bf4a0ebd66d-merged.mount: Deactivated successfully. Feb 23 05:01:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01b82bcc268eb04e79a6a1d96433977aefa9b6a9ddcd567717ea6a34b282931c-userdata-shm.mount: Deactivated successfully. 
Feb 23 05:01:45 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "836ea875-40c0-4d41-ac93-96ac1cabc343", "format": "json"}]: dispatch Feb 23 05:01:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:836ea875-40c0-4d41-ac93-96ac1cabc343, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:01:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:836ea875-40c0-4d41-ac93-96ac1cabc343, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:01:45 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:01:45.371+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '836ea875-40c0-4d41-ac93-96ac1cabc343' of type subvolume Feb 23 05:01:45 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '836ea875-40c0-4d41-ac93-96ac1cabc343' of type subvolume Feb 23 05:01:45 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "836ea875-40c0-4d41-ac93-96ac1cabc343", "force": true, "format": "json"}]: dispatch Feb 23 05:01:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:836ea875-40c0-4d41-ac93-96ac1cabc343, vol_name:cephfs) < "" Feb 23 05:01:45 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/836ea875-40c0-4d41-ac93-96ac1cabc343'' moved to trashcan Feb 23 05:01:45 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for 
volume 'cephfs' Feb 23 05:01:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:836ea875-40c0-4d41-ac93-96ac1cabc343, vol_name:cephfs) < "" Feb 23 05:01:45 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "f524bd25-e758-44bb-a21d-5fd935532860", "format": "json"}]: dispatch Feb 23 05:01:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f524bd25-e758-44bb-a21d-5fd935532860, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f524bd25-e758-44bb-a21d-5fd935532860, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:01:45 localhost neutron_sriov_agent[256355]: 2026-02-23 10:01:45.596 2 INFO neutron.agent.securitygroups_rpc [None req-97c39ba0-e75b-404a-baf7-3d9225783656 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['48663913-ae52-424c-8374-b7539096caba', '3384fe18-0fab-4c8a-9159-5a07fe1d4f48']#033[00m Feb 23 05:01:45 localhost neutron_sriov_agent[256355]: 2026-02-23 10:01:45.971 2 INFO neutron.agent.securitygroups_rpc [None req-62619d1c-ae99-4a07-8196-e49ad1562f12 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated 
['3384fe18-0fab-4c8a-9159-5a07fe1d4f48']#033[00m Feb 23 05:01:46 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:46.395 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 05:01:46 localhost podman[322368]: Feb 23 05:01:46 localhost podman[322368]: 2026-02-23 10:01:46.545470335 +0000 UTC m=+0.084007346 container create 5616a4155ed24e4e6d460480f184076518274fe1c1e7d9103f248227fe4fc68f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:01:46 localhost systemd[1]: Started libpod-conmon-5616a4155ed24e4e6d460480f184076518274fe1c1e7d9103f248227fe4fc68f.scope. Feb 23 05:01:46 localhost podman[322368]: 2026-02-23 10:01:46.503568006 +0000 UTC m=+0.042105047 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:01:46 localhost systemd[1]: tmp-crun.00zwxN.mount: Deactivated successfully. Feb 23 05:01:46 localhost systemd[1]: Started libcrun container. 
Feb 23 05:01:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1bb31c3d94b709cd97248de1524e0bcb90a5e8ed4bc27f7881214032e0a45c7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:01:46 localhost podman[322368]: 2026-02-23 10:01:46.622406344 +0000 UTC m=+0.160943355 container init 5616a4155ed24e4e6d460480f184076518274fe1c1e7d9103f248227fe4fc68f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:01:46 localhost podman[322368]: 2026-02-23 10:01:46.63339357 +0000 UTC m=+0.171930571 container start 5616a4155ed24e4e6d460480f184076518274fe1c1e7d9103f248227fe4fc68f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:01:46 localhost dnsmasq[322387]: started, version 2.85 cachesize 150 Feb 23 05:01:46 localhost dnsmasq[322387]: DNS service limited to local subnets Feb 23 05:01:46 localhost dnsmasq[322387]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:01:46 localhost dnsmasq[322387]: warning: no upstream servers 
configured Feb 23 05:01:46 localhost dnsmasq-dhcp[322387]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 23 05:01:46 localhost dnsmasq-dhcp[322387]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Feb 23 05:01:46 localhost dnsmasq[322387]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/addn_hosts - 1 addresses Feb 23 05:01:46 localhost dnsmasq-dhcp[322387]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/host Feb 23 05:01:46 localhost dnsmasq-dhcp[322387]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/opts Feb 23 05:01:46 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:46.683 263679 INFO neutron.agent.dhcp.agent [None req-892d61bc-2a20-49a7-8bb7-085176b625c7 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:01:43Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a87ea12c-ceec-4d58-912a-341768108c83, ip_allocation=immediate, mac_address=fa:16:3e:f6:03:3a, name=tempest-PortsIpV6TestJSON-1593577065, network_id=86383a33-9c91-439d-8612-e7a4dae227e4, port_security_enabled=True, project_id=90343b3c0ce240adab2c21e5c92b6952, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['3384fe18-0fab-4c8a-9159-5a07fe1d4f48'], standard_attr_id=2891, status=DOWN, tags=[], tenant_id=90343b3c0ce240adab2c21e5c92b6952, updated_at=2026-02-23T10:01:45Z on network 86383a33-9c91-439d-8612-e7a4dae227e4#033[00m Feb 23 05:01:46 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:46.820 263679 INFO neutron.agent.dhcp.agent [None req-9bf0c938-9a2b-4ff2-a75a-baf0bd522186 - - - - - -] DHCP configuration for ports {'286080b7-9e48-4c4f-90a1-6ef92a558211', 'f70d2521-04eb-4485-a990-5f1804cd1885', 
'1143f39d-979b-4f97-b7d6-2702c447cd01', 'a87ea12c-ceec-4d58-912a-341768108c83'} is completed#033[00m Feb 23 05:01:46 localhost dnsmasq[322387]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/addn_hosts - 1 addresses Feb 23 05:01:46 localhost podman[322406]: 2026-02-23 10:01:46.877064781 +0000 UTC m=+0.057919699 container kill 5616a4155ed24e4e6d460480f184076518274fe1c1e7d9103f248227fe4fc68f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:01:46 localhost dnsmasq-dhcp[322387]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/host Feb 23 05:01:46 localhost dnsmasq-dhcp[322387]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/opts Feb 23 05:01:47 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:47.103 263679 INFO neutron.agent.dhcp.agent [None req-69895536-978c-439a-a81d-5f49f5d082e0 - - - - - -] DHCP configuration for ports {'a87ea12c-ceec-4d58-912a-341768108c83'} is completed#033[00m Feb 23 05:01:47 localhost dnsmasq[322387]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/addn_hosts - 0 addresses Feb 23 05:01:47 localhost dnsmasq-dhcp[322387]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/host Feb 23 05:01:47 localhost dnsmasq-dhcp[322387]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/opts Feb 23 05:01:47 localhost podman[322445]: 2026-02-23 10:01:47.19237867 +0000 UTC m=+0.059915461 container kill 5616a4155ed24e4e6d460480f184076518274fe1c1e7d9103f248227fe4fc68f 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:01:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v445: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s wr, 3 op/s Feb 23 05:01:47 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c3aedd71-b342-4920-afd2-d5c6fd4776d2", "auth_id": "tempest-cephx-id-557795333", "tenant_id": "4d2b2d5862b8427aac5a9c709976e3ff", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:01:47 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-557795333, format:json, prefix:fs subvolume authorize, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, tenant_id:4d2b2d5862b8427aac5a9c709976e3ff, vol_name:cephfs) < "" Feb 23 05:01:47 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-557795333", "format": "json"} v 0) Feb 23 05:01:47 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-557795333", "format": "json"} : dispatch Feb 23 05:01:47 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID tempest-cephx-id-557795333 with tenant 
4d2b2d5862b8427aac5a9c709976e3ff Feb 23 05:01:47 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-557795333", "caps": ["mds", "allow rw path=/volumes/_nogroup/c3aedd71-b342-4920-afd2-d5c6fd4776d2/28f0b5f4-80b5-453b-a6b1-657204142a24", "osd", "allow rw pool=manila_data namespace=fsvolumens_c3aedd71-b342-4920-afd2-d5c6fd4776d2", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:01:47 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-557795333", "caps": ["mds", "allow rw path=/volumes/_nogroup/c3aedd71-b342-4920-afd2-d5c6fd4776d2/28f0b5f4-80b5-453b-a6b1-657204142a24", "osd", "allow rw pool=manila_data namespace=fsvolumens_c3aedd71-b342-4920-afd2-d5c6fd4776d2", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:01:47 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-557795333", "format": "json"} : dispatch Feb 23 05:01:47 localhost systemd[1]: tmp-crun.TPvSbZ.mount: Deactivated successfully. 
Feb 23 05:01:47 localhost dnsmasq[322387]: exiting on receipt of SIGTERM Feb 23 05:01:47 localhost podman[322481]: 2026-02-23 10:01:47.611682314 +0000 UTC m=+0.059556731 container kill 5616a4155ed24e4e6d460480f184076518274fe1c1e7d9103f248227fe4fc68f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 05:01:47 localhost systemd[1]: libpod-5616a4155ed24e4e6d460480f184076518274fe1c1e7d9103f248227fe4fc68f.scope: Deactivated successfully. Feb 23 05:01:47 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-557795333, format:json, prefix:fs subvolume authorize, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, tenant_id:4d2b2d5862b8427aac5a9c709976e3ff, vol_name:cephfs) < "" Feb 23 05:01:47 localhost podman[322493]: 2026-02-23 10:01:47.700149515 +0000 UTC m=+0.075033543 container died 5616a4155ed24e4e6d460480f184076518274fe1c1e7d9103f248227fe4fc68f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0) Feb 23 05:01:47 localhost podman[322493]: 2026-02-23 10:01:47.737758284 +0000 UTC m=+0.112642272 container cleanup 
5616a4155ed24e4e6d460480f184076518274fe1c1e7d9103f248227fe4fc68f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 05:01:47 localhost systemd[1]: libpod-conmon-5616a4155ed24e4e6d460480f184076518274fe1c1e7d9103f248227fe4fc68f.scope: Deactivated successfully. Feb 23 05:01:47 localhost podman[322500]: 2026-02-23 10:01:47.785724768 +0000 UTC m=+0.147582518 container remove 5616a4155ed24e4e6d460480f184076518274fe1c1e7d9103f248227fe4fc68f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:01:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:48.317 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:01:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:48.317 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:01:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:48.318 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:01:48 localhost podman[322572]: Feb 23 05:01:48 localhost podman[322572]: 2026-02-23 10:01:48.510015655 +0000 UTC m=+0.129017581 container create a600c8308eb475a876c9d5fe193f3aff64770a1a234b885f129edc41d2048ee2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:01:48 localhost podman[322572]: 2026-02-23 10:01:48.423972628 +0000 UTC m=+0.042974584 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:01:48 localhost systemd[1]: Started libpod-conmon-a600c8308eb475a876c9d5fe193f3aff64770a1a234b885f129edc41d2048ee2.scope. Feb 23 05:01:48 localhost systemd[1]: tmp-crun.mUl3NA.mount: Deactivated successfully. Feb 23 05:01:48 localhost systemd[1]: var-lib-containers-storage-overlay-b1bb31c3d94b709cd97248de1524e0bcb90a5e8ed4bc27f7881214032e0a45c7-merged.mount: Deactivated successfully. Feb 23 05:01:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5616a4155ed24e4e6d460480f184076518274fe1c1e7d9103f248227fe4fc68f-userdata-shm.mount: Deactivated successfully. 
Feb 23 05:01:48 localhost neutron_sriov_agent[256355]: 2026-02-23 10:01:48.555 2 INFO neutron.agent.securitygroups_rpc [None req-deb2d108-d73d-4f3b-b409-b0d5aa38dbb2 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['7ccfcbe5-3a12-4044-a554-c033a2966e5e']#033[00m Feb 23 05:01:48 localhost systemd[1]: Started libcrun container. Feb 23 05:01:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7288de7ee5bd1dd9b1eb3e68187adcf1a375f87eb2fcd653d629e16bd5b89c3e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:01:48 localhost podman[322572]: 2026-02-23 10:01:48.580124976 +0000 UTC m=+0.199126902 container init a600c8308eb475a876c9d5fe193f3aff64770a1a234b885f129edc41d2048ee2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0) Feb 23 05:01:48 localhost podman[322572]: 2026-02-23 10:01:48.591173213 +0000 UTC m=+0.210175129 container start a600c8308eb475a876c9d5fe193f3aff64770a1a234b885f129edc41d2048ee2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 05:01:48 localhost dnsmasq[322590]: 
started, version 2.85 cachesize 150 Feb 23 05:01:48 localhost dnsmasq[322590]: DNS service limited to local subnets Feb 23 05:01:48 localhost dnsmasq[322590]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:01:48 localhost dnsmasq[322590]: warning: no upstream servers configured Feb 23 05:01:48 localhost dnsmasq-dhcp[322590]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Feb 23 05:01:48 localhost dnsmasq[322590]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/addn_hosts - 0 addresses Feb 23 05:01:48 localhost dnsmasq-dhcp[322590]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/host Feb 23 05:01:48 localhost dnsmasq-dhcp[322590]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/opts Feb 23 05:01:48 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-557795333", "caps": ["mds", "allow rw path=/volumes/_nogroup/c3aedd71-b342-4920-afd2-d5c6fd4776d2/28f0b5f4-80b5-453b-a6b1-657204142a24", "osd", "allow rw pool=manila_data namespace=fsvolumens_c3aedd71-b342-4920-afd2-d5c6fd4776d2", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:01:48 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-557795333", "caps": ["mds", "allow rw path=/volumes/_nogroup/c3aedd71-b342-4920-afd2-d5c6fd4776d2/28f0b5f4-80b5-453b-a6b1-657204142a24", "osd", "allow rw pool=manila_data namespace=fsvolumens_c3aedd71-b342-4920-afd2-d5c6fd4776d2", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:01:48 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-557795333", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/c3aedd71-b342-4920-afd2-d5c6fd4776d2/28f0b5f4-80b5-453b-a6b1-657204142a24", "osd", "allow rw pool=manila_data namespace=fsvolumens_c3aedd71-b342-4920-afd2-d5c6fd4776d2", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:01:48 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:48.911 263679 INFO neutron.agent.dhcp.agent [None req-1a6b1cf5-cc1f-4798-9eae-f0e2c8e33907 - - - - - -] DHCP configuration for ports {'286080b7-9e48-4c4f-90a1-6ef92a558211', 'f70d2521-04eb-4485-a990-5f1804cd1885', '1143f39d-979b-4f97-b7d6-2702c447cd01'} is completed#033[00m Feb 23 05:01:48 localhost nova_compute[280321]: 2026-02-23 10:01:48.931 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:48 localhost podman[322608]: 2026-02-23 10:01:48.945135462 +0000 UTC m=+0.053520776 container kill a600c8308eb475a876c9d5fe193f3aff64770a1a234b885f129edc41d2048ee2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 05:01:48 localhost dnsmasq[322590]: exiting on receipt of SIGTERM Feb 23 05:01:48 localhost systemd[1]: libpod-a600c8308eb475a876c9d5fe193f3aff64770a1a234b885f129edc41d2048ee2.scope: Deactivated successfully. 
Feb 23 05:01:49 localhost podman[322624]: 2026-02-23 10:01:48.999983537 +0000 UTC m=+0.040871379 container died a600c8308eb475a876c9d5fe193f3aff64770a1a234b885f129edc41d2048ee2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:01:49 localhost podman[322624]: 2026-02-23 10:01:49.040328389 +0000 UTC m=+0.081216171 container remove a600c8308eb475a876c9d5fe193f3aff64770a1a234b885f129edc41d2048ee2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2) Feb 23 05:01:49 localhost systemd[1]: libpod-conmon-a600c8308eb475a876c9d5fe193f3aff64770a1a234b885f129edc41d2048ee2.scope: Deactivated successfully. Feb 23 05:01:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 05:01:49 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "f524bd25-e758-44bb-a21d-5fd935532860_b917d76c-b777-4089-bc48-72bb8837214f", "force": true, "format": "json"}]: dispatch Feb 23 05:01:49 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f524bd25-e758-44bb-a21d-5fd935532860_b917d76c-b777-4089-bc48-72bb8837214f, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:49 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' Feb 23 05:01:49 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta' Feb 23 05:01:49 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f524bd25-e758-44bb-a21d-5fd935532860_b917d76c-b777-4089-bc48-72bb8837214f, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:49 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "f524bd25-e758-44bb-a21d-5fd935532860", "force": true, "format": "json"}]: dispatch Feb 23 05:01:49 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:f524bd25-e758-44bb-a21d-5fd935532860, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:49 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' Feb 23 05:01:49 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta' Feb 23 05:01:49 localhost podman[322651]: 2026-02-23 10:01:49.147001537 +0000 UTC m=+0.066955396 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:01:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 05:01:49 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f524bd25-e758-44bb-a21d-5fd935532860, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:49 localhost podman[322651]: 2026-02-23 10:01:49.18216987 +0000 UTC m=+0.102123629 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute) Feb 23 05:01:49 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. 
Feb 23 05:01:49 localhost nova_compute[280321]: 2026-02-23 10:01:49.223 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:49 localhost podman[322670]: 2026-02-23 10:01:49.228827335 +0000 UTC m=+0.054467765 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216) Feb 23 05:01:49 localhost podman[322670]: 2026-02-23 10:01:49.260867503 +0000 UTC m=+0.086507883 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 05:01:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v446: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s wr, 3 op/s Feb 23 05:01:49 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 05:01:49 localhost systemd[1]: var-lib-containers-storage-overlay-7288de7ee5bd1dd9b1eb3e68187adcf1a375f87eb2fcd653d629e16bd5b89c3e-merged.mount: Deactivated successfully. Feb 23 05:01:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a600c8308eb475a876c9d5fe193f3aff64770a1a234b885f129edc41d2048ee2-userdata-shm.mount: Deactivated successfully. Feb 23 05:01:50 localhost neutron_sriov_agent[256355]: 2026-02-23 10:01:50.147 2 INFO neutron.agent.securitygroups_rpc [None req-80e2a041-7b6e-4f6c-b102-2630aa52a7b1 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['fdc27d14-90ab-419e-a68e-458d1f31be69', 'abc26baf-67f8-4703-8e61-63db3bbb7b3b', '7ccfcbe5-3a12-4044-a554-c033a2966e5e']#033[00m Feb 23 05:01:50 localhost podman[322741]: Feb 23 05:01:50 localhost podman[322741]: 2026-02-23 10:01:50.386092754 +0000 UTC m=+0.090088372 container create d04837c9ac92a9761a3fdd094d65f67eb696abe0e88cf07e835a6085db5573b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0) Feb 23 05:01:50 localhost systemd[1]: Started 
libpod-conmon-d04837c9ac92a9761a3fdd094d65f67eb696abe0e88cf07e835a6085db5573b8.scope. Feb 23 05:01:50 localhost systemd[1]: tmp-crun.QcnNzN.mount: Deactivated successfully. Feb 23 05:01:50 localhost podman[322741]: 2026-02-23 10:01:50.343606276 +0000 UTC m=+0.047601934 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:01:50 localhost systemd[1]: Started libcrun container. Feb 23 05:01:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3173007128e92069559d556b12fe4fb31fb045798a3c17102420941fe1dac461/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:01:50 localhost podman[322741]: 2026-02-23 10:01:50.471371137 +0000 UTC m=+0.175366765 container init d04837c9ac92a9761a3fdd094d65f67eb696abe0e88cf07e835a6085db5573b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0) Feb 23 05:01:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:01:50 localhost podman[322741]: 2026-02-23 10:01:50.480359283 +0000 UTC m=+0.184354901 container start d04837c9ac92a9761a3fdd094d65f67eb696abe0e88cf07e835a6085db5573b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:01:50 localhost dnsmasq[322759]: started, version 2.85 cachesize 150 Feb 23 05:01:50 localhost dnsmasq[322759]: DNS service limited to local subnets Feb 23 05:01:50 localhost dnsmasq[322759]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:01:50 localhost dnsmasq[322759]: warning: no upstream servers configured Feb 23 05:01:50 localhost dnsmasq-dhcp[322759]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Feb 23 05:01:50 localhost dnsmasq-dhcp[322759]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 23 05:01:50 localhost dnsmasq[322759]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/addn_hosts - 1 addresses Feb 23 05:01:50 localhost dnsmasq-dhcp[322759]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/host Feb 23 05:01:50 localhost dnsmasq-dhcp[322759]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/opts Feb 23 05:01:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cff56037-7760-4e22-a017-c787f60f0646", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:01:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:cff56037-7760-4e22-a017-c787f60f0646, vol_name:cephfs) < "" Feb 23 05:01:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/cff56037-7760-4e22-a017-c787f60f0646/.meta.tmp' Feb 23 05:01:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cff56037-7760-4e22-a017-c787f60f0646/.meta.tmp' to config b'/volumes/_nogroup/cff56037-7760-4e22-a017-c787f60f0646/.meta' Feb 23 05:01:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:cff56037-7760-4e22-a017-c787f60f0646, vol_name:cephfs) < "" Feb 23 05:01:50 localhost sshd[322760]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:01:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cff56037-7760-4e22-a017-c787f60f0646", "format": "json"}]: dispatch Feb 23 05:01:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cff56037-7760-4e22-a017-c787f60f0646, vol_name:cephfs) < "" Feb 23 05:01:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cff56037-7760-4e22-a017-c787f60f0646, vol_name:cephfs) < "" Feb 23 05:01:50 localhost neutron_sriov_agent[256355]: 2026-02-23 10:01:50.833 2 INFO neutron.agent.securitygroups_rpc [None req-5d850956-08e8-4589-ad52-05619933d70c 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['abc26baf-67f8-4703-8e61-63db3bbb7b3b', 'fdc27d14-90ab-419e-a68e-458d1f31be69']#033[00m Feb 23 05:01:50 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:50.859 263679 INFO neutron.agent.dhcp.agent [None req-2d44ce76-ad4f-4f6a-9aa8-dcea1d199014 - - - - - -] DHCP configuration for ports 
{'286080b7-9e48-4c4f-90a1-6ef92a558211', 'f70d2521-04eb-4485-a990-5f1804cd1885', 'd3e3dc49-7112-488a-bbad-4b0fb9279447', '1143f39d-979b-4f97-b7d6-2702c447cd01'} is completed#033[00m Feb 23 05:01:50 localhost systemd[1]: tmp-crun.hpbXDy.mount: Deactivated successfully. Feb 23 05:01:50 localhost dnsmasq[322759]: exiting on receipt of SIGTERM Feb 23 05:01:50 localhost podman[322778]: 2026-02-23 10:01:50.937740329 +0000 UTC m=+0.075160296 container kill d04837c9ac92a9761a3fdd094d65f67eb696abe0e88cf07e835a6085db5573b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 05:01:50 localhost systemd[1]: libpod-d04837c9ac92a9761a3fdd094d65f67eb696abe0e88cf07e835a6085db5573b8.scope: Deactivated successfully. 
Feb 23 05:01:51 localhost podman[322792]: 2026-02-23 10:01:51.006602472 +0000 UTC m=+0.055529886 container died d04837c9ac92a9761a3fdd094d65f67eb696abe0e88cf07e835a6085db5573b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 23 05:01:51 localhost podman[322792]: 2026-02-23 10:01:51.040083014 +0000 UTC m=+0.089010398 container cleanup d04837c9ac92a9761a3fdd094d65f67eb696abe0e88cf07e835a6085db5573b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:01:51 localhost systemd[1]: libpod-conmon-d04837c9ac92a9761a3fdd094d65f67eb696abe0e88cf07e835a6085db5573b8.scope: Deactivated successfully. 
Feb 23 05:01:51 localhost podman[322794]: 2026-02-23 10:01:51.060389275 +0000 UTC m=+0.097632023 container remove d04837c9ac92a9761a3fdd094d65f67eb696abe0e88cf07e835a6085db5573b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, io.buildah.version=1.43.0) Feb 23 05:01:51 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c3aedd71-b342-4920-afd2-d5c6fd4776d2", "auth_id": "Joe", "format": "json"}]: dispatch Feb 23 05:01:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, vol_name:cephfs) < "" Feb 23 05:01:51 localhost ceph-mgr[285904]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume 'c3aedd71-b342-4920-afd2-d5c6fd4776d2' Feb 23 05:01:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, vol_name:cephfs) < "" Feb 23 05:01:51 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c3aedd71-b342-4920-afd2-d5c6fd4776d2", "auth_id": "Joe", "format": "json"}]: dispatch Feb 23 05:01:51 localhost ceph-mgr[285904]: [volumes INFO 
volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, vol_name:cephfs) < "" Feb 23 05:01:51 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/c3aedd71-b342-4920-afd2-d5c6fd4776d2/28f0b5f4-80b5-453b-a6b1-657204142a24 Feb 23 05:01:51 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:01:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, vol_name:cephfs) < "" Feb 23 05:01:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v447: 177 pgs: 177 active+clean; 193 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 2.1 KiB/s rd, 45 KiB/s wr, 10 op/s Feb 23 05:01:51 localhost systemd[1]: tmp-crun.b7dMk6.mount: Deactivated successfully. Feb 23 05:01:51 localhost systemd[1]: var-lib-containers-storage-overlay-3173007128e92069559d556b12fe4fb31fb045798a3c17102420941fe1dac461-merged.mount: Deactivated successfully. Feb 23 05:01:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d04837c9ac92a9761a3fdd094d65f67eb696abe0e88cf07e835a6085db5573b8-userdata-shm.mount: Deactivated successfully. 
Feb 23 05:01:52 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "a687b09a-63b9-4132-9be0-38d64393e4b6_ec6ac042-cb02-4a02-a780-56bcc558597b", "force": true, "format": "json"}]: dispatch Feb 23 05:01:52 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a687b09a-63b9-4132-9be0-38d64393e4b6_ec6ac042-cb02-4a02-a780-56bcc558597b, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:52 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' Feb 23 05:01:52 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta' Feb 23 05:01:52 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a687b09a-63b9-4132-9be0-38d64393e4b6_ec6ac042-cb02-4a02-a780-56bcc558597b, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:52 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "a687b09a-63b9-4132-9be0-38d64393e4b6", "force": true, "format": "json"}]: dispatch Feb 23 05:01:52 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:a687b09a-63b9-4132-9be0-38d64393e4b6, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:52 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' Feb 23 05:01:52 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta' Feb 23 05:01:52 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a687b09a-63b9-4132-9be0-38d64393e4b6, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:52 localhost podman[322874]: Feb 23 05:01:52 localhost podman[322874]: 2026-02-23 10:01:52.556393957 +0000 UTC m=+0.082549972 container create 912ca1b3ca97fbd8c25c0cd746459757314d6023397eafbef3c6b8145579caf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 23 05:01:52 localhost systemd[1]: Started libpod-conmon-912ca1b3ca97fbd8c25c0cd746459757314d6023397eafbef3c6b8145579caf8.scope. Feb 23 05:01:52 localhost systemd[1]: Started libcrun container. 
Feb 23 05:01:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf7862559f8ab5f4dac2c59f89e2f3e34d4f9426c116b4995530587ad11417d7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:01:52 localhost podman[322874]: 2026-02-23 10:01:52.616001957 +0000 UTC m=+0.142158042 container init 912ca1b3ca97fbd8c25c0cd746459757314d6023397eafbef3c6b8145579caf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 23 05:01:52 localhost podman[322874]: 2026-02-23 10:01:52.52474881 +0000 UTC m=+0.050904895 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:01:52 localhost podman[322874]: 2026-02-23 10:01:52.62560331 +0000 UTC m=+0.151759355 container start 912ca1b3ca97fbd8c25c0cd746459757314d6023397eafbef3c6b8145579caf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 05:01:52 localhost dnsmasq[322893]: started, version 2.85 cachesize 150 Feb 23 05:01:52 localhost dnsmasq[322893]: DNS service limited to local subnets Feb 23 05:01:52 localhost dnsmasq[322893]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:01:52 localhost dnsmasq[322893]: warning: no upstream servers configured Feb 23 05:01:52 localhost dnsmasq-dhcp[322893]: DHCPv6, static leases only on 2001:db8:0:2::, lease time 1d Feb 23 05:01:52 localhost dnsmasq-dhcp[322893]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Feb 23 05:01:52 localhost dnsmasq-dhcp[322893]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 23 05:01:52 localhost dnsmasq[322893]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/addn_hosts - 1 addresses Feb 23 05:01:52 localhost dnsmasq-dhcp[322893]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/host Feb 23 05:01:52 localhost dnsmasq-dhcp[322893]: read /var/lib/neutron/dhcp/86383a33-9c91-439d-8612-e7a4dae227e4/opts Feb 23 05:01:52 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:52.910 263679 INFO neutron.agent.dhcp.agent [None req-748f1b56-0341-4bb3-a4a2-4461f86994e1 - - - - - -] DHCP configuration for ports {'286080b7-9e48-4c4f-90a1-6ef92a558211', 'f70d2521-04eb-4485-a990-5f1804cd1885', 'd3e3dc49-7112-488a-bbad-4b0fb9279447', '1143f39d-979b-4f97-b7d6-2702c447cd01'} is completed#033[00m Feb 23 05:01:52 localhost dnsmasq[322893]: exiting on receipt of SIGTERM Feb 23 05:01:52 localhost podman[322911]: 2026-02-23 10:01:52.925442286 +0000 UTC m=+0.061038774 container kill 912ca1b3ca97fbd8c25c0cd746459757314d6023397eafbef3c6b8145579caf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2) 
Feb 23 05:01:52 localhost systemd[1]: libpod-912ca1b3ca97fbd8c25c0cd746459757314d6023397eafbef3c6b8145579caf8.scope: Deactivated successfully. Feb 23 05:01:53 localhost podman[322928]: 2026-02-23 10:01:53.002327195 +0000 UTC m=+0.054035411 container died 912ca1b3ca97fbd8c25c0cd746459757314d6023397eafbef3c6b8145579caf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:01:53 localhost podman[322928]: 2026-02-23 10:01:53.098629985 +0000 UTC m=+0.150338161 container remove 912ca1b3ca97fbd8c25c0cd746459757314d6023397eafbef3c6b8145579caf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86383a33-9c91-439d-8612-e7a4dae227e4, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:01:53 localhost systemd[1]: libpod-conmon-912ca1b3ca97fbd8c25c0cd746459757314d6023397eafbef3c6b8145579caf8.scope: Deactivated successfully. 
Feb 23 05:01:53 localhost ovn_controller[155966]: 2026-02-23T10:01:53Z|00354|binding|INFO|Releasing lport f70d2521-04eb-4485-a990-5f1804cd1885 from this chassis (sb_readonly=0) Feb 23 05:01:53 localhost kernel: device tapf70d2521-04 left promiscuous mode Feb 23 05:01:53 localhost nova_compute[280321]: 2026-02-23 10:01:53.141 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:53 localhost ovn_controller[155966]: 2026-02-23T10:01:53Z|00355|binding|INFO|Setting lport f70d2521-04eb-4485-a990-5f1804cd1885 down in Southbound Feb 23 05:01:53 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:53.150 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::2/64 2001:db8:0:2::2/64 2001:db8::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-86383a33-9c91-439d-8612-e7a4dae227e4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86383a33-9c91-439d-8612-e7a4dae227e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90343b3c0ce240adab2c21e5c92b6952', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c37bea0f-7573-4e21-b1b7-36f33c04103d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], 
logical_port=f70d2521-04eb-4485-a990-5f1804cd1885) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:01:53 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:53.152 161842 INFO neutron.agent.ovn.metadata.agent [-] Port f70d2521-04eb-4485-a990-5f1804cd1885 in datapath 86383a33-9c91-439d-8612-e7a4dae227e4 unbound from our chassis#033[00m Feb 23 05:01:53 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:53.154 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 86383a33-9c91-439d-8612-e7a4dae227e4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:01:53 localhost ovn_metadata_agent[161837]: 2026-02-23 10:01:53.154 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[fec9fc7a-2bf5-4bb0-8232-8febaa907534]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:01:53 localhost nova_compute[280321]: 2026-02-23 10:01:53.175 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:53 localhost neutron_sriov_agent[256355]: 2026-02-23 10:01:53.182 2 INFO neutron.agent.securitygroups_rpc [None req-6b3edf75-e2b6-4f03-a982-c5a8a9628048 4ce76e2a79a849e8a6b3b31c05f9bc96 90343b3c0ce240adab2c21e5c92b6952 - - default default] Security group member updated ['9a671aa5-4d76-4c1e-8de2-506f29ad907b']#033[00m Feb 23 05:01:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v448: 177 pgs: 177 active+clean; 193 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 32 KiB/s wr, 19 op/s Feb 23 05:01:53 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:53.396 263679 INFO neutron.agent.dhcp.agent [None req-3eeac73c-0e64-41b9-8a4b-fccc7ca389a0 - - - - - -] 
Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:01:53 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:53.397 263679 INFO neutron.agent.dhcp.agent [None req-3eeac73c-0e64-41b9-8a4b-fccc7ca389a0 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:01:53 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:53.397 263679 INFO neutron.agent.dhcp.agent [None req-3eeac73c-0e64-41b9-8a4b-fccc7ca389a0 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:01:53 localhost systemd[1]: var-lib-containers-storage-overlay-bf7862559f8ab5f4dac2c59f89e2f3e34d4f9426c116b4995530587ad11417d7-merged.mount: Deactivated successfully. Feb 23 05:01:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-912ca1b3ca97fbd8c25c0cd746459757314d6023397eafbef3c6b8145579caf8-userdata-shm.mount: Deactivated successfully. Feb 23 05:01:53 localhost systemd[1]: run-netns-qdhcp\x2d86383a33\x2d9c91\x2d439d\x2d8612\x2de7a4dae227e4.mount: Deactivated successfully. 
Feb 23 05:01:53 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:01:53.645 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:01:53 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e9472a91-114f-4670-a2a3-a4947279ea50", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:01:53 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:e9472a91-114f-4670-a2a3-a4947279ea50, vol_name:cephfs) < "" Feb 23 05:01:53 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e9472a91-114f-4670-a2a3-a4947279ea50/.meta.tmp' Feb 23 05:01:53 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e9472a91-114f-4670-a2a3-a4947279ea50/.meta.tmp' to config b'/volumes/_nogroup/e9472a91-114f-4670-a2a3-a4947279ea50/.meta' Feb 23 05:01:53 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:e9472a91-114f-4670-a2a3-a4947279ea50, vol_name:cephfs) < "" Feb 23 05:01:53 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e9472a91-114f-4670-a2a3-a4947279ea50", "format": "json"}]: dispatch Feb 23 05:01:53 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e9472a91-114f-4670-a2a3-a4947279ea50, vol_name:cephfs) 
< "" Feb 23 05:01:53 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e9472a91-114f-4670-a2a3-a4947279ea50, vol_name:cephfs) < "" Feb 23 05:01:53 localhost nova_compute[280321]: 2026-02-23 10:01:53.941 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:54 localhost nova_compute[280321]: 2026-02-23 10:01:54.259 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:54 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c3aedd71-b342-4920-afd2-d5c6fd4776d2", "auth_id": "tempest-cephx-id-557795333", "format": "json"}]: dispatch Feb 23 05:01:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-557795333, format:json, prefix:fs subvolume deauthorize, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, vol_name:cephfs) < "" Feb 23 05:01:54 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-557795333", "format": "json"} v 0) Feb 23 05:01:54 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-557795333", "format": "json"} : dispatch Feb 23 05:01:54 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-557795333"} v 0) Feb 23 05:01:54 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' 
cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-557795333"} : dispatch Feb 23 05:01:54 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-557795333", "format": "json"} : dispatch Feb 23 05:01:54 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-557795333"} : dispatch Feb 23 05:01:54 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-557795333"} : dispatch Feb 23 05:01:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-557795333, format:json, prefix:fs subvolume deauthorize, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, vol_name:cephfs) < "" Feb 23 05:01:54 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c3aedd71-b342-4920-afd2-d5c6fd4776d2", "auth_id": "tempest-cephx-id-557795333", "format": "json"}]: dispatch Feb 23 05:01:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-557795333, format:json, prefix:fs subvolume evict, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, vol_name:cephfs) < "" Feb 23 05:01:54 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-557795333, client_metadata.root=/volumes/_nogroup/c3aedd71-b342-4920-afd2-d5c6fd4776d2/28f0b5f4-80b5-453b-a6b1-657204142a24 Feb 23 05:01:54 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:01:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-557795333, format:json, prefix:fs subvolume evict, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, vol_name:cephfs) < "" Feb 23 05:01:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 05:01:55 localhost podman[322956]: 2026-02-23 10:01:55.009410163 +0000 UTC m=+0.085921384 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 05:01:55 localhost podman[322956]: 2026-02-23 10:01:55.047811976 +0000 UTC m=+0.124323177 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 05:01:55 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 05:01:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v449: 177 pgs: 177 active+clean; 193 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 32 KiB/s wr, 19 op/s Feb 23 05:01:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:01:55 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "1be5156e-560a-4ea6-aa3b-098d527fc684_4337db94-db06-4427-a272-8f09abd769af", "force": true, "format": "json"}]: dispatch Feb 23 05:01:55 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1be5156e-560a-4ea6-aa3b-098d527fc684_4337db94-db06-4427-a272-8f09abd769af, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:55 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' Feb 23 05:01:55 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta' Feb 23 05:01:55 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1be5156e-560a-4ea6-aa3b-098d527fc684_4337db94-db06-4427-a272-8f09abd769af, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:55 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "1be5156e-560a-4ea6-aa3b-098d527fc684", "force": true, "format": "json"}]: dispatch Feb 23 05:01:55 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1be5156e-560a-4ea6-aa3b-098d527fc684, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:55 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' Feb 23 05:01:55 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta' Feb 23 05:01:55 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1be5156e-560a-4ea6-aa3b-098d527fc684, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:55 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": 
"client.tempest-cephx-id-557795333"}]': finished Feb 23 05:01:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e208 e208: 6 total, 6 up, 6 in Feb 23 05:01:57 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cff56037-7760-4e22-a017-c787f60f0646", "format": "json"}]: dispatch Feb 23 05:01:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cff56037-7760-4e22-a017-c787f60f0646, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:01:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v451: 177 pgs: 177 active+clean; 193 MiB data, 950 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 66 KiB/s wr, 27 op/s Feb 23 05:01:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cff56037-7760-4e22-a017-c787f60f0646, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:01:57 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:01:57.276+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cff56037-7760-4e22-a017-c787f60f0646' of type subvolume Feb 23 05:01:57 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cff56037-7760-4e22-a017-c787f60f0646' of type subvolume Feb 23 05:01:57 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cff56037-7760-4e22-a017-c787f60f0646", "force": true, "format": "json"}]: dispatch Feb 23 05:01:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:cff56037-7760-4e22-a017-c787f60f0646, vol_name:cephfs) < "" Feb 23 05:01:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cff56037-7760-4e22-a017-c787f60f0646'' moved to trashcan Feb 23 05:01:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:01:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cff56037-7760-4e22-a017-c787f60f0646, vol_name:cephfs) < "" Feb 23 05:01:57 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7a63f65b-263e-4f0a-be43-9aace02f6e45", "auth_id": "Joe", "format": "json"}]: dispatch Feb 23 05:01:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:7a63f65b-263e-4f0a-be43-9aace02f6e45, vol_name:cephfs) < "" Feb 23 05:01:58 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Feb 23 05:01:58 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Feb 23 05:01:58 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0) Feb 23 05:01:58 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Feb 23 05:01:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:7a63f65b-263e-4f0a-be43-9aace02f6e45, vol_name:cephfs) < "" Feb 23 05:01:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7a63f65b-263e-4f0a-be43-9aace02f6e45", "auth_id": "Joe", "format": "json"}]: dispatch Feb 23 05:01:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:7a63f65b-263e-4f0a-be43-9aace02f6e45, vol_name:cephfs) < "" Feb 23 05:01:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/7a63f65b-263e-4f0a-be43-9aace02f6e45/c5b2858c-2130-4d42-a52c-3132bbd1bfb7 Feb 23 05:01:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:01:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:7a63f65b-263e-4f0a-be43-9aace02f6e45, vol_name:cephfs) < "" Feb 23 05:01:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "90cc3386-9efe-4ed5-a2c9-f8ae76a69843", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:01:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < "" Feb 23 05:01:58 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' 
cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Feb 23 05:01:58 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Feb 23 05:01:58 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Feb 23 05:01:58 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished Feb 23 05:01:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/90cc3386-9efe-4ed5-a2c9-f8ae76a69843/.meta.tmp' Feb 23 05:01:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/90cc3386-9efe-4ed5-a2c9-f8ae76a69843/.meta.tmp' to config b'/volumes/_nogroup/90cc3386-9efe-4ed5-a2c9-f8ae76a69843/.meta' Feb 23 05:01:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < "" Feb 23 05:01:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "90cc3386-9efe-4ed5-a2c9-f8ae76a69843", "format": "json"}]: dispatch Feb 23 05:01:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < "" Feb 23 05:01:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, 
vol_name:cephfs) < "" Feb 23 05:01:58 localhost nova_compute[280321]: 2026-02-23 10:01:58.968 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:59 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "5dbe1f82-6e63-4670-abcb-d97d24ea7f3d_a57b3e42-f791-4c80-9397-393854388632", "force": true, "format": "json"}]: dispatch Feb 23 05:01:59 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5dbe1f82-6e63-4670-abcb-d97d24ea7f3d_a57b3e42-f791-4c80-9397-393854388632, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:59 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' Feb 23 05:01:59 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta' Feb 23 05:01:59 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5dbe1f82-6e63-4670-abcb-d97d24ea7f3d_a57b3e42-f791-4c80-9397-393854388632, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:59 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": 
"5dbe1f82-6e63-4670-abcb-d97d24ea7f3d", "force": true, "format": "json"}]: dispatch Feb 23 05:01:59 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5dbe1f82-6e63-4670-abcb-d97d24ea7f3d, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:59 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' Feb 23 05:01:59 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta' Feb 23 05:01:59 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5dbe1f82-6e63-4670-abcb-d97d24ea7f3d, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:01:59 localhost nova_compute[280321]: 2026-02-23 10:01:59.261 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:01:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v452: 177 pgs: 177 active+clean; 193 MiB data, 950 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 66 KiB/s wr, 27 op/s Feb 23 05:02:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e209 e209: 6 total, 6 up, 6 in Feb 23 05:02:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:02:00 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", 
"clone_name": "e9472a91-114f-4670-a2a3-a4947279ea50", "format": "json"}]: dispatch Feb 23 05:02:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e9472a91-114f-4670-a2a3-a4947279ea50, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e9472a91-114f-4670-a2a3-a4947279ea50, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:00 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:00.502+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e9472a91-114f-4670-a2a3-a4947279ea50' of type subvolume Feb 23 05:02:00 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e9472a91-114f-4670-a2a3-a4947279ea50' of type subvolume Feb 23 05:02:00 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e9472a91-114f-4670-a2a3-a4947279ea50", "force": true, "format": "json"}]: dispatch Feb 23 05:02:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e9472a91-114f-4670-a2a3-a4947279ea50, vol_name:cephfs) < "" Feb 23 05:02:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e9472a91-114f-4670-a2a3-a4947279ea50'' moved to trashcan Feb 23 05:02:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:02:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:e9472a91-114f-4670-a2a3-a4947279ea50, vol_name:cephfs) < "" Feb 23 05:02:00 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3ce246a5-9df7-419d-94b3-cd5751260b5f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:02:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3ce246a5-9df7-419d-94b3-cd5751260b5f, vol_name:cephfs) < "" Feb 23 05:02:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3ce246a5-9df7-419d-94b3-cd5751260b5f/.meta.tmp' Feb 23 05:02:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3ce246a5-9df7-419d-94b3-cd5751260b5f/.meta.tmp' to config b'/volumes/_nogroup/3ce246a5-9df7-419d-94b3-cd5751260b5f/.meta' Feb 23 05:02:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3ce246a5-9df7-419d-94b3-cd5751260b5f, vol_name:cephfs) < "" Feb 23 05:02:00 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3ce246a5-9df7-419d-94b3-cd5751260b5f", "format": "json"}]: dispatch Feb 23 05:02:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3ce246a5-9df7-419d-94b3-cd5751260b5f, vol_name:cephfs) < "" Feb 23 05:02:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3ce246a5-9df7-419d-94b3-cd5751260b5f, vol_name:cephfs) < "" Feb 23 05:02:00 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:00.855 263679 INFO neutron.agent.linux.ip_lib [None req-327dc2f5-d366-44df-a818-56776d90450d - - - - - -] Device tap29ca2f4a-78 cannot be used as it has no MAC address#033[00m Feb 23 05:02:00 localhost nova_compute[280321]: 2026-02-23 10:02:00.920 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:00 localhost kernel: device tap29ca2f4a-78 entered promiscuous mode Feb 23 05:02:00 localhost NetworkManager[5987]: [1771840920.9286] manager: (tap29ca2f4a-78): new Generic device (/org/freedesktop/NetworkManager/Devices/64) Feb 23 05:02:00 localhost ovn_controller[155966]: 2026-02-23T10:02:00Z|00356|binding|INFO|Claiming lport 29ca2f4a-78bb-44a7-a4d4-ad1b24f343d9 for this chassis. Feb 23 05:02:00 localhost ovn_controller[155966]: 2026-02-23T10:02:00Z|00357|binding|INFO|29ca2f4a-78bb-44a7-a4d4-ad1b24f343d9: Claiming unknown Feb 23 05:02:00 localhost nova_compute[280321]: 2026-02-23 10:02:00.930 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:00 localhost systemd-udevd[322990]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 05:02:00 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:00.943 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-c8a4ecb5-f616-4e14-a138-bcd4925ca95e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8a4ecb5-f616-4e14-a138-bcd4925ca95e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68a48b471ed84048aeb651374fff5111', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad7d9686-17cb-4806-b6ca-adabeec3161e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=29ca2f4a-78bb-44a7-a4d4-ad1b24f343d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:02:00 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:00.945 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 29ca2f4a-78bb-44a7-a4d4-ad1b24f343d9 in datapath c8a4ecb5-f616-4e14-a138-bcd4925ca95e bound to our chassis#033[00m Feb 23 05:02:00 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:00.947 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c8a4ecb5-f616-4e14-a138-bcd4925ca95e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:02:00 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:00.948 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[c8409abc-0639-4c92-b4cf-6f2b889069fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:02:00 localhost journal[229268]: ethtool ioctl error on tap29ca2f4a-78: No such device Feb 23 05:02:00 localhost ovn_controller[155966]: 2026-02-23T10:02:00Z|00358|binding|INFO|Setting lport 29ca2f4a-78bb-44a7-a4d4-ad1b24f343d9 ovn-installed in OVS Feb 23 05:02:00 localhost ovn_controller[155966]: 2026-02-23T10:02:00Z|00359|binding|INFO|Setting lport 29ca2f4a-78bb-44a7-a4d4-ad1b24f343d9 up in Southbound Feb 23 05:02:00 localhost nova_compute[280321]: 2026-02-23 10:02:00.964 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:00 localhost journal[229268]: ethtool ioctl error on tap29ca2f4a-78: No such device Feb 23 05:02:00 localhost nova_compute[280321]: 2026-02-23 10:02:00.966 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:00 localhost journal[229268]: ethtool ioctl error on tap29ca2f4a-78: No such device Feb 23 05:02:00 localhost journal[229268]: ethtool ioctl error on tap29ca2f4a-78: No such device Feb 23 05:02:00 localhost journal[229268]: ethtool ioctl error on tap29ca2f4a-78: No such device Feb 23 05:02:00 localhost journal[229268]: ethtool ioctl error on tap29ca2f4a-78: No such device Feb 23 05:02:00 localhost journal[229268]: ethtool ioctl error on tap29ca2f4a-78: No such device Feb 23 05:02:00 localhost journal[229268]: ethtool ioctl error on tap29ca2f4a-78: No such device Feb 23 05:02:01 localhost nova_compute[280321]: 2026-02-23 10:02:01.002 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:01 localhost nova_compute[280321]: 2026-02-23 10:02:01.020 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:01 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e210 e210: 6 total, 6 up, 6 in Feb 23 05:02:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v455: 177 pgs: 177 active+clean; 194 MiB data, 951 MiB used, 41 GiB / 42 GiB avail; 1.5 KiB/s rd, 146 KiB/s wr, 19 op/s Feb 23 05:02:01 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4", "auth_id": "admin", "tenant_id": "1b9d2e21adaa4adab3e6f69b48abf75a", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:02:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, tenant_id:1b9d2e21adaa4adab3e6f69b48abf75a, vol_name:cephfs) < "" Feb 23 05:02:01 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0) Feb 23 05:02:01 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Feb 23 05:02:01 localhost ceph-mgr[285904]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. 
Not allowed to modify Feb 23 05:02:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, tenant_id:1b9d2e21adaa4adab3e6f69b48abf75a, vol_name:cephfs) < "" Feb 23 05:02:01 localhost ceph-mgr[285904]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify Feb 23 05:02:01 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:01.458+0000 7fc3ba4ad640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify Feb 23 05:02:01 localhost podman[323061]: Feb 23 05:02:01 localhost podman[323061]: 2026-02-23 10:02:01.756007692 +0000 UTC m=+0.076657542 container create 975e16e6ed9f3d00d1d44fc2ed7be1212c678bca7ab9043b03aa35a7c4012142 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c8a4ecb5-f616-4e14-a138-bcd4925ca95e, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:02:01 localhost systemd[1]: Started libpod-conmon-975e16e6ed9f3d00d1d44fc2ed7be1212c678bca7ab9043b03aa35a7c4012142.scope. Feb 23 05:02:01 localhost systemd[1]: tmp-crun.E7AwGv.mount: Deactivated successfully. Feb 23 05:02:01 localhost systemd[1]: Started libcrun container. 
Feb 23 05:02:01 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "90cc3386-9efe-4ed5-a2c9-f8ae76a69843", "snap_name": "e9f999c5-8797-4bf3-86d8-5475823561e0", "format": "json"}]: dispatch Feb 23 05:02:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e9f999c5-8797-4bf3-86d8-5475823561e0, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < "" Feb 23 05:02:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/423dd003b1c30564bd0f3f6e9620b74fc5a7c16fd5f200e3897435fa21ac67bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:02:01 localhost podman[323061]: 2026-02-23 10:02:01.719947171 +0000 UTC m=+0.040597061 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:02:01 localhost podman[323061]: 2026-02-23 10:02:01.821276275 +0000 UTC m=+0.141926105 container init 975e16e6ed9f3d00d1d44fc2ed7be1212c678bca7ab9043b03aa35a7c4012142 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c8a4ecb5-f616-4e14-a138-bcd4925ca95e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0) Feb 23 05:02:01 localhost systemd[1]: tmp-crun.zAD242.mount: Deactivated successfully. 
Feb 23 05:02:01 localhost podman[323061]: 2026-02-23 10:02:01.831644012 +0000 UTC m=+0.152293842 container start 975e16e6ed9f3d00d1d44fc2ed7be1212c678bca7ab9043b03aa35a7c4012142 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c8a4ecb5-f616-4e14-a138-bcd4925ca95e, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0) Feb 23 05:02:01 localhost dnsmasq[323080]: started, version 2.85 cachesize 150 Feb 23 05:02:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e9f999c5-8797-4bf3-86d8-5475823561e0, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < "" Feb 23 05:02:01 localhost dnsmasq[323080]: DNS service limited to local subnets Feb 23 05:02:01 localhost dnsmasq[323080]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:02:01 localhost dnsmasq[323080]: warning: no upstream servers configured Feb 23 05:02:01 localhost dnsmasq-dhcp[323080]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 23 05:02:01 localhost dnsmasq[323080]: read /var/lib/neutron/dhcp/c8a4ecb5-f616-4e14-a138-bcd4925ca95e/addn_hosts - 0 addresses Feb 23 05:02:01 localhost dnsmasq-dhcp[323080]: read /var/lib/neutron/dhcp/c8a4ecb5-f616-4e14-a138-bcd4925ca95e/host Feb 23 05:02:01 localhost dnsmasq-dhcp[323080]: read /var/lib/neutron/dhcp/c8a4ecb5-f616-4e14-a138-bcd4925ca95e/opts Feb 23 05:02:01 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:01.893 263679 INFO neutron.agent.dhcp.agent [None 
req-327dc2f5-d366-44df-a818-56776d90450d - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:02:00Z, description=, device_id=d11c654a-ee3c-4ca8-93a5-f268ff8b2e3b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3c8e8a20-a240-47ee-b0dc-f9ea86db764c, ip_allocation=immediate, mac_address=fa:16:3e:e7:cb:ee, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:01:58Z, description=, dns_domain=, id=c8a4ecb5-f616-4e14-a138-bcd4925ca95e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-2363181, port_security_enabled=True, project_id=68a48b471ed84048aeb651374fff5111, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38052, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2970, status=ACTIVE, subnets=['cc0ca376-8226-43e9-a0d7-94d125c3a424'], tags=[], tenant_id=68a48b471ed84048aeb651374fff5111, updated_at=2026-02-23T10:01:59Z, vlan_transparent=None, network_id=c8a4ecb5-f616-4e14-a138-bcd4925ca95e, port_security_enabled=False, project_id=68a48b471ed84048aeb651374fff5111, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2981, status=DOWN, tags=[], tenant_id=68a48b471ed84048aeb651374fff5111, updated_at=2026-02-23T10:02:00Z on network c8a4ecb5-f616-4e14-a138-bcd4925ca95e#033[00m Feb 23 05:02:02 localhost openstack_network_exporter[243519]: ERROR 10:02:02 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 05:02:02 localhost openstack_network_exporter[243519]: Feb 23 05:02:02 localhost openstack_network_exporter[243519]: 
ERROR 10:02:02 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 05:02:02 localhost openstack_network_exporter[243519]: Feb 23 05:02:02 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "a6576274-fdc3-4c38-8214-f23c6386f2cf_f3807254-0afd-4ece-ae6b-65f6e63da3b8", "force": true, "format": "json"}]: dispatch Feb 23 05:02:02 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a6576274-fdc3-4c38-8214-f23c6386f2cf_f3807254-0afd-4ece-ae6b-65f6e63da3b8, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:02:02 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' Feb 23 05:02:02 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta' Feb 23 05:02:02 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a6576274-fdc3-4c38-8214-f23c6386f2cf_f3807254-0afd-4ece-ae6b-65f6e63da3b8, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:02:02 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "snap_name": "a6576274-fdc3-4c38-8214-f23c6386f2cf", "force": true, "format": "json"}]: dispatch Feb 23 05:02:02 
localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a6576274-fdc3-4c38-8214-f23c6386f2cf, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:02:02 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:02.056 263679 INFO neutron.agent.dhcp.agent [None req-0fb63967-a9d1-4701-955b-9bbbd1756fc3 - - - - - -] DHCP configuration for ports {'a1dddd7c-d4e4-437c-8054-6ea0e5147832'} is completed#033[00m Feb 23 05:02:02 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' Feb 23 05:02:02 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta.tmp' to config b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f/.meta' Feb 23 05:02:02 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a6576274-fdc3-4c38-8214-f23c6386f2cf, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:02:02 localhost dnsmasq[323080]: read /var/lib/neutron/dhcp/c8a4ecb5-f616-4e14-a138-bcd4925ca95e/addn_hosts - 1 addresses Feb 23 05:02:02 localhost dnsmasq-dhcp[323080]: read /var/lib/neutron/dhcp/c8a4ecb5-f616-4e14-a138-bcd4925ca95e/host Feb 23 05:02:02 localhost dnsmasq-dhcp[323080]: read /var/lib/neutron/dhcp/c8a4ecb5-f616-4e14-a138-bcd4925ca95e/opts Feb 23 05:02:02 localhost podman[323098]: 2026-02-23 10:02:02.097468969 +0000 UTC m=+0.047365987 container kill 975e16e6ed9f3d00d1d44fc2ed7be1212c678bca7ab9043b03aa35a7c4012142 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c8a4ecb5-f616-4e14-a138-bcd4925ca95e, 
org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 23 05:02:02 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Feb 23 05:02:02 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:02.182 263679 INFO neutron.agent.dhcp.agent [None req-327dc2f5-d366-44df-a818-56776d90450d - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:02:00Z, description=, device_id=d11c654a-ee3c-4ca8-93a5-f268ff8b2e3b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3c8e8a20-a240-47ee-b0dc-f9ea86db764c, ip_allocation=immediate, mac_address=fa:16:3e:e7:cb:ee, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:01:58Z, description=, dns_domain=, id=c8a4ecb5-f616-4e14-a138-bcd4925ca95e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-2363181, port_security_enabled=True, project_id=68a48b471ed84048aeb651374fff5111, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38052, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2970, status=ACTIVE, subnets=['cc0ca376-8226-43e9-a0d7-94d125c3a424'], tags=[], tenant_id=68a48b471ed84048aeb651374fff5111, updated_at=2026-02-23T10:01:59Z, vlan_transparent=None, 
network_id=c8a4ecb5-f616-4e14-a138-bcd4925ca95e, port_security_enabled=False, project_id=68a48b471ed84048aeb651374fff5111, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2981, status=DOWN, tags=[], tenant_id=68a48b471ed84048aeb651374fff5111, updated_at=2026-02-23T10:02:00Z on network c8a4ecb5-f616-4e14-a138-bcd4925ca95e#033[00m Feb 23 05:02:02 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:02.337 263679 INFO neutron.agent.dhcp.agent [None req-f470fea8-2504-46c7-b336-967def5d616d - - - - - -] DHCP configuration for ports {'3c8e8a20-a240-47ee-b0dc-f9ea86db764c'} is completed#033[00m Feb 23 05:02:02 localhost dnsmasq[323080]: read /var/lib/neutron/dhcp/c8a4ecb5-f616-4e14-a138-bcd4925ca95e/addn_hosts - 1 addresses Feb 23 05:02:02 localhost dnsmasq-dhcp[323080]: read /var/lib/neutron/dhcp/c8a4ecb5-f616-4e14-a138-bcd4925ca95e/host Feb 23 05:02:02 localhost dnsmasq-dhcp[323080]: read /var/lib/neutron/dhcp/c8a4ecb5-f616-4e14-a138-bcd4925ca95e/opts Feb 23 05:02:02 localhost podman[323135]: 2026-02-23 10:02:02.35522118 +0000 UTC m=+0.057106815 container kill 975e16e6ed9f3d00d1d44fc2ed7be1212c678bca7ab9043b03aa35a7c4012142 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c8a4ecb5-f616-4e14-a138-bcd4925ca95e, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 23 05:02:02 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e211 e211: 6 total, 6 up, 6 in Feb 23 05:02:02 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:02.598 263679 INFO neutron.agent.dhcp.agent [None req-a4d2b675-1496-4bec-b1ea-dfe9d6ccd1de - - - - - -] DHCP configuration 
for ports {'3c8e8a20-a240-47ee-b0dc-f9ea86db764c'} is completed#033[00m Feb 23 05:02:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v457: 177 pgs: 177 active+clean; 194 MiB data, 952 MiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 76 KiB/s wr, 11 op/s Feb 23 05:02:03 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3ce246a5-9df7-419d-94b3-cd5751260b5f", "snap_name": "4304fccc-23e4-4511-ab8d-92fee67e08e5", "format": "json"}]: dispatch Feb 23 05:02:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4304fccc-23e4-4511-ab8d-92fee67e08e5, sub_name:3ce246a5-9df7-419d-94b3-cd5751260b5f, vol_name:cephfs) < "" Feb 23 05:02:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4304fccc-23e4-4511-ab8d-92fee67e08e5, sub_name:3ce246a5-9df7-419d-94b3-cd5751260b5f, vol_name:cephfs) < "" Feb 23 05:02:03 localhost nova_compute[280321]: 2026-02-23 10:02:03.996 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:04 localhost nova_compute[280321]: 2026-02-23 10:02:04.261 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:04 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4", "auth_id": "david", "tenant_id": "1b9d2e21adaa4adab3e6f69b48abf75a", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:02:04 localhost ceph-mgr[285904]: 
[volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, tenant_id:1b9d2e21adaa4adab3e6f69b48abf75a, vol_name:cephfs) < "" Feb 23 05:02:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Feb 23 05:02:04 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Feb 23 05:02:04 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID david with tenant 1b9d2e21adaa4adab3e6f69b48abf75a Feb 23 05:02:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4/555eb612-b64e-47a9-a736-f6e77cc4a819", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:02:04 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4/555eb612-b64e-47a9-a736-f6e77cc4a819", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, 
tenant_id:1b9d2e21adaa4adab3e6f69b48abf75a, vol_name:cephfs) < "" Feb 23 05:02:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 05:02:05 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/989643730' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 05:02:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 05:02:05 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/989643730' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 05:02:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_10:02:05 Feb 23 05:02:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 05:02:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap Feb 23 05:02:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['manila_data', 'manila_metadata', 'vms', '.mgr', 'volumes', 'images', 'backups'] Feb 23 05:02:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes Feb 23 05:02:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:02:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:02:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 05:02:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:02:05 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Feb 23 05:02:05 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4/555eb612-b64e-47a9-a736-f6e77cc4a819", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:05 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4/555eb612-b64e-47a9-a736-f6e77cc4a819", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:05 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4/555eb612-b64e-47a9-a736-f6e77cc4a819", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:02:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:02:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0. 
Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:02:05.194092) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46 Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840925194142, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2801, "num_deletes": 275, "total_data_size": 4864277, "memory_usage": 4952016, "flush_reason": "Manual Compaction"} Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840925208393, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 3180588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26450, "largest_seqno": 29246, "table_properties": {"data_size": 3169010, "index_size": 7443, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 28004, "raw_average_key_size": 22, "raw_value_size": 3144663, "raw_average_value_size": 2579, "num_data_blocks": 311, "num_entries": 1219, "num_filter_entries": 1219, "num_deletions": 275, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840797, "oldest_key_time": 1771840797, "file_creation_time": 1771840925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}} Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 14388 microseconds, and 7261 cpu microseconds. Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:02:05.208488) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 3180588 bytes OK Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:02:05.208515) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:02:05.210248) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:02:05.210273) EVENT_LOG_v1 {"time_micros": 1771840925210266, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:02:05.210294) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 4851043, prev total WAL file 
size 4851043, number of live WAL files 2. Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:02:05.211467) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. '7061786F73003132353531' seq:0, type:0; will stop at (end) Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(3106KB)], [45(14MB)] Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840925211506, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 18594634, "oldest_snapshot_seqno": -1} Feb 23 05:02:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v458: 177 pgs: 177 active+clean; 194 MiB data, 952 MiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 76 KiB/s wr, 11 op/s Feb 23 05:02:05 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "format": "json"}]: dispatch Feb 23 05:02:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7b2a2c48-552a-4612-b861-e74517b6032f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing 
_cmd_fs_clone_status(clone_name:7b2a2c48-552a-4612-b861-e74517b6032f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:05 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:05.286+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7b2a2c48-552a-4612-b861-e74517b6032f' of type subvolume Feb 23 05:02:05 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7b2a2c48-552a-4612-b861-e74517b6032f' of type subvolume Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 13268 keys, 17389509 bytes, temperature: kUnknown Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840925289762, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 17389509, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17313911, "index_size": 41321, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33221, "raw_key_size": 355930, "raw_average_key_size": 26, "raw_value_size": 17087982, "raw_average_value_size": 1287, "num_data_blocks": 1557, "num_entries": 13268, "num_filter_entries": 13268, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; 
enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771840925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}} Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:02:05.290065) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 17389509 bytes Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:02:05.291590) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 237.3 rd, 222.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 14.7 +0.0 blob) out(16.6 +0.0 blob), read-write-amplify(11.3) write-amplify(5.5) OK, records in: 13832, records dropped: 564 output_compression: NoCompression Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:02:05.291617) EVENT_LOG_v1 {"time_micros": 1771840925291605, "job": 26, "event": "compaction_finished", "compaction_time_micros": 78344, "compaction_time_cpu_micros": 45117, "output_level": 6, "num_output_files": 1, "total_output_size": 17389509, "num_input_records": 13832, "num_output_records": 13268, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000047.sst immediately, 
rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840925292389, "job": 26, "event": "table_file_deletion", "file_number": 47} Feb 23 05:02:05 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7b2a2c48-552a-4612-b861-e74517b6032f", "force": true, "format": "json"}]: dispatch Feb 23 05:02:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771840925294352, "job": 26, "event": "table_file_deletion", "file_number": 45} Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:02:05.211371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:02:05.294485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:02:05.294491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:02:05.294495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:02:05.294498) 
[db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:02:05 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:02:05.294501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:02:05 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7b2a2c48-552a-4612-b861-e74517b6032f'' moved to trashcan Feb 23 05:02:05 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:02:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7b2a2c48-552a-4612-b861-e74517b6032f, vol_name:cephfs) < "" Feb 23 05:02:05 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "90cc3386-9efe-4ed5-a2c9-f8ae76a69843", "snap_name": "dc78da5f-f2b8-4261-af0a-8d02dae8fe58", "format": "json"}]: dispatch Feb 23 05:02:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:dc78da5f-f2b8-4261-af0a-8d02dae8fe58, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < "" Feb 23 05:02:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 05:02:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 05:02:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 05:02:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 05:02:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 05:02:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] 
effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:02:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 23 05:02:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:02:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32) Feb 23 05:02:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:02:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Feb 23 05:02:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:02:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 23 05:02:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:02:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 23 05:02:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:02:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.7263051367950866e-06 of space, bias 1.0, pg target 0.0005425347222222222 quantized to 32 (current 32) Feb 23 05:02:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:02:05 localhost ceph-mgr[285904]: 
[pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00015321834868788387 of space, bias 4.0, pg target 0.12196180555555557 quantized to 16 (current 16) Feb 23 05:02:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 05:02:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:dc78da5f-f2b8-4261-af0a-8d02dae8fe58, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < "" Feb 23 05:02:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 05:02:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 05:02:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 05:02:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 05:02:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 05:02:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:02:05 localhost dnsmasq[323080]: read /var/lib/neutron/dhcp/c8a4ecb5-f616-4e14-a138-bcd4925ca95e/addn_hosts - 0 addresses Feb 23 05:02:05 localhost dnsmasq-dhcp[323080]: read /var/lib/neutron/dhcp/c8a4ecb5-f616-4e14-a138-bcd4925ca95e/host Feb 23 05:02:05 localhost podman[323171]: 2026-02-23 10:02:05.503862158 +0000 UTC m=+0.060615931 container kill 975e16e6ed9f3d00d1d44fc2ed7be1212c678bca7ab9043b03aa35a7c4012142 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c8a4ecb5-f616-4e14-a138-bcd4925ca95e, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:02:05 localhost dnsmasq-dhcp[323080]: read /var/lib/neutron/dhcp/c8a4ecb5-f616-4e14-a138-bcd4925ca95e/opts Feb 23 05:02:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e212 e212: 6 total, 6 up, 6 in Feb 23 05:02:05 localhost nova_compute[280321]: 2026-02-23 10:02:05.695 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:05 localhost ovn_controller[155966]: 2026-02-23T10:02:05Z|00360|binding|INFO|Releasing lport 29ca2f4a-78bb-44a7-a4d4-ad1b24f343d9 from this chassis (sb_readonly=0) Feb 23 05:02:05 localhost kernel: device tap29ca2f4a-78 left promiscuous mode Feb 23 05:02:05 localhost ovn_controller[155966]: 2026-02-23T10:02:05Z|00361|binding|INFO|Setting lport 29ca2f4a-78bb-44a7-a4d4-ad1b24f343d9 down in Southbound Feb 23 05:02:05 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:05.710 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-c8a4ecb5-f616-4e14-a138-bcd4925ca95e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8a4ecb5-f616-4e14-a138-bcd4925ca95e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68a48b471ed84048aeb651374fff5111', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad7d9686-17cb-4806-b6ca-adabeec3161e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=29ca2f4a-78bb-44a7-a4d4-ad1b24f343d9) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:02:05 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:05.713 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 29ca2f4a-78bb-44a7-a4d4-ad1b24f343d9 in datapath c8a4ecb5-f616-4e14-a138-bcd4925ca95e unbound from our chassis#033[00m Feb 23 05:02:05 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:05.715 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c8a4ecb5-f616-4e14-a138-bcd4925ca95e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:02:05 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:05.716 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[ee58849c-c7c3-4e7a-acce-882d31f6c779]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:02:05 localhost nova_compute[280321]: 2026-02-23 10:02:05.722 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:06 localhost dnsmasq[323080]: exiting on receipt of SIGTERM Feb 23 05:02:06 localhost podman[323210]: 2026-02-23 10:02:06.437946462 +0000 UTC m=+0.053036510 container kill 975e16e6ed9f3d00d1d44fc2ed7be1212c678bca7ab9043b03aa35a7c4012142 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-c8a4ecb5-f616-4e14-a138-bcd4925ca95e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 23 05:02:06 localhost systemd[1]: libpod-975e16e6ed9f3d00d1d44fc2ed7be1212c678bca7ab9043b03aa35a7c4012142.scope: Deactivated successfully. Feb 23 05:02:06 localhost podman[323223]: 2026-02-23 10:02:06.501882785 +0000 UTC m=+0.048085129 container died 975e16e6ed9f3d00d1d44fc2ed7be1212c678bca7ab9043b03aa35a7c4012142 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c8a4ecb5-f616-4e14-a138-bcd4925ca95e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Feb 23 05:02:06 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-975e16e6ed9f3d00d1d44fc2ed7be1212c678bca7ab9043b03aa35a7c4012142-userdata-shm.mount: Deactivated successfully. Feb 23 05:02:06 localhost systemd[1]: var-lib-containers-storage-overlay-423dd003b1c30564bd0f3f6e9620b74fc5a7c16fd5f200e3897435fa21ac67bf-merged.mount: Deactivated successfully. 
Feb 23 05:02:06 localhost podman[323223]: 2026-02-23 10:02:06.584666082 +0000 UTC m=+0.130868416 container cleanup 975e16e6ed9f3d00d1d44fc2ed7be1212c678bca7ab9043b03aa35a7c4012142 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c8a4ecb5-f616-4e14-a138-bcd4925ca95e, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 23 05:02:06 localhost systemd[1]: libpod-conmon-975e16e6ed9f3d00d1d44fc2ed7be1212c678bca7ab9043b03aa35a7c4012142.scope: Deactivated successfully. Feb 23 05:02:06 localhost podman[323224]: 2026-02-23 10:02:06.60653131 +0000 UTC m=+0.145245826 container remove 975e16e6ed9f3d00d1d44fc2ed7be1212c678bca7ab9043b03aa35a7c4012142 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c8a4ecb5-f616-4e14-a138-bcd4925ca95e, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0) Feb 23 05:02:07 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:06.999 263679 INFO neutron.agent.dhcp.agent [None req-39e96b87-3603-45dc-8702-08ced19b12db - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:02:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v460: 177 pgs: 177 active+clean; 194 MiB data, 953 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 140 KiB/s wr, 49 op/s Feb 23 05:02:07 localhost neutron_dhcp_agent[263675]: 2026-02-23 
10:02:07.323 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:02:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 05:02:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 05:02:07 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e213 e213: 6 total, 6 up, 6 in Feb 23 05:02:07 localhost podman[323253]: 2026-02-23 10:02:07.507564995 +0000 UTC m=+0.078820018 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter) Feb 23 05:02:07 localhost 
nova_compute[280321]: 2026-02-23 10:02:07.509 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:07 localhost systemd[1]: run-netns-qdhcp\x2dc8a4ecb5\x2df616\x2d4e14\x2da138\x2dbcd4925ca95e.mount: Deactivated successfully. Feb 23 05:02:07 localhost podman[323252]: 2026-02-23 10:02:07.576082387 +0000 UTC m=+0.149351032 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 05:02:07 localhost podman[323253]: 2026-02-23 10:02:07.599844793 +0000 UTC m=+0.171099826 
container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, distribution-scope=public, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Feb 23 05:02:07 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 05:02:07 localhost podman[323252]: 2026-02-23 10:02:07.614901692 +0000 UTC m=+0.188170297 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 05:02:07 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 05:02:07 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3ce246a5-9df7-419d-94b3-cd5751260b5f", "snap_name": "4304fccc-23e4-4511-ab8d-92fee67e08e5_64c05dae-7e24-484d-a0fc-c9c71e45796d", "force": true, "format": "json"}]: dispatch Feb 23 05:02:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4304fccc-23e4-4511-ab8d-92fee67e08e5_64c05dae-7e24-484d-a0fc-c9c71e45796d, sub_name:3ce246a5-9df7-419d-94b3-cd5751260b5f, vol_name:cephfs) < "" Feb 23 05:02:07 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3ce246a5-9df7-419d-94b3-cd5751260b5f/.meta.tmp' Feb 23 05:02:07 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3ce246a5-9df7-419d-94b3-cd5751260b5f/.meta.tmp' to config b'/volumes/_nogroup/3ce246a5-9df7-419d-94b3-cd5751260b5f/.meta' Feb 23 05:02:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4304fccc-23e4-4511-ab8d-92fee67e08e5_64c05dae-7e24-484d-a0fc-c9c71e45796d, sub_name:3ce246a5-9df7-419d-94b3-cd5751260b5f, vol_name:cephfs) < "" Feb 23 05:02:07 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3ce246a5-9df7-419d-94b3-cd5751260b5f", "snap_name": "4304fccc-23e4-4511-ab8d-92fee67e08e5", "force": true, "format": "json"}]: dispatch Feb 23 05:02:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:4304fccc-23e4-4511-ab8d-92fee67e08e5, sub_name:3ce246a5-9df7-419d-94b3-cd5751260b5f, vol_name:cephfs) < "" Feb 23 05:02:07 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3ce246a5-9df7-419d-94b3-cd5751260b5f/.meta.tmp' Feb 23 05:02:07 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3ce246a5-9df7-419d-94b3-cd5751260b5f/.meta.tmp' to config b'/volumes/_nogroup/3ce246a5-9df7-419d-94b3-cd5751260b5f/.meta' Feb 23 05:02:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4304fccc-23e4-4511-ab8d-92fee67e08e5, sub_name:3ce246a5-9df7-419d-94b3-cd5751260b5f, vol_name:cephfs) < "" Feb 23 05:02:07 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 05:02:07 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1962019075' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 05:02:07 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 05:02:07 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1962019075' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 05:02:07 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d958ade4-5b7f-45eb-b23d-cb42046e5d2f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:02:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d958ade4-5b7f-45eb-b23d-cb42046e5d2f, vol_name:cephfs) < "" Feb 23 05:02:07 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d958ade4-5b7f-45eb-b23d-cb42046e5d2f/.meta.tmp' Feb 23 05:02:07 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d958ade4-5b7f-45eb-b23d-cb42046e5d2f/.meta.tmp' to config b'/volumes/_nogroup/d958ade4-5b7f-45eb-b23d-cb42046e5d2f/.meta' Feb 23 05:02:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d958ade4-5b7f-45eb-b23d-cb42046e5d2f, vol_name:cephfs) < "" Feb 23 05:02:07 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d958ade4-5b7f-45eb-b23d-cb42046e5d2f", "format": "json"}]: dispatch Feb 23 05:02:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d958ade4-5b7f-45eb-b23d-cb42046e5d2f, vol_name:cephfs) < "" Feb 23 05:02:07 localhost ceph-mgr[285904]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d958ade4-5b7f-45eb-b23d-cb42046e5d2f, vol_name:cephfs) < "" Feb 23 05:02:08 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "90cc3386-9efe-4ed5-a2c9-f8ae76a69843", "snap_name": "dc78da5f-f2b8-4261-af0a-8d02dae8fe58_20af97ff-9f0c-4a59-ae24-cffe3dd99ad7", "force": true, "format": "json"}]: dispatch Feb 23 05:02:08 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:dc78da5f-f2b8-4261-af0a-8d02dae8fe58_20af97ff-9f0c-4a59-ae24-cffe3dd99ad7, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < "" Feb 23 05:02:08 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/90cc3386-9efe-4ed5-a2c9-f8ae76a69843/.meta.tmp' Feb 23 05:02:08 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/90cc3386-9efe-4ed5-a2c9-f8ae76a69843/.meta.tmp' to config b'/volumes/_nogroup/90cc3386-9efe-4ed5-a2c9-f8ae76a69843/.meta' Feb 23 05:02:08 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:dc78da5f-f2b8-4261-af0a-8d02dae8fe58_20af97ff-9f0c-4a59-ae24-cffe3dd99ad7, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < "" Feb 23 05:02:08 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "90cc3386-9efe-4ed5-a2c9-f8ae76a69843", "snap_name": "dc78da5f-f2b8-4261-af0a-8d02dae8fe58", "force": true, "format": "json"}]: dispatch 
Feb 23 05:02:08 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:dc78da5f-f2b8-4261-af0a-8d02dae8fe58, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < ""
Feb 23 05:02:08 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/90cc3386-9efe-4ed5-a2c9-f8ae76a69843/.meta.tmp'
Feb 23 05:02:08 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/90cc3386-9efe-4ed5-a2c9-f8ae76a69843/.meta.tmp' to config b'/volumes/_nogroup/90cc3386-9efe-4ed5-a2c9-f8ae76a69843/.meta'
Feb 23 05:02:08 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:dc78da5f-f2b8-4261-af0a-8d02dae8fe58, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < ""
Feb 23 05:02:09 localhost nova_compute[280321]: 2026-02-23 10:02:09.024 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:09 localhost neutron_sriov_agent[256355]: 2026-02-23 10:02:09.223 2 INFO neutron.agent.securitygroups_rpc [None req-34a9c0cf-78ea-4a50-a570-1c89dfa87f59 ccd9ce6e3fef42b59d2107f1a22eac97 68a48b471ed84048aeb651374fff5111 - - default default] Security group member updated ['712b70a2-0074-4f4c-8d5a-c22b0f563b07']#033[00m
Feb 23 05:02:09 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:09.259 263679 INFO neutron.agent.linux.ip_lib [None req-f03072c7-cc1d-41b1-85d3-1a41ffdffa5d - - - - - -] Device tap08029e5b-7f cannot be used as it has no MAC address#033[00m
Feb 23 05:02:09 localhost nova_compute[280321]: 2026-02-23 10:02:09.263 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v462: 177 pgs: 177 active+clean; 194 MiB data, 953 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 60 KiB/s wr, 34 op/s
Feb 23 05:02:09 localhost nova_compute[280321]: 2026-02-23 10:02:09.284 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:09 localhost kernel: device tap08029e5b-7f entered promiscuous mode
Feb 23 05:02:09 localhost nova_compute[280321]: 2026-02-23 10:02:09.291 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:09 localhost NetworkManager[5987]: [1771840929.2945] manager: (tap08029e5b-7f): new Generic device (/org/freedesktop/NetworkManager/Devices/65)
Feb 23 05:02:09 localhost ovn_controller[155966]: 2026-02-23T10:02:09Z|00362|binding|INFO|Claiming lport 08029e5b-7f35-42da-813a-c0d2a66a88d3 for this chassis.
Feb 23 05:02:09 localhost ovn_controller[155966]: 2026-02-23T10:02:09Z|00363|binding|INFO|08029e5b-7f35-42da-813a-c0d2a66a88d3: Claiming unknown
Feb 23 05:02:09 localhost systemd-udevd[323305]: Network interface NamePolicy= disabled on kernel command line.
Feb 23 05:02:09 localhost nova_compute[280321]: 2026-02-23 10:02:09.306 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:09 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:09.304 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-940995a3-8429-4280-b73d-df96939f9fe6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-940995a3-8429-4280-b73d-df96939f9fe6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68a48b471ed84048aeb651374fff5111', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90d3f873-9316-455f-b584-b8a2bd144c63, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=08029e5b-7f35-42da-813a-c0d2a66a88d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 23 05:02:09 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:09.305 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 08029e5b-7f35-42da-813a-c0d2a66a88d3 in datapath 940995a3-8429-4280-b73d-df96939f9fe6 bound to our chassis#033[00m
Feb 23 05:02:09 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:09.309 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 940995a3-8429-4280-b73d-df96939f9fe6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 23 05:02:09 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:09.310 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca1a20a-1948-4911-9096-fdff5a72e1c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 23 05:02:09 localhost ovn_controller[155966]: 2026-02-23T10:02:09Z|00364|binding|INFO|Setting lport 08029e5b-7f35-42da-813a-c0d2a66a88d3 ovn-installed in OVS
Feb 23 05:02:09 localhost ovn_controller[155966]: 2026-02-23T10:02:09Z|00365|binding|INFO|Setting lport 08029e5b-7f35-42da-813a-c0d2a66a88d3 up in Southbound
Feb 23 05:02:09 localhost nova_compute[280321]: 2026-02-23 10:02:09.329 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:09 localhost nova_compute[280321]: 2026-02-23 10:02:09.361 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:09 localhost nova_compute[280321]: 2026-02-23 10:02:09.389 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:10 localhost podman[323360]:
Feb 23 05:02:10 localhost podman[323360]: 2026-02-23 10:02:10.085773874 +0000 UTC m=+0.082024155 container create 9ad257928bc6a8216f04d67d289d975e452d471f96844418ba3378bb7b2853ec (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-940995a3-8429-4280-b73d-df96939f9fe6, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 23 05:02:10 localhost systemd[1]: Started libpod-conmon-9ad257928bc6a8216f04d67d289d975e452d471f96844418ba3378bb7b2853ec.scope.
Feb 23 05:02:10 localhost systemd[1]: Started libcrun container.
Feb 23 05:02:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e51eb49c121b028ffb5c0cd203164c55c190b564d5de1853d60509d19d16e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 23 05:02:10 localhost podman[323360]: 2026-02-23 10:02:10.04405279 +0000 UTC m=+0.040303071 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 23 05:02:10 localhost podman[323360]: 2026-02-23 10:02:10.150606484 +0000 UTC m=+0.146856735 container init 9ad257928bc6a8216f04d67d289d975e452d471f96844418ba3378bb7b2853ec (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-940995a3-8429-4280-b73d-df96939f9fe6, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 23 05:02:10 localhost podman[323360]: 2026-02-23 10:02:10.159724602 +0000 UTC m=+0.155974853 container start 9ad257928bc6a8216f04d67d289d975e452d471f96844418ba3378bb7b2853ec (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-940995a3-8429-4280-b73d-df96939f9fe6, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 23 05:02:10 localhost dnsmasq[323379]: started, version 2.85 cachesize 150
Feb 23 05:02:10 localhost dnsmasq[323379]: DNS service limited to local subnets
Feb 23 05:02:10 localhost dnsmasq[323379]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 23 05:02:10 localhost dnsmasq[323379]: warning: no upstream servers configured
Feb 23 05:02:10 localhost dnsmasq-dhcp[323379]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 23 05:02:10 localhost dnsmasq[323379]: read /var/lib/neutron/dhcp/940995a3-8429-4280-b73d-df96939f9fe6/addn_hosts - 0 addresses
Feb 23 05:02:10 localhost dnsmasq-dhcp[323379]: read /var/lib/neutron/dhcp/940995a3-8429-4280-b73d-df96939f9fe6/host
Feb 23 05:02:10 localhost dnsmasq-dhcp[323379]: read /var/lib/neutron/dhcp/940995a3-8429-4280-b73d-df96939f9fe6/opts
Feb 23 05:02:10 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:10.224 263679 INFO neutron.agent.dhcp.agent [None req-f03072c7-cc1d-41b1-85d3-1a41ffdffa5d - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:02:08Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8216cea6-3bce-49ec-9369-0529b00e59b9, ip_allocation=immediate, mac_address=fa:16:3e:0d:b2:77, name=tempest-RoutersIpV6Test-901599479, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:02:07Z, description=, dns_domain=, id=940995a3-8429-4280-b73d-df96939f9fe6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-2086256665, port_security_enabled=True, project_id=68a48b471ed84048aeb651374fff5111, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=62832, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2983, status=ACTIVE, subnets=['ef8779ef-e5be-4e08-abe9-edb3a6d800f8'], tags=[], tenant_id=68a48b471ed84048aeb651374fff5111, updated_at=2026-02-23T10:02:08Z, vlan_transparent=None, network_id=940995a3-8429-4280-b73d-df96939f9fe6, port_security_enabled=True, project_id=68a48b471ed84048aeb651374fff5111, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['712b70a2-0074-4f4c-8d5a-c22b0f563b07'], standard_attr_id=2989, status=DOWN, tags=[], tenant_id=68a48b471ed84048aeb651374fff5111, updated_at=2026-02-23T10:02:09Z on network 940995a3-8429-4280-b73d-df96939f9fe6#033[00m
Feb 23 05:02:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e214 e214: 6 total, 6 up, 6 in
Feb 23 05:02:10 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:10.294 263679 INFO neutron.agent.dhcp.agent [None req-7e516e83-0ee6-472e-bf99-165c6551e9bd - - - - - -] DHCP configuration for ports {'67190160-7bca-4846-9a69-b6cb4579b8e5'} is completed#033[00m
Feb 23 05:02:10 localhost dnsmasq[323379]: read /var/lib/neutron/dhcp/940995a3-8429-4280-b73d-df96939f9fe6/addn_hosts - 1 addresses
Feb 23 05:02:10 localhost dnsmasq-dhcp[323379]: read /var/lib/neutron/dhcp/940995a3-8429-4280-b73d-df96939f9fe6/host
Feb 23 05:02:10 localhost dnsmasq-dhcp[323379]: read /var/lib/neutron/dhcp/940995a3-8429-4280-b73d-df96939f9fe6/opts
Feb 23 05:02:10 localhost podman[323398]: 2026-02-23 10:02:10.412024557 +0000 UTC m=+0.057204428 container kill 9ad257928bc6a8216f04d67d289d975e452d471f96844418ba3378bb7b2853ec (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-940995a3-8429-4280-b73d-df96939f9fe6, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_managed=true)
Feb 23 05:02:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:02:10 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:10.638 263679 INFO neutron.agent.dhcp.agent [None req-aef2c826-1078-40fc-94fd-a5c5a8b03393 - - - - - -] DHCP configuration for ports {'8216cea6-3bce-49ec-9369-0529b00e59b9'} is completed#033[00m
Feb 23 05:02:10 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:10.734 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:02:08Z, description=, device_id=a0ee630c-9930-473a-9bef-be387001d669, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8216cea6-3bce-49ec-9369-0529b00e59b9, ip_allocation=immediate, mac_address=fa:16:3e:0d:b2:77, name=tempest-RoutersIpV6Test-901599479, network_id=940995a3-8429-4280-b73d-df96939f9fe6, port_security_enabled=True, project_id=68a48b471ed84048aeb651374fff5111, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['712b70a2-0074-4f4c-8d5a-c22b0f563b07'], standard_attr_id=2989, status=DOWN, tags=[], tenant_id=68a48b471ed84048aeb651374fff5111, updated_at=2026-02-23T10:02:09Z on network 940995a3-8429-4280-b73d-df96939f9fe6#033[00m
Feb 23 05:02:10 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3ce246a5-9df7-419d-94b3-cd5751260b5f", "format": "json"}]: dispatch
Feb 23 05:02:10 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3ce246a5-9df7-419d-94b3-cd5751260b5f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:02:10 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3ce246a5-9df7-419d-94b3-cd5751260b5f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:02:10 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:10.817+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3ce246a5-9df7-419d-94b3-cd5751260b5f' of type subvolume
Feb 23 05:02:10 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3ce246a5-9df7-419d-94b3-cd5751260b5f' of type subvolume
Feb 23 05:02:10 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3ce246a5-9df7-419d-94b3-cd5751260b5f", "force": true, "format": "json"}]: dispatch
Feb 23 05:02:10 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3ce246a5-9df7-419d-94b3-cd5751260b5f, vol_name:cephfs) < ""
Feb 23 05:02:10 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3ce246a5-9df7-419d-94b3-cd5751260b5f'' moved to trashcan
Feb 23 05:02:10 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 23 05:02:10 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3ce246a5-9df7-419d-94b3-cd5751260b5f, vol_name:cephfs) < ""
Feb 23 05:02:10 localhost dnsmasq[323379]: read /var/lib/neutron/dhcp/940995a3-8429-4280-b73d-df96939f9fe6/addn_hosts - 1 addresses
Feb 23 05:02:10 localhost dnsmasq-dhcp[323379]: read /var/lib/neutron/dhcp/940995a3-8429-4280-b73d-df96939f9fe6/host
Feb 23 05:02:10 localhost podman[323438]: 2026-02-23 10:02:10.885518996 +0000 UTC m=+0.057746745 container kill 9ad257928bc6a8216f04d67d289d975e452d471f96844418ba3378bb7b2853ec (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-940995a3-8429-4280-b73d-df96939f9fe6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2)
Feb 23 05:02:10 localhost dnsmasq-dhcp[323379]: read /var/lib/neutron/dhcp/940995a3-8429-4280-b73d-df96939f9fe6/opts
Feb 23 05:02:11 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:11.097 263679 INFO neutron.agent.dhcp.agent [None req-6ecec748-208c-46ff-bbac-923cf196bba5 - - - - - -] DHCP configuration for ports {'8216cea6-3bce-49ec-9369-0529b00e59b9'} is completed#033[00m
Feb 23 05:02:11 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d958ade4-5b7f-45eb-b23d-cb42046e5d2f", "auth_id": "david", "tenant_id": "4d2b2d5862b8427aac5a9c709976e3ff", "access_level": "rw", "format": "json"}]: dispatch
Feb 23 05:02:11 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:d958ade4-5b7f-45eb-b23d-cb42046e5d2f, tenant_id:4d2b2d5862b8427aac5a9c709976e3ff, vol_name:cephfs) < ""
Feb 23 05:02:11 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Feb 23 05:02:11 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb 23 05:02:11 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e215 e215: 6 total, 6 up, 6 in
Feb 23 05:02:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v464: 177 pgs: 177 active+clean; 194 MiB data, 954 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 127 KiB/s wr, 52 op/s
Feb 23 05:02:11 localhost ceph-mgr[285904]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use
Feb 23 05:02:11 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:d958ade4-5b7f-45eb-b23d-cb42046e5d2f, tenant_id:4d2b2d5862b8427aac5a9c709976e3ff, vol_name:cephfs) < ""
Feb 23 05:02:11 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:11.288+0000 7fc3ba4ad640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Feb 23 05:02:11 localhost ceph-mgr[285904]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Feb 23 05:02:11 localhost neutron_sriov_agent[256355]: 2026-02-23 10:02:11.788 2 INFO neutron.agent.securitygroups_rpc [None req-90d4154c-098d-45e4-8f77-47336730be40 ccd9ce6e3fef42b59d2107f1a22eac97 68a48b471ed84048aeb651374fff5111 - - default default] Security group member updated ['712b70a2-0074-4f4c-8d5a-c22b0f563b07']#033[00m
Feb 23 05:02:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.
Feb 23 05:02:11 localhost systemd[1]: tmp-crun.1kjNBS.mount: Deactivated successfully.
Feb 23 05:02:11 localhost dnsmasq[323379]: read /var/lib/neutron/dhcp/940995a3-8429-4280-b73d-df96939f9fe6/addn_hosts - 0 addresses
Feb 23 05:02:11 localhost dnsmasq-dhcp[323379]: read /var/lib/neutron/dhcp/940995a3-8429-4280-b73d-df96939f9fe6/host
Feb 23 05:02:11 localhost podman[323476]: 2026-02-23 10:02:11.971042934 +0000 UTC m=+0.070062311 container kill 9ad257928bc6a8216f04d67d289d975e452d471f96844418ba3378bb7b2853ec (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-940995a3-8429-4280-b73d-df96939f9fe6, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 23 05:02:11 localhost dnsmasq-dhcp[323379]: read /var/lib/neutron/dhcp/940995a3-8429-4280-b73d-df96939f9fe6/opts
Feb 23 05:02:12 localhost podman[323484]: 2026-02-23 10:02:12.019157263 +0000 UTC m=+0.091135284 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 23 05:02:12 localhost podman[323484]: 2026-02-23 10:02:12.082970782 +0000 UTC m=+0.154948783 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 23 05:02:12 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully.
Feb 23 05:02:12 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "90cc3386-9efe-4ed5-a2c9-f8ae76a69843", "snap_name": "e9f999c5-8797-4bf3-86d8-5475823561e0_76bf6727-66ba-4b02-aadb-4af6ec6b8ba3", "force": true, "format": "json"}]: dispatch
Feb 23 05:02:12 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e9f999c5-8797-4bf3-86d8-5475823561e0_76bf6727-66ba-4b02-aadb-4af6ec6b8ba3, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < ""
Feb 23 05:02:12 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/90cc3386-9efe-4ed5-a2c9-f8ae76a69843/.meta.tmp'
Feb 23 05:02:12 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/90cc3386-9efe-4ed5-a2c9-f8ae76a69843/.meta.tmp' to config b'/volumes/_nogroup/90cc3386-9efe-4ed5-a2c9-f8ae76a69843/.meta'
Feb 23 05:02:12 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e9f999c5-8797-4bf3-86d8-5475823561e0_76bf6727-66ba-4b02-aadb-4af6ec6b8ba3, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < ""
Feb 23 05:02:12 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "90cc3386-9efe-4ed5-a2c9-f8ae76a69843", "snap_name": "e9f999c5-8797-4bf3-86d8-5475823561e0", "force": true, "format": "json"}]: dispatch
Feb 23 05:02:12 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e9f999c5-8797-4bf3-86d8-5475823561e0, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < ""
Feb 23 05:02:12 localhost ovn_controller[155966]: 2026-02-23T10:02:12Z|00366|binding|INFO|Releasing lport 08029e5b-7f35-42da-813a-c0d2a66a88d3 from this chassis (sb_readonly=0)
Feb 23 05:02:12 localhost ovn_controller[155966]: 2026-02-23T10:02:12Z|00367|binding|INFO|Setting lport 08029e5b-7f35-42da-813a-c0d2a66a88d3 down in Southbound
Feb 23 05:02:12 localhost nova_compute[280321]: 2026-02-23 10:02:12.140 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:12 localhost kernel: device tap08029e5b-7f left promiscuous mode
Feb 23 05:02:12 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:12.150 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-940995a3-8429-4280-b73d-df96939f9fe6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-940995a3-8429-4280-b73d-df96939f9fe6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68a48b471ed84048aeb651374fff5111', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90d3f873-9316-455f-b584-b8a2bd144c63, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=08029e5b-7f35-42da-813a-c0d2a66a88d3) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 23 05:02:12 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:12.152 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 08029e5b-7f35-42da-813a-c0d2a66a88d3 in datapath 940995a3-8429-4280-b73d-df96939f9fe6 unbound from our chassis#033[00m
Feb 23 05:02:12 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:12.153 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 940995a3-8429-4280-b73d-df96939f9fe6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 23 05:02:12 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:12.154 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[9f007018-20fa-434a-b810-55c3af6354fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 23 05:02:12 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/90cc3386-9efe-4ed5-a2c9-f8ae76a69843/.meta.tmp'
Feb 23 05:02:12 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/90cc3386-9efe-4ed5-a2c9-f8ae76a69843/.meta.tmp' to config b'/volumes/_nogroup/90cc3386-9efe-4ed5-a2c9-f8ae76a69843/.meta'
Feb 23 05:02:12 localhost nova_compute[280321]: 2026-02-23 10:02:12.166 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:12 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e9f999c5-8797-4bf3-86d8-5475823561e0, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < ""
Feb 23 05:02:12 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb 23 05:02:12 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e216 e216: 6 total, 6 up, 6 in
Feb 23 05:02:12 localhost podman[241086]: time="2026-02-23T10:02:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 23 05:02:12 localhost podman[241086]: @ - - [23/Feb/2026:10:02:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155887 "" "Go-http-client/1.1"
Feb 23 05:02:12 localhost podman[241086]: @ - - [23/Feb/2026:10:02:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18296 "" "Go-http-client/1.1"
Feb 23 05:02:12 localhost dnsmasq[323379]: exiting on receipt of SIGTERM
Feb 23 05:02:12 localhost podman[323540]: 2026-02-23 10:02:12.937664291 +0000 UTC m=+0.061107217 container kill 9ad257928bc6a8216f04d67d289d975e452d471f96844418ba3378bb7b2853ec (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-940995a3-8429-4280-b73d-df96939f9fe6, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 23 05:02:12 localhost systemd[1]: libpod-9ad257928bc6a8216f04d67d289d975e452d471f96844418ba3378bb7b2853ec.scope: Deactivated successfully.
Feb 23 05:02:12 localhost systemd[1]: tmp-crun.zpiGP0.mount: Deactivated successfully.
Feb 23 05:02:13 localhost podman[323552]: 2026-02-23 10:02:13.010157255 +0000 UTC m=+0.057973312 container died 9ad257928bc6a8216f04d67d289d975e452d471f96844418ba3378bb7b2853ec (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-940995a3-8429-4280-b73d-df96939f9fe6, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 23 05:02:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9ad257928bc6a8216f04d67d289d975e452d471f96844418ba3378bb7b2853ec-userdata-shm.mount: Deactivated successfully.
Feb 23 05:02:13 localhost podman[323552]: 2026-02-23 10:02:13.042906445 +0000 UTC m=+0.090722462 container cleanup 9ad257928bc6a8216f04d67d289d975e452d471f96844418ba3378bb7b2853ec (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-940995a3-8429-4280-b73d-df96939f9fe6, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 23 05:02:13 localhost systemd[1]: libpod-conmon-9ad257928bc6a8216f04d67d289d975e452d471f96844418ba3378bb7b2853ec.scope: Deactivated successfully. Feb 23 05:02:13 localhost podman[323554]: 2026-02-23 10:02:13.095634354 +0000 UTC m=+0.137354655 container remove 9ad257928bc6a8216f04d67d289d975e452d471f96844418ba3378bb7b2853ec (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-940995a3-8429-4280-b73d-df96939f9fe6, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:02:13 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:13.121 263679 INFO neutron.agent.dhcp.agent [None req-90348242-3723-4cd4-a095-b5de112627aa - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:02:13 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:13.247 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:02:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v467: 
177 pgs: 177 active+clean; 194 MiB data, 954 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 62 KiB/s wr, 36 op/s Feb 23 05:02:13 localhost nova_compute[280321]: 2026-02-23 10:02:13.575 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:13 localhost systemd[1]: var-lib-containers-storage-overlay-02e51eb49c121b028ffb5c0cd203164c55c190b564d5de1853d60509d19d16e5-merged.mount: Deactivated successfully. Feb 23 05:02:13 localhost systemd[1]: run-netns-qdhcp\x2d940995a3\x2d8429\x2d4280\x2db73d\x2ddf96939f9fe6.mount: Deactivated successfully. Feb 23 05:02:14 localhost nova_compute[280321]: 2026-02-23 10:02:14.063 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:14 localhost nova_compute[280321]: 2026-02-23 10:02:14.265 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:14 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d958ade4-5b7f-45eb-b23d-cb42046e5d2f", "auth_id": "david", "format": "json"}]: dispatch Feb 23 05:02:14 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:d958ade4-5b7f-45eb-b23d-cb42046e5d2f, vol_name:cephfs) < "" Feb 23 05:02:14 localhost ceph-mgr[285904]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume 'd958ade4-5b7f-45eb-b23d-cb42046e5d2f' Feb 23 05:02:14 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, 
sub_name:d958ade4-5b7f-45eb-b23d-cb42046e5d2f, vol_name:cephfs) < "" Feb 23 05:02:14 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d958ade4-5b7f-45eb-b23d-cb42046e5d2f", "auth_id": "david", "format": "json"}]: dispatch Feb 23 05:02:14 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:d958ade4-5b7f-45eb-b23d-cb42046e5d2f, vol_name:cephfs) < "" Feb 23 05:02:14 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/d958ade4-5b7f-45eb-b23d-cb42046e5d2f/7b3f1ec9-37eb-4781-a710-21b4c27d3f21 Feb 23 05:02:14 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:02:14 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:d958ade4-5b7f-45eb-b23d-cb42046e5d2f, vol_name:cephfs) < "" Feb 23 05:02:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v468: 177 pgs: 177 active+clean; 194 MiB data, 954 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 60 KiB/s wr, 34 op/s Feb 23 05:02:15 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "90cc3386-9efe-4ed5-a2c9-f8ae76a69843", "format": "json"}]: dispatch Feb 23 05:02:15 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:15 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing 
_cmd_fs_clone_status(clone_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:15 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '90cc3386-9efe-4ed5-a2c9-f8ae76a69843' of type subvolume Feb 23 05:02:15 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:15.338+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '90cc3386-9efe-4ed5-a2c9-f8ae76a69843' of type subvolume Feb 23 05:02:15 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "90cc3386-9efe-4ed5-a2c9-f8ae76a69843", "force": true, "format": "json"}]: dispatch Feb 23 05:02:15 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < "" Feb 23 05:02:15 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/90cc3386-9efe-4ed5-a2c9-f8ae76a69843'' moved to trashcan Feb 23 05:02:15 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:02:15 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:90cc3386-9efe-4ed5-a2c9-f8ae76a69843, vol_name:cephfs) < "" Feb 23 05:02:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:02:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v469: 177 pgs: 177 active+clean; 195 MiB data, 973 MiB 
used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 97 KiB/s wr, 36 op/s Feb 23 05:02:17 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e217 e217: 6 total, 6 up, 6 in Feb 23 05:02:17 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4f53052b-0441-47a6-9bd6-84191fbc6dcb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:02:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4f53052b-0441-47a6-9bd6-84191fbc6dcb, vol_name:cephfs) < "" Feb 23 05:02:17 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4f53052b-0441-47a6-9bd6-84191fbc6dcb/.meta.tmp' Feb 23 05:02:17 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4f53052b-0441-47a6-9bd6-84191fbc6dcb/.meta.tmp' to config b'/volumes/_nogroup/4f53052b-0441-47a6-9bd6-84191fbc6dcb/.meta' Feb 23 05:02:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4f53052b-0441-47a6-9bd6-84191fbc6dcb, vol_name:cephfs) < "" Feb 23 05:02:17 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4f53052b-0441-47a6-9bd6-84191fbc6dcb", "format": "json"}]: dispatch Feb 23 05:02:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4f53052b-0441-47a6-9bd6-84191fbc6dcb, vol_name:cephfs) < "" Feb 23 05:02:17 
localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4f53052b-0441-47a6-9bd6-84191fbc6dcb, vol_name:cephfs) < "" Feb 23 05:02:18 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4", "auth_id": "david", "format": "json"}]: dispatch Feb 23 05:02:18 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, vol_name:cephfs) < "" Feb 23 05:02:18 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Feb 23 05:02:18 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Feb 23 05:02:18 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0) Feb 23 05:02:18 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Feb 23 05:02:18 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, vol_name:cephfs) < "" Feb 23 05:02:18 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4", "auth_id": 
"david", "format": "json"}]: dispatch Feb 23 05:02:18 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, vol_name:cephfs) < "" Feb 23 05:02:18 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4/555eb612-b64e-47a9-a736-f6e77cc4a819 Feb 23 05:02:18 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:02:18 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, vol_name:cephfs) < "" Feb 23 05:02:18 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Feb 23 05:02:18 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Feb 23 05:02:18 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Feb 23 05:02:18 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Feb 23 05:02:18 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e218 e218: 6 total, 6 up, 6 in Feb 23 05:02:19 localhost nova_compute[280321]: 2026-02-23 10:02:19.093 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:19 localhost nova_compute[280321]: 2026-02-23 10:02:19.266 280325 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v472: 177 pgs: 177 active+clean; 195 MiB data, 973 MiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 47 KiB/s wr, 6 op/s Feb 23 05:02:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 05:02:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 05:02:20 localhost systemd[1]: tmp-crun.uCqx35.mount: Deactivated successfully. Feb 23 05:02:20 localhost podman[323586]: 2026-02-23 10:02:20.01477206 +0000 UTC m=+0.094225718 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible) Feb 23 05:02:20 localhost podman[323586]: 2026-02-23 10:02:20.044516788 +0000 UTC m=+0.123970486 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 23 05:02:20 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 05:02:20 localhost podman[323587]: 2026-02-23 10:02:20.059166716 +0000 UTC m=+0.138136059 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 
'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:02:20 localhost podman[323587]: 2026-02-23 10:02:20.06781462 +0000 UTC m=+0.146784013 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute) Feb 23 05:02:20 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 05:02:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:02:20 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "22eec113-2c60-4be1-9ecd-9ef9c2418dbe", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:02:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:22eec113-2c60-4be1-9ecd-9ef9c2418dbe, vol_name:cephfs) < "" Feb 23 05:02:20 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/22eec113-2c60-4be1-9ecd-9ef9c2418dbe/.meta.tmp' Feb 23 05:02:20 localhost ceph-mgr[285904]: 
[volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/22eec113-2c60-4be1-9ecd-9ef9c2418dbe/.meta.tmp' to config b'/volumes/_nogroup/22eec113-2c60-4be1-9ecd-9ef9c2418dbe/.meta' Feb 23 05:02:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:22eec113-2c60-4be1-9ecd-9ef9c2418dbe, vol_name:cephfs) < "" Feb 23 05:02:20 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "22eec113-2c60-4be1-9ecd-9ef9c2418dbe", "format": "json"}]: dispatch Feb 23 05:02:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:22eec113-2c60-4be1-9ecd-9ef9c2418dbe, vol_name:cephfs) < "" Feb 23 05:02:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:22eec113-2c60-4be1-9ecd-9ef9c2418dbe, vol_name:cephfs) < "" Feb 23 05:02:21 localhost systemd[1]: tmp-crun.CRe3pj.mount: Deactivated successfully. 
Feb 23 05:02:21 localhost sshd[323624]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:02:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v473: 177 pgs: 177 active+clean; 195 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 3.9 KiB/s rd, 82 KiB/s wr, 15 op/s Feb 23 05:02:22 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d958ade4-5b7f-45eb-b23d-cb42046e5d2f", "format": "json"}]: dispatch Feb 23 05:02:22 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d958ade4-5b7f-45eb-b23d-cb42046e5d2f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:22 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d958ade4-5b7f-45eb-b23d-cb42046e5d2f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:22 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd958ade4-5b7f-45eb-b23d-cb42046e5d2f' of type subvolume Feb 23 05:02:22 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:22.326+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd958ade4-5b7f-45eb-b23d-cb42046e5d2f' of type subvolume Feb 23 05:02:22 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d958ade4-5b7f-45eb-b23d-cb42046e5d2f", "force": true, "format": "json"}]: dispatch Feb 23 05:02:22 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d958ade4-5b7f-45eb-b23d-cb42046e5d2f, vol_name:cephfs) < "" Feb 23 05:02:22 
localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d958ade4-5b7f-45eb-b23d-cb42046e5d2f'' moved to trashcan
Feb 23 05:02:22 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 23 05:02:22 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d958ade4-5b7f-45eb-b23d-cb42046e5d2f, vol_name:cephfs) < ""
Feb 23 05:02:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v474: 177 pgs: 177 active+clean; 195 MiB data, 992 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 83 KiB/s wr, 31 op/s
Feb 23 05:02:24 localhost nova_compute[280321]: 2026-02-23 10:02:24.130 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:24 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff275bf0-7ab0-4200-9dfe-d972931f7856", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < ""
Feb 23 05:02:24 localhost nova_compute[280321]: 2026-02-23 10:02:24.268 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/.meta.tmp'
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/.meta.tmp' to config b'/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/.meta'
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < ""
Feb 23 05:02:24 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:24.302 263679 INFO neutron.agent.linux.ip_lib [None req-66d6c7eb-0b73-4f4d-964f-feef6c1ecad0 - - - - - -] Device tapf1e5f61e-c5 cannot be used as it has no MAC address#033[00m
Feb 23 05:02:24 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff275bf0-7ab0-4200-9dfe-d972931f7856", "format": "json"}]: dispatch
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < ""
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < ""
Feb 23 05:02:24 localhost nova_compute[280321]: 2026-02-23 10:02:24.326 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:24 localhost kernel: device tapf1e5f61e-c5 entered promiscuous mode
Feb 23 05:02:24 localhost NetworkManager[5987]: [1771840944.3349] manager: (tapf1e5f61e-c5): new Generic device (/org/freedesktop/NetworkManager/Devices/66)
Feb 23 05:02:24 localhost nova_compute[280321]: 2026-02-23 10:02:24.336 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:24 localhost systemd-udevd[323636]: Network interface NamePolicy= disabled on kernel command line.
Feb 23 05:02:24 localhost ovn_controller[155966]: 2026-02-23T10:02:24Z|00368|binding|INFO|Claiming lport f1e5f61e-c5a0-4d3b-a262-38bc207ad27d for this chassis.
Feb 23 05:02:24 localhost ovn_controller[155966]: 2026-02-23T10:02:24Z|00369|binding|INFO|f1e5f61e-c5a0-4d3b-a262-38bc207ad27d: Claiming unknown
Feb 23 05:02:24 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:24.352 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-df180ca2-6eb1-491a-8509-1e0aaeaaf97d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df180ca2-6eb1-491a-8509-1e0aaeaaf97d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68a48b471ed84048aeb651374fff5111', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6530a409-d9d5-4f5d-b89f-689f6353588e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f1e5f61e-c5a0-4d3b-a262-38bc207ad27d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 23 05:02:24 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:24.355 161842 INFO neutron.agent.ovn.metadata.agent [-] Port f1e5f61e-c5a0-4d3b-a262-38bc207ad27d in datapath df180ca2-6eb1-491a-8509-1e0aaeaaf97d bound to our chassis#033[00m
Feb 23 05:02:24 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:24.357 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network df180ca2-6eb1-491a-8509-1e0aaeaaf97d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 23 05:02:24 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:24.358 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[73aa13d7-ddf1-4de9-b5fb-1b83b7746328]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 23 05:02:24 localhost journal[229268]: ethtool ioctl error on tapf1e5f61e-c5: No such device
Feb 23 05:02:24 localhost journal[229268]: ethtool ioctl error on tapf1e5f61e-c5: No such device
Feb 23 05:02:24 localhost nova_compute[280321]: 2026-02-23 10:02:24.372 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:24 localhost journal[229268]: ethtool ioctl error on tapf1e5f61e-c5: No such device
Feb 23 05:02:24 localhost ovn_controller[155966]: 2026-02-23T10:02:24Z|00370|binding|INFO|Setting lport f1e5f61e-c5a0-4d3b-a262-38bc207ad27d ovn-installed in OVS
Feb 23 05:02:24 localhost ovn_controller[155966]: 2026-02-23T10:02:24Z|00371|binding|INFO|Setting lport f1e5f61e-c5a0-4d3b-a262-38bc207ad27d up in Southbound
Feb 23 05:02:24 localhost nova_compute[280321]: 2026-02-23 10:02:24.379 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:24 localhost journal[229268]: ethtool ioctl error on tapf1e5f61e-c5: No such device
Feb 23 05:02:24 localhost journal[229268]: ethtool ioctl error on tapf1e5f61e-c5: No such device
Feb 23 05:02:24 localhost journal[229268]: ethtool ioctl error on tapf1e5f61e-c5: No such device
Feb 23 05:02:24 localhost journal[229268]: ethtool ioctl error on tapf1e5f61e-c5: No such device
Feb 23 05:02:24 localhost nova_compute[280321]: 2026-02-23 10:02:24.402 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:24 localhost journal[229268]: ethtool ioctl error on tapf1e5f61e-c5: No such device
Feb 23 05:02:24 localhost nova_compute[280321]: 2026-02-23 10:02:24.433 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:24 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4f53052b-0441-47a6-9bd6-84191fbc6dcb", "format": "json"}]: dispatch
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4f53052b-0441-47a6-9bd6-84191fbc6dcb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4f53052b-0441-47a6-9bd6-84191fbc6dcb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:02:24 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:24.808+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4f53052b-0441-47a6-9bd6-84191fbc6dcb' of type subvolume
Feb 23 05:02:24 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4f53052b-0441-47a6-9bd6-84191fbc6dcb' of type subvolume
Feb 23 05:02:24 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4f53052b-0441-47a6-9bd6-84191fbc6dcb", "force": true, "format": "json"}]: dispatch
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4f53052b-0441-47a6-9bd6-84191fbc6dcb, vol_name:cephfs) < ""
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4f53052b-0441-47a6-9bd6-84191fbc6dcb'' moved to trashcan
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4f53052b-0441-47a6-9bd6-84191fbc6dcb, vol_name:cephfs) < ""
Feb 23 05:02:24 localhost nova_compute[280321]: 2026-02-23 10:02:24.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:02:24 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "22eec113-2c60-4be1-9ecd-9ef9c2418dbe", "format": "json"}]: dispatch
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:22eec113-2c60-4be1-9ecd-9ef9c2418dbe, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:22eec113-2c60-4be1-9ecd-9ef9c2418dbe, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:02:24 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:24.964+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '22eec113-2c60-4be1-9ecd-9ef9c2418dbe' of type subvolume
Feb 23 05:02:24 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '22eec113-2c60-4be1-9ecd-9ef9c2418dbe' of type subvolume
Feb 23 05:02:24 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "22eec113-2c60-4be1-9ecd-9ef9c2418dbe", "force": true, "format": "json"}]: dispatch
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:22eec113-2c60-4be1-9ecd-9ef9c2418dbe, vol_name:cephfs) < ""
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/22eec113-2c60-4be1-9ecd-9ef9c2418dbe'' moved to trashcan
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 23 05:02:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:22eec113-2c60-4be1-9ecd-9ef9c2418dbe, vol_name:cephfs) < ""
Feb 23 05:02:25 localhost podman[323707]:
Feb 23 05:02:25 localhost podman[323707]: 2026-02-23 10:02:25.237766571 +0000 UTC m=+0.089957397 container create 3405d1972775e9977be1e81c140f76299d8fa4001023ee40b9727e654d81bec3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df180ca2-6eb1-491a-8509-1e0aaeaaf97d, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0)
Feb 23 05:02:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.
Feb 23 05:02:25 localhost systemd[1]: Started libpod-conmon-3405d1972775e9977be1e81c140f76299d8fa4001023ee40b9727e654d81bec3.scope.
Feb 23 05:02:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v475: 177 pgs: 177 active+clean; 195 MiB data, 992 MiB used, 41 GiB / 42 GiB avail; 15 KiB/s rd, 43 KiB/s wr, 25 op/s
Feb 23 05:02:25 localhost podman[323707]: 2026-02-23 10:02:25.194033345 +0000 UTC m=+0.046224211 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 23 05:02:25 localhost systemd[1]: Started libcrun container.
Feb 23 05:02:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1124c2de9baed349ff45d7c72a07adc06d21ce8c999d1e7ddf9ccadb499254a0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 23 05:02:25 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bedacb3b-517e-43b1-b025-790f9bc892fc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:02:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bedacb3b-517e-43b1-b025-790f9bc892fc, vol_name:cephfs) < ""
Feb 23 05:02:25 localhost podman[323707]: 2026-02-23 10:02:25.326011106 +0000 UTC m=+0.178202002 container init 3405d1972775e9977be1e81c140f76299d8fa4001023ee40b9727e654d81bec3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df180ca2-6eb1-491a-8509-1e0aaeaaf97d, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 05:02:25 localhost dnsmasq[323736]: started, version 2.85 cachesize 150
Feb 23 05:02:25 localhost dnsmasq[323736]: DNS service limited to local subnets
Feb 23 05:02:25 localhost dnsmasq[323736]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 23 05:02:25 localhost dnsmasq[323736]: warning: no upstream servers configured
Feb 23 05:02:25 localhost dnsmasq-dhcp[323736]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 23 05:02:25 localhost dnsmasq[323736]: read /var/lib/neutron/dhcp/df180ca2-6eb1-491a-8509-1e0aaeaaf97d/addn_hosts - 0 addresses
Feb 23 05:02:25 localhost dnsmasq-dhcp[323736]: read /var/lib/neutron/dhcp/df180ca2-6eb1-491a-8509-1e0aaeaaf97d/host
Feb 23 05:02:25 localhost dnsmasq-dhcp[323736]: read /var/lib/neutron/dhcp/df180ca2-6eb1-491a-8509-1e0aaeaaf97d/opts
Feb 23 05:02:25 localhost podman[323720]: 2026-02-23 10:02:25.372219357 +0000 UTC m=+0.092069573 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Feb 23 05:02:25 localhost podman[323720]: 2026-02-23 10:02:25.383601885 +0000 UTC m=+0.103452081 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Feb 23 05:02:25 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bedacb3b-517e-43b1-b025-790f9bc892fc/.meta.tmp'
Feb 23 05:02:25 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bedacb3b-517e-43b1-b025-790f9bc892fc/.meta.tmp' to config b'/volumes/_nogroup/bedacb3b-517e-43b1-b025-790f9bc892fc/.meta'
Feb 23 05:02:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bedacb3b-517e-43b1-b025-790f9bc892fc, vol_name:cephfs) < ""
Feb 23 05:02:25 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully.
Feb 23 05:02:25 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bedacb3b-517e-43b1-b025-790f9bc892fc", "format": "json"}]: dispatch
Feb 23 05:02:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bedacb3b-517e-43b1-b025-790f9bc892fc, vol_name:cephfs) < ""
Feb 23 05:02:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bedacb3b-517e-43b1-b025-790f9bc892fc, vol_name:cephfs) < ""
Feb 23 05:02:25 localhost podman[323707]: 2026-02-23 10:02:25.437524351 +0000 UTC m=+0.289715177 container start 3405d1972775e9977be1e81c140f76299d8fa4001023ee40b9727e654d81bec3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df180ca2-6eb1-491a-8509-1e0aaeaaf97d, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
Feb 23 05:02:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:02:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:25.494 263679 INFO neutron.agent.dhcp.agent [None req-66d6c7eb-0b73-4f4d-964f-feef6c1ecad0 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:02:24Z, description=, device_id=fef3d056-ba45-44e4-9db7-8c295e33590e, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=53d5087b-b0ce-45f1-ac2f-90c3377424be, ip_allocation=immediate, mac_address=fa:16:3e:92:de:a8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:02:22Z, description=, dns_domain=, id=df180ca2-6eb1-491a-8509-1e0aaeaaf97d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1220471088, port_security_enabled=True, project_id=68a48b471ed84048aeb651374fff5111, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16627, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3064, status=ACTIVE, subnets=['42a7003e-4ea1-4a8f-84ba-215fad438438'], tags=[], tenant_id=68a48b471ed84048aeb651374fff5111, updated_at=2026-02-23T10:02:23Z, vlan_transparent=None, network_id=df180ca2-6eb1-491a-8509-1e0aaeaaf97d, port_security_enabled=False, project_id=68a48b471ed84048aeb651374fff5111, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3070, status=DOWN, tags=[], tenant_id=68a48b471ed84048aeb651374fff5111, updated_at=2026-02-23T10:02:24Z on network df180ca2-6eb1-491a-8509-1e0aaeaaf97d#033[00m
Feb 23 05:02:25 localhost dnsmasq[323736]: read /var/lib/neutron/dhcp/df180ca2-6eb1-491a-8509-1e0aaeaaf97d/addn_hosts - 1 addresses
Feb 23 05:02:25 localhost dnsmasq-dhcp[323736]: read /var/lib/neutron/dhcp/df180ca2-6eb1-491a-8509-1e0aaeaaf97d/host
Feb 23 05:02:25 localhost dnsmasq-dhcp[323736]: read /var/lib/neutron/dhcp/df180ca2-6eb1-491a-8509-1e0aaeaaf97d/opts
Feb 23 05:02:25 localhost podman[323767]: 2026-02-23 10:02:25.661845531 +0000 UTC m=+0.044956214 container kill 3405d1972775e9977be1e81c140f76299d8fa4001023ee40b9727e654d81bec3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df180ca2-6eb1-491a-8509-1e0aaeaaf97d, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 23 05:02:25 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c3aedd71-b342-4920-afd2-d5c6fd4776d2", "format": "json"}]: dispatch
Feb 23 05:02:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:02:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:02:25 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:25.763+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c3aedd71-b342-4920-afd2-d5c6fd4776d2' of type subvolume
Feb 23 05:02:25 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c3aedd71-b342-4920-afd2-d5c6fd4776d2' of type subvolume
Feb 23 05:02:25 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c3aedd71-b342-4920-afd2-d5c6fd4776d2", "force": true, "format": "json"}]: dispatch
Feb 23 05:02:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, vol_name:cephfs) < ""
Feb 23 05:02:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:25.780 263679 INFO neutron.agent.dhcp.agent [None req-66d6c7eb-0b73-4f4d-964f-feef6c1ecad0 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:02:24Z, description=, device_id=fef3d056-ba45-44e4-9db7-8c295e33590e, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=53d5087b-b0ce-45f1-ac2f-90c3377424be, ip_allocation=immediate, mac_address=fa:16:3e:92:de:a8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:02:22Z, description=, dns_domain=, id=df180ca2-6eb1-491a-8509-1e0aaeaaf97d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1220471088, port_security_enabled=True, project_id=68a48b471ed84048aeb651374fff5111, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16627, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3064, status=ACTIVE, subnets=['42a7003e-4ea1-4a8f-84ba-215fad438438'], tags=[], tenant_id=68a48b471ed84048aeb651374fff5111, updated_at=2026-02-23T10:02:23Z, vlan_transparent=None, network_id=df180ca2-6eb1-491a-8509-1e0aaeaaf97d, port_security_enabled=False, project_id=68a48b471ed84048aeb651374fff5111, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3070, status=DOWN, tags=[], tenant_id=68a48b471ed84048aeb651374fff5111, updated_at=2026-02-23T10:02:24Z on network df180ca2-6eb1-491a-8509-1e0aaeaaf97d#033[00m
Feb 23 05:02:25 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c3aedd71-b342-4920-afd2-d5c6fd4776d2'' moved to trashcan
Feb 23 05:02:25 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 23 05:02:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c3aedd71-b342-4920-afd2-d5c6fd4776d2, vol_name:cephfs) < ""
Feb 23 05:02:25 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:25.865 263679 INFO neutron.agent.dhcp.agent [None req-f8a8348c-0322-4d52-b481-e3eb079b18a3 - - - - - -] DHCP configuration for ports {'e447ca51-64c3-468d-91e4-651cfbe8a1cc'} is completed#033[00m
Feb 23 05:02:25 localhost dnsmasq[323736]: read /var/lib/neutron/dhcp/df180ca2-6eb1-491a-8509-1e0aaeaaf97d/addn_hosts - 1 addresses
Feb 23 05:02:25 localhost podman[323807]: 2026-02-23 10:02:25.963806422 +0000 UTC m=+0.060597652 container kill 3405d1972775e9977be1e81c140f76299d8fa4001023ee40b9727e654d81bec3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df180ca2-6eb1-491a-8509-1e0aaeaaf97d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 23 05:02:25 localhost dnsmasq-dhcp[323736]: read /var/lib/neutron/dhcp/df180ca2-6eb1-491a-8509-1e0aaeaaf97d/host
Feb 23 05:02:25 localhost dnsmasq-dhcp[323736]: read /var/lib/neutron/dhcp/df180ca2-6eb1-491a-8509-1e0aaeaaf97d/opts
Feb 23 05:02:26 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:26.006 263679 INFO neutron.agent.dhcp.agent [None req-5cb277d5-24ea-43f9-af09-b43611e33e59 - - - - - -] DHCP configuration for ports {'53d5087b-b0ce-45f1-ac2f-90c3377424be'} is completed#033[00m
Feb 23 05:02:26 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:26.142 263679 INFO neutron.agent.dhcp.agent [None req-48b26c1d-4327-47c0-8c3e-45229707a41a - - - - - -] DHCP configuration for ports {'53d5087b-b0ce-45f1-ac2f-90c3377424be'} is completed#033[00m
Feb 23 05:02:26 localhost nova_compute[280321]: 2026-02-23 10:02:26.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:02:26 localhost nova_compute[280321]: 2026-02-23 10:02:26.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 23 05:02:26 localhost nova_compute[280321]: 2026-02-23 10:02:26.893 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 23 05:02:26 localhost nova_compute[280321]: 2026-02-23 10:02:26.907 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 23 05:02:26 localhost nova_compute[280321]: 2026-02-23 10:02:26.907 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:02:26 localhost nova_compute[280321]: 2026-02-23 10:02:26.908 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 23 05:02:26 localhost systemd[1]: tmp-crun.tiS8q3.mount: Deactivated successfully.
Feb 23 05:02:26 localhost dnsmasq[323736]: read /var/lib/neutron/dhcp/df180ca2-6eb1-491a-8509-1e0aaeaaf97d/addn_hosts - 0 addresses
Feb 23 05:02:26 localhost dnsmasq-dhcp[323736]: read /var/lib/neutron/dhcp/df180ca2-6eb1-491a-8509-1e0aaeaaf97d/host
Feb 23 05:02:26 localhost dnsmasq-dhcp[323736]: read /var/lib/neutron/dhcp/df180ca2-6eb1-491a-8509-1e0aaeaaf97d/opts
Feb 23 05:02:26 localhost podman[323847]: 2026-02-23 10:02:26.969699418 +0000 UTC m=+0.068682798 container kill 3405d1972775e9977be1e81c140f76299d8fa4001023ee40b9727e654d81bec3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df180ca2-6eb1-491a-8509-1e0aaeaaf97d, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 23 05:02:27 localhost nova_compute[280321]: 2026-02-23 10:02:27.115 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:27 localhost ovn_controller[155966]: 2026-02-23T10:02:27Z|00372|binding|INFO|Releasing lport f1e5f61e-c5a0-4d3b-a262-38bc207ad27d from this chassis (sb_readonly=0)
Feb 23 05:02:27 localhost kernel: device tapf1e5f61e-c5 left promiscuous mode
Feb 23 05:02:27 localhost ovn_controller[155966]: 2026-02-23T10:02:27Z|00373|binding|INFO|Setting lport f1e5f61e-c5a0-4d3b-a262-38bc207ad27d down in Southbound
Feb 23 05:02:27 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:27.125 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-df180ca2-6eb1-491a-8509-1e0aaeaaf97d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df180ca2-6eb1-491a-8509-1e0aaeaaf97d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68a48b471ed84048aeb651374fff5111', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6530a409-d9d5-4f5d-b89f-689f6353588e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f1e5f61e-c5a0-4d3b-a262-38bc207ad27d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 23 05:02:27 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:27.127 161842 INFO neutron.agent.ovn.metadata.agent [-] Port f1e5f61e-c5a0-4d3b-a262-38bc207ad27d in datapath df180ca2-6eb1-491a-8509-1e0aaeaaf97d unbound from our chassis#033[00m
Feb 23 05:02:27 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:27.129 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network df180ca2-6eb1-491a-8509-1e0aaeaaf97d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 23 05:02:27 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:27.130 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[6592b6ce-441d-4aed-92bd-e32ed7d2a468]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 23 05:02:27 localhost nova_compute[280321]: 2026-02-23 10:02:27.143 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:02:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v476: 177 pgs: 177 active+clean; 196 MiB data, 993 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 103 KiB/s wr, 28 op/s
Feb 23 05:02:27 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e219 e219: 6 total, 6 up, 6 in
Feb 23 05:02:27 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ff275bf0-7ab0-4200-9dfe-d972931f7856", "auth_id": "eve49", "tenant_id": "f47d5caa97d244edb5aef31a3870507a", "access_level": "rw", "format": "json"}]: dispatch
Feb 23 05:02:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, tenant_id:f47d5caa97d244edb5aef31a3870507a, vol_name:cephfs) < ""
Feb 23 05:02:27 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Feb 23 05:02:27 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Feb 23 05:02:27 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID eve49 with tenant f47d5caa97d244edb5aef31a3870507a
Feb 23 05:02:27 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff275bf0-7ab0-4200-9dfe-d972931f7856", "mon", "allow r"], "format": "json"} v 0)
Feb 23 05:02:27 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff275bf0-7ab0-4200-9dfe-d972931f7856", "mon", "allow r"], "format": "json"} : dispatch
Feb 23 05:02:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, tenant_id:f47d5caa97d244edb5aef31a3870507a, vol_name:cephfs) < ""
Feb 23 05:02:27 localhost nova_compute[280321]: 2026-02-23 10:02:27.906 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:02:28 localhost dnsmasq[323736]: exiting on receipt of SIGTERM
Feb 23 05:02:28 localhost systemd[1]: libpod-3405d1972775e9977be1e81c140f76299d8fa4001023ee40b9727e654d81bec3.scope: Deactivated successfully.
Feb 23 05:02:28 localhost podman[323953]: 2026-02-23 10:02:28.079059174 +0000 UTC m=+0.059014813 container kill 3405d1972775e9977be1e81c140f76299d8fa4001023ee40b9727e654d81bec3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df180ca2-6eb1-491a-8509-1e0aaeaaf97d, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 23 05:02:28 localhost podman[323967]: 2026-02-23 10:02:28.139809209 +0000 UTC m=+0.051666599 container died 3405d1972775e9977be1e81c140f76299d8fa4001023ee40b9727e654d81bec3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df180ca2-6eb1-491a-8509-1e0aaeaaf97d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 23 05:02:28 localhost systemd[1]: tmp-crun.JAhdX7.mount: Deactivated successfully.
Feb 23 05:02:28 localhost podman[323967]: 2026-02-23 10:02:28.24592059 +0000 UTC m=+0.157777930 container cleanup 3405d1972775e9977be1e81c140f76299d8fa4001023ee40b9727e654d81bec3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df180ca2-6eb1-491a-8509-1e0aaeaaf97d, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 23 05:02:28 localhost systemd[1]: libpod-conmon-3405d1972775e9977be1e81c140f76299d8fa4001023ee40b9727e654d81bec3.scope: Deactivated successfully. Feb 23 05:02:28 localhost podman[323974]: 2026-02-23 10:02:28.266799117 +0000 UTC m=+0.162027879 container remove 3405d1972775e9977be1e81c140f76299d8fa4001023ee40b9727e654d81bec3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df180ca2-6eb1-491a-8509-1e0aaeaaf97d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0) Feb 23 05:02:28 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:28.305 263679 INFO neutron.agent.dhcp.agent [None req-69e46714-0444-41f2-9414-3e8718b7bee0 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:02:28 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:28.306 263679 INFO neutron.agent.dhcp.agent [None req-69e46714-0444-41f2-9414-3e8718b7bee0 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:02:28 localhost 
nova_compute[280321]: 2026-02-23 10:02:28.435 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:28 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 05:02:28 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 05:02:28 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 05:02:28 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 05:02:28 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 05:02:28 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev 497df702-a5ec-4e4e-a055-b862243d998b (Updating node-proxy deployment (+3 -> 3)) Feb 23 05:02:28 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev 497df702-a5ec-4e4e-a055-b862243d998b (Updating node-proxy deployment (+3 -> 3)) Feb 23 05:02:28 localhost ceph-mgr[285904]: [progress INFO root] Completed event 497df702-a5ec-4e4e-a055-b862243d998b (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 23 05:02:28 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 05:02:28 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 
05:02:28 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Feb 23 05:02:28 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff275bf0-7ab0-4200-9dfe-d972931f7856", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:28 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff275bf0-7ab0-4200-9dfe-d972931f7856", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:28 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff275bf0-7ab0-4200-9dfe-d972931f7856", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:02:28 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 05:02:28 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:02:28 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": 
"bedacb3b-517e-43b1-b025-790f9bc892fc", "auth_id": "tempest-cephx-id-380228807", "tenant_id": "afc38bb20ffe4287899bc080a5fd2741", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:02:28 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-380228807, format:json, prefix:fs subvolume authorize, sub_name:bedacb3b-517e-43b1-b025-790f9bc892fc, tenant_id:afc38bb20ffe4287899bc080a5fd2741, vol_name:cephfs) < "" Feb 23 05:02:28 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-380228807", "format": "json"} v 0) Feb 23 05:02:28 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-380228807", "format": "json"} : dispatch Feb 23 05:02:28 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID tempest-cephx-id-380228807 with tenant afc38bb20ffe4287899bc080a5fd2741 Feb 23 05:02:28 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-380228807", "caps": ["mds", "allow rw path=/volumes/_nogroup/bedacb3b-517e-43b1-b025-790f9bc892fc/10c0e9c9-eacd-4b8a-ae8b-8483042ecb15", "osd", "allow rw pool=manila_data namespace=fsvolumens_bedacb3b-517e-43b1-b025-790f9bc892fc", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:02:28 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-380228807", "caps": ["mds", "allow rw path=/volumes/_nogroup/bedacb3b-517e-43b1-b025-790f9bc892fc/10c0e9c9-eacd-4b8a-ae8b-8483042ecb15", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_bedacb3b-517e-43b1-b025-790f9bc892fc", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:28 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-380228807, format:json, prefix:fs subvolume authorize, sub_name:bedacb3b-517e-43b1-b025-790f9bc892fc, tenant_id:afc38bb20ffe4287899bc080a5fd2741, vol_name:cephfs) < "" Feb 23 05:02:28 localhost nova_compute[280321]: 2026-02-23 10:02:28.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:02:28 localhost nova_compute[280321]: 2026-02-23 10:02:28.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:02:28 localhost nova_compute[280321]: 2026-02-23 10:02:28.911 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:02:28 localhost nova_compute[280321]: 2026-02-23 10:02:28.912 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:02:28 localhost nova_compute[280321]: 2026-02-23 10:02:28.912 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - 
- -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:02:28 localhost nova_compute[280321]: 2026-02-23 10:02:28.912 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 05:02:28 localhost nova_compute[280321]: 2026-02-23 10:02:28.912 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 05:02:29 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7a63f65b-263e-4f0a-be43-9aace02f6e45", "format": "json"}]: dispatch Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7a63f65b-263e-4f0a-be43-9aace02f6e45, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7a63f65b-263e-4f0a-be43-9aace02f6e45, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:29 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:29.038+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a63f65b-263e-4f0a-be43-9aace02f6e45' of type subvolume Feb 23 05:02:29 localhost ceph-mgr[285904]: 
mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a63f65b-263e-4f0a-be43-9aace02f6e45' of type subvolume Feb 23 05:02:29 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7a63f65b-263e-4f0a-be43-9aace02f6e45", "force": true, "format": "json"}]: dispatch Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a63f65b-263e-4f0a-be43-9aace02f6e45, vol_name:cephfs) < "" Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7a63f65b-263e-4f0a-be43-9aace02f6e45'' moved to trashcan Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a63f65b-263e-4f0a-be43-9aace02f6e45, vol_name:cephfs) < "" Feb 23 05:02:29 localhost systemd[1]: tmp-crun.ImkimA.mount: Deactivated successfully. Feb 23 05:02:29 localhost systemd[1]: var-lib-containers-storage-overlay-1124c2de9baed349ff45d7c72a07adc06d21ce8c999d1e7ddf9ccadb499254a0-merged.mount: Deactivated successfully. Feb 23 05:02:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3405d1972775e9977be1e81c140f76299d8fa4001023ee40b9727e654d81bec3-userdata-shm.mount: Deactivated successfully. Feb 23 05:02:29 localhost systemd[1]: run-netns-qdhcp\x2ddf180ca2\x2d6eb1\x2d491a\x2d8509\x2d1e0aaeaaf97d.mount: Deactivated successfully. 
Feb 23 05:02:29 localhost nova_compute[280321]: 2026-02-23 10:02:29.174 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:29 localhost nova_compute[280321]: 2026-02-23 10:02:29.270 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v478: 177 pgs: 177 active+clean; 196 MiB data, 993 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 101 KiB/s wr, 27 op/s Feb 23 05:02:29 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 05:02:29 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1226954335' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 05:02:29 localhost nova_compute[280321]: 2026-02-23 10:02:29.377 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 05:02:29 localhost nova_compute[280321]: 2026-02-23 10:02:29.541 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 05:02:29 localhost nova_compute[280321]: 2026-02-23 10:02:29.545 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=11600MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 05:02:29 localhost nova_compute[280321]: 2026-02-23 10:02:29.545 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:02:29 localhost nova_compute[280321]: 2026-02-23 10:02:29.546 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:02:29 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "bedacb3b-517e-43b1-b025-790f9bc892fc", "auth_id": "tempest-cephx-id-380228807", "format": "json"}]: dispatch Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-380228807, format:json, prefix:fs subvolume deauthorize, sub_name:bedacb3b-517e-43b1-b025-790f9bc892fc, vol_name:cephfs) < "" Feb 23 05:02:29 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-380228807", "format": "json"} : dispatch Feb 23 05:02:29 localhost ceph-mon[296755]: from='mgr.27078 
172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-380228807", "caps": ["mds", "allow rw path=/volumes/_nogroup/bedacb3b-517e-43b1-b025-790f9bc892fc/10c0e9c9-eacd-4b8a-ae8b-8483042ecb15", "osd", "allow rw pool=manila_data namespace=fsvolumens_bedacb3b-517e-43b1-b025-790f9bc892fc", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:29 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-380228807", "caps": ["mds", "allow rw path=/volumes/_nogroup/bedacb3b-517e-43b1-b025-790f9bc892fc/10c0e9c9-eacd-4b8a-ae8b-8483042ecb15", "osd", "allow rw pool=manila_data namespace=fsvolumens_bedacb3b-517e-43b1-b025-790f9bc892fc", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:29 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-380228807", "caps": ["mds", "allow rw path=/volumes/_nogroup/bedacb3b-517e-43b1-b025-790f9bc892fc/10c0e9c9-eacd-4b8a-ae8b-8483042ecb15", "osd", "allow rw pool=manila_data namespace=fsvolumens_bedacb3b-517e-43b1-b025-790f9bc892fc", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:02:29 localhost nova_compute[280321]: 2026-02-23 10:02:29.621 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 05:02:29 localhost nova_compute[280321]: 2026-02-23 10:02:29.622 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 05:02:29 localhost nova_compute[280321]: 2026-02-23 10:02:29.644 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 05:02:29 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-380228807", "format": "json"} v 0) Feb 23 05:02:29 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-380228807", "format": "json"} : dispatch Feb 23 05:02:29 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-380228807"} v 0) Feb 23 05:02:29 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-380228807"} : dispatch Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-380228807, format:json, prefix:fs subvolume deauthorize, sub_name:bedacb3b-517e-43b1-b025-790f9bc892fc, vol_name:cephfs) < "" Feb 23 05:02:29 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "bedacb3b-517e-43b1-b025-790f9bc892fc", "auth_id": "tempest-cephx-id-380228807", "format": "json"}]: dispatch Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-380228807, 
format:json, prefix:fs subvolume evict, sub_name:bedacb3b-517e-43b1-b025-790f9bc892fc, vol_name:cephfs) < "" Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-380228807, client_metadata.root=/volumes/_nogroup/bedacb3b-517e-43b1-b025-790f9bc892fc/10c0e9c9-eacd-4b8a-ae8b-8483042ecb15 Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-380228807, format:json, prefix:fs subvolume evict, sub_name:bedacb3b-517e-43b1-b025-790f9bc892fc, vol_name:cephfs) < "" Feb 23 05:02:29 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bedacb3b-517e-43b1-b025-790f9bc892fc", "format": "json"}]: dispatch Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bedacb3b-517e-43b1-b025-790f9bc892fc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bedacb3b-517e-43b1-b025-790f9bc892fc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:29 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:29.858+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bedacb3b-517e-43b1-b025-790f9bc892fc' of type subvolume Feb 23 05:02:29 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bedacb3b-517e-43b1-b025-790f9bc892fc' of type subvolume Feb 23 05:02:29 localhost 
ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bedacb3b-517e-43b1-b025-790f9bc892fc", "force": true, "format": "json"}]: dispatch Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bedacb3b-517e-43b1-b025-790f9bc892fc, vol_name:cephfs) < "" Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bedacb3b-517e-43b1-b025-790f9bc892fc'' moved to trashcan Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:02:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bedacb3b-517e-43b1-b025-790f9bc892fc, vol_name:cephfs) < "" Feb 23 05:02:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 05:02:30 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/738794719' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 05:02:30 localhost nova_compute[280321]: 2026-02-23 10:02:30.092 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 05:02:30 localhost nova_compute[280321]: 2026-02-23 10:02:30.100 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 05:02:30 localhost nova_compute[280321]: 2026-02-23 10:02:30.123 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 05:02:30 localhost nova_compute[280321]: 2026-02-23 10:02:30.125 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 05:02:30 localhost nova_compute[280321]: 2026-02-23 10:02:30.126 280325 DEBUG 
oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:02:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:02:30 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 05:02:30 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-380228807", "format": "json"} : dispatch Feb 23 05:02:30 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-380228807"} : dispatch Feb 23 05:02:30 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-380228807"} : dispatch Feb 23 05:02:30 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-380228807"}]': finished Feb 23 05:02:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 05:02:30 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ff275bf0-7ab0-4200-9dfe-d972931f7856", "auth_id": "eve48", "tenant_id": "f47d5caa97d244edb5aef31a3870507a", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:02:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, tenant_id:f47d5caa97d244edb5aef31a3870507a, vol_name:cephfs) < "" Feb 23 05:02:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) Feb 23 05:02:30 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Feb 23 05:02:30 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID eve48 with tenant f47d5caa97d244edb5aef31a3870507a Feb 23 05:02:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff275bf0-7ab0-4200-9dfe-d972931f7856", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:02:30 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff275bf0-7ab0-4200-9dfe-d972931f7856", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, tenant_id:f47d5caa97d244edb5aef31a3870507a, vol_name:cephfs) < "" 
Feb 23 05:02:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v479: 177 pgs: 177 active+clean; 196 MiB data, 998 MiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 131 KiB/s wr, 56 op/s Feb 23 05:02:31 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:02:31 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Feb 23 05:02:31 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff275bf0-7ab0-4200-9dfe-d972931f7856", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:31 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff275bf0-7ab0-4200-9dfe-d972931f7856", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:31 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff275bf0-7ab0-4200-9dfe-d972931f7856", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:02:31 localhost openstack_network_exporter[243519]: ERROR 10:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 05:02:31 localhost 
openstack_network_exporter[243519]: Feb 23 05:02:31 localhost openstack_network_exporter[243519]: ERROR 10:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 05:02:31 localhost openstack_network_exporter[243519]: Feb 23 05:02:32 localhost nova_compute[280321]: 2026-02-23 10:02:32.126 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:02:32 localhost nova_compute[280321]: 2026-02-23 10:02:32.127 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:02:32 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4", "auth_id": "admin", "format": "json"}]: dispatch Feb 23 05:02:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, vol_name:cephfs) < "" Feb 23 05:02:32 localhost ceph-mgr[285904]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist Feb 23 05:02:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, vol_name:cephfs) < "" Feb 23 05:02:32 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:32.348+0000 7fc3ba4ad640 -1 mgr.server reply reply (2) No 
such file or directory auth ID: admin doesn't exist Feb 23 05:02:32 localhost ceph-mgr[285904]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist Feb 23 05:02:32 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4", "format": "json"}]: dispatch Feb 23 05:02:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:32 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:32.490+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4' of type subvolume Feb 23 05:02:32 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4' of type subvolume Feb 23 05:02:32 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4", "force": true, "format": "json"}]: dispatch Feb 23 05:02:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, vol_name:cephfs) < "" Feb 23 05:02:32 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 
'b'/volumes/_nogroup/ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4'' moved to trashcan Feb 23 05:02:32 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:02:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ce93cb3d-9c0e-41e4-ad7d-9451b8b5d7b4, vol_name:cephfs) < "" Feb 23 05:02:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v480: 177 pgs: 177 active+clean; 196 MiB data, 999 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 132 KiB/s wr, 60 op/s Feb 23 05:02:33 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:33.498 263679 INFO neutron.agent.linux.ip_lib [None req-6c988ba2-a741-4700-9323-53fccd881712 - - - - - -] Device tape5089462-12 cannot be used as it has no MAC address#033[00m Feb 23 05:02:33 localhost nova_compute[280321]: 2026-02-23 10:02:33.562 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:33 localhost kernel: device tape5089462-12 entered promiscuous mode Feb 23 05:02:33 localhost NetworkManager[5987]: [1771840953.5725] manager: (tape5089462-12): new Generic device (/org/freedesktop/NetworkManager/Devices/67) Feb 23 05:02:33 localhost ovn_controller[155966]: 2026-02-23T10:02:33Z|00374|binding|INFO|Claiming lport e5089462-1226-410d-a4c0-f70ef1f5b9b9 for this chassis. Feb 23 05:02:33 localhost ovn_controller[155966]: 2026-02-23T10:02:33Z|00375|binding|INFO|e5089462-1226-410d-a4c0-f70ef1f5b9b9: Claiming unknown Feb 23 05:02:33 localhost nova_compute[280321]: 2026-02-23 10:02:33.573 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:33 localhost systemd-udevd[324071]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 05:02:33 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:33.586 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-d25b4a94-8b22-46ea-b838-1e6a8af6f7b4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d25b4a94-8b22-46ea-b838-1e6a8af6f7b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68a48b471ed84048aeb651374fff5111', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5fff032-30f3-4a56-b1f6-f92733fd5f68, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e5089462-1226-410d-a4c0-f70ef1f5b9b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:02:33 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:33.588 161842 INFO neutron.agent.ovn.metadata.agent [-] Port e5089462-1226-410d-a4c0-f70ef1f5b9b9 in datapath d25b4a94-8b22-46ea-b838-1e6a8af6f7b4 bound to our chassis#033[00m Feb 23 05:02:33 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:33.589 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d25b4a94-8b22-46ea-b838-1e6a8af6f7b4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:02:33 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:33.590 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[0537a2a7-34fc-465d-a917-f332f71a7a24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:02:33 localhost ovn_controller[155966]: 2026-02-23T10:02:33Z|00376|binding|INFO|Setting lport e5089462-1226-410d-a4c0-f70ef1f5b9b9 ovn-installed in OVS Feb 23 05:02:33 localhost ovn_controller[155966]: 2026-02-23T10:02:33Z|00377|binding|INFO|Setting lport e5089462-1226-410d-a4c0-f70ef1f5b9b9 up in Southbound Feb 23 05:02:33 localhost journal[229268]: ethtool ioctl error on tape5089462-12: No such device Feb 23 05:02:33 localhost nova_compute[280321]: 2026-02-23 10:02:33.608 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:33 localhost journal[229268]: ethtool ioctl error on tape5089462-12: No such device Feb 23 05:02:33 localhost journal[229268]: ethtool ioctl error on tape5089462-12: No such device Feb 23 05:02:33 localhost journal[229268]: ethtool ioctl error on tape5089462-12: No such device Feb 23 05:02:33 localhost journal[229268]: ethtool ioctl error on tape5089462-12: No such device Feb 23 05:02:33 localhost journal[229268]: ethtool ioctl error on tape5089462-12: No such device Feb 23 05:02:33 localhost nova_compute[280321]: 2026-02-23 10:02:33.638 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:33 localhost journal[229268]: ethtool ioctl error on tape5089462-12: No such device Feb 23 05:02:33 localhost journal[229268]: ethtool ioctl error on tape5089462-12: No such device Feb 23 05:02:33 localhost nova_compute[280321]: 2026-02-23 10:02:33.665 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:33 localhost nova_compute[280321]: 2026-02-23 10:02:33.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:02:34 localhost nova_compute[280321]: 2026-02-23 10:02:34.175 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:34 localhost nova_compute[280321]: 2026-02-23 10:02:34.272 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:34 localhost podman[324142]: Feb 23 05:02:34 localhost podman[324142]: 2026-02-23 10:02:34.46761825 +0000 UTC m=+0.085106371 container create 260519ac2bb7bad447b17842b472d370f536bc98a1329d6760f5884c66a5954a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d25b4a94-8b22-46ea-b838-1e6a8af6f7b4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:02:34 localhost systemd[1]: Started libpod-conmon-260519ac2bb7bad447b17842b472d370f536bc98a1329d6760f5884c66a5954a.scope. Feb 23 05:02:34 localhost systemd[1]: tmp-crun.887cIj.mount: Deactivated successfully. Feb 23 05:02:34 localhost podman[324142]: 2026-02-23 10:02:34.426134023 +0000 UTC m=+0.043622174 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:02:34 localhost systemd[1]: Started libcrun container. 
Feb 23 05:02:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83be46e7c16057073303c95f6c9d564bd349dda94dbf7ac662f1e6aaf9489e0b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:02:34 localhost podman[324142]: 2026-02-23 10:02:34.554941476 +0000 UTC m=+0.172429587 container init 260519ac2bb7bad447b17842b472d370f536bc98a1329d6760f5884c66a5954a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d25b4a94-8b22-46ea-b838-1e6a8af6f7b4, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:02:34 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ff275bf0-7ab0-4200-9dfe-d972931f7856", "auth_id": "eve48", "format": "json"}]: dispatch Feb 23 05:02:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < "" Feb 23 05:02:34 localhost dnsmasq[324160]: started, version 2.85 cachesize 150 Feb 23 05:02:34 localhost dnsmasq[324160]: DNS service limited to local subnets Feb 23 05:02:34 localhost dnsmasq[324160]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:02:34 localhost dnsmasq[324160]: warning: no upstream servers configured Feb 23 05:02:34 localhost dnsmasq-dhcp[324160]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d Feb 23 05:02:34 
localhost podman[324142]: 2026-02-23 10:02:34.571824312 +0000 UTC m=+0.189312433 container start 260519ac2bb7bad447b17842b472d370f536bc98a1329d6760f5884c66a5954a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d25b4a94-8b22-46ea-b838-1e6a8af6f7b4, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:02:34 localhost dnsmasq[324160]: read /var/lib/neutron/dhcp/d25b4a94-8b22-46ea-b838-1e6a8af6f7b4/addn_hosts - 0 addresses Feb 23 05:02:34 localhost dnsmasq-dhcp[324160]: read /var/lib/neutron/dhcp/d25b4a94-8b22-46ea-b838-1e6a8af6f7b4/host Feb 23 05:02:34 localhost dnsmasq-dhcp[324160]: read /var/lib/neutron/dhcp/d25b4a94-8b22-46ea-b838-1e6a8af6f7b4/opts Feb 23 05:02:34 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) Feb 23 05:02:34 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Feb 23 05:02:34 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0) Feb 23 05:02:34 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Feb 23 05:02:34 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:34.634 263679 INFO neutron.agent.dhcp.agent [None req-6c988ba2-a741-4700-9323-53fccd881712 - - - - - -] Trigger reload_allocations for port 
admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:02:33Z, description=, device_id=2025587f-5897-4d24-b841-a40c65f335b2, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b89fbe90-566e-4542-8897-6126c7e40a22, ip_allocation=immediate, mac_address=fa:16:3e:fd:f2:fa, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:02:31Z, description=, dns_domain=, id=d25b4a94-8b22-46ea-b838-1e6a8af6f7b4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1047023247, port_security_enabled=True, project_id=68a48b471ed84048aeb651374fff5111, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=57189, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3098, status=ACTIVE, subnets=['7abaa631-efa5-4362-99b7-e81347dc52c1'], tags=[], tenant_id=68a48b471ed84048aeb651374fff5111, updated_at=2026-02-23T10:02:32Z, vlan_transparent=None, network_id=d25b4a94-8b22-46ea-b838-1e6a8af6f7b4, port_security_enabled=False, project_id=68a48b471ed84048aeb651374fff5111, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3103, status=DOWN, tags=[], tenant_id=68a48b471ed84048aeb651374fff5111, updated_at=2026-02-23T10:02:33Z on network d25b4a94-8b22-46ea-b838-1e6a8af6f7b4#033[00m Feb 23 05:02:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < "" Feb 23 05:02:34 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ff275bf0-7ab0-4200-9dfe-d972931f7856", "auth_id": "eve48", "format": "json"}]: dispatch Feb 23 05:02:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < "" Feb 23 05:02:34 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a Feb 23 05:02:34 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:02:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < "" Feb 23 05:02:34 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:34.743 263679 INFO neutron.agent.dhcp.agent [None req-be7d43f6-0827-4b5e-aed6-7dc69fc9e6ef - - - - - -] DHCP configuration for ports {'3996c355-d973-427d-948a-8c3f3dec18a7'} is completed#033[00m Feb 23 05:02:34 localhost nova_compute[280321]: 2026-02-23 10:02:34.769 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:34 localhost dnsmasq[324160]: read /var/lib/neutron/dhcp/d25b4a94-8b22-46ea-b838-1e6a8af6f7b4/addn_hosts - 1 addresses Feb 23 05:02:34 localhost dnsmasq-dhcp[324160]: read /var/lib/neutron/dhcp/d25b4a94-8b22-46ea-b838-1e6a8af6f7b4/host Feb 23 05:02:34 localhost dnsmasq-dhcp[324160]: read /var/lib/neutron/dhcp/d25b4a94-8b22-46ea-b838-1e6a8af6f7b4/opts Feb 23 05:02:34 localhost podman[324180]: 2026-02-23 10:02:34.866981514 +0000 UTC m=+0.056423574 container kill 
260519ac2bb7bad447b17842b472d370f536bc98a1329d6760f5884c66a5954a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d25b4a94-8b22-46ea-b838-1e6a8af6f7b4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:02:35 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:35.008 263679 INFO neutron.agent.dhcp.agent [None req-6c988ba2-a741-4700-9323-53fccd881712 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:02:33Z, description=, device_id=2025587f-5897-4d24-b841-a40c65f335b2, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b89fbe90-566e-4542-8897-6126c7e40a22, ip_allocation=immediate, mac_address=fa:16:3e:fd:f2:fa, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:02:31Z, description=, dns_domain=, id=d25b4a94-8b22-46ea-b838-1e6a8af6f7b4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1047023247, port_security_enabled=True, project_id=68a48b471ed84048aeb651374fff5111, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=57189, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3098, status=ACTIVE, subnets=['7abaa631-efa5-4362-99b7-e81347dc52c1'], tags=[], tenant_id=68a48b471ed84048aeb651374fff5111, updated_at=2026-02-23T10:02:32Z, vlan_transparent=None, 
network_id=d25b4a94-8b22-46ea-b838-1e6a8af6f7b4, port_security_enabled=False, project_id=68a48b471ed84048aeb651374fff5111, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3103, status=DOWN, tags=[], tenant_id=68a48b471ed84048aeb651374fff5111, updated_at=2026-02-23T10:02:33Z on network d25b4a94-8b22-46ea-b838-1e6a8af6f7b4#033[00m Feb 23 05:02:35 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Feb 23 05:02:35 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Feb 23 05:02:35 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Feb 23 05:02:35 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished Feb 23 05:02:35 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:35.101 263679 INFO neutron.agent.dhcp.agent [None req-a4d3ad19-fd3c-4e45-b61f-f328adcc3f57 - - - - - -] DHCP configuration for ports {'b89fbe90-566e-4542-8897-6126c7e40a22'} is completed#033[00m Feb 23 05:02:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:02:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:02:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:02:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:02:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 05:02:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:02:35 localhost dnsmasq[324160]: read /var/lib/neutron/dhcp/d25b4a94-8b22-46ea-b838-1e6a8af6f7b4/addn_hosts - 1 addresses Feb 23 05:02:35 localhost dnsmasq-dhcp[324160]: read /var/lib/neutron/dhcp/d25b4a94-8b22-46ea-b838-1e6a8af6f7b4/host Feb 23 05:02:35 localhost dnsmasq-dhcp[324160]: read /var/lib/neutron/dhcp/d25b4a94-8b22-46ea-b838-1e6a8af6f7b4/opts Feb 23 05:02:35 localhost podman[324218]: 2026-02-23 10:02:35.202110358 +0000 UTC m=+0.056082463 container kill 260519ac2bb7bad447b17842b472d370f536bc98a1329d6760f5884c66a5954a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d25b4a94-8b22-46ea-b838-1e6a8af6f7b4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:02:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v481: 177 pgs: 177 active+clean; 196 MiB data, 999 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 132 KiB/s wr, 60 op/s Feb 23 05:02:35 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:35.452 263679 INFO neutron.agent.dhcp.agent [None req-10d83914-d0ac-4d68-85f9-070642b77dd9 - - - - - -] DHCP configuration for ports {'b89fbe90-566e-4542-8897-6126c7e40a22'} is completed#033[00m Feb 23 05:02:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:02:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v482: 177 pgs: 177 active+clean; 197 MiB data, 1000 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 100 KiB/s wr, 62 op/s Feb 23 05:02:37 
localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:37.806 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:02:37 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:37.808 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 05:02:37 localhost nova_compute[280321]: 2026-02-23 10:02:37.839 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 05:02:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 05:02:38 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ff275bf0-7ab0-4200-9dfe-d972931f7856", "auth_id": "eve47", "tenant_id": "f47d5caa97d244edb5aef31a3870507a", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:02:38 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, tenant_id:f47d5caa97d244edb5aef31a3870507a, vol_name:cephfs) < "" Feb 23 05:02:38 localhost podman[324240]: 2026-02-23 10:02:38.018569252 +0000 UTC m=+0.082161059 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-type=git, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, version=9.7, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z) Feb 23 05:02:38 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) Feb 23 05:02:38 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Feb 23 05:02:38 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID eve47 with tenant f47d5caa97d244edb5aef31a3870507a Feb 23 05:02:38 localhost podman[324240]: 2026-02-23 10:02:38.061894486 +0000 UTC m=+0.125486283 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, distribution-scope=public, architecture=x86_64, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers) Feb 23 05:02:38 localhost podman[324239]: 2026-02-23 10:02:38.07678654 +0000 UTC m=+0.141769320 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 05:02:38 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 05:02:38 localhost podman[324239]: 2026-02-23 10:02:38.08726893 +0000 UTC m=+0.152251670 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 05:02:38 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff275bf0-7ab0-4200-9dfe-d972931f7856", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:02:38 localhost 
ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff275bf0-7ab0-4200-9dfe-d972931f7856", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:38 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 05:02:38 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, tenant_id:f47d5caa97d244edb5aef31a3870507a, vol_name:cephfs) < "" Feb 23 05:02:38 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Feb 23 05:02:38 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff275bf0-7ab0-4200-9dfe-d972931f7856", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:38 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff275bf0-7ab0-4200-9dfe-d972931f7856", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:38 localhost 
ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff275bf0-7ab0-4200-9dfe-d972931f7856", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:02:39 localhost nova_compute[280321]: 2026-02-23 10:02:39.200 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:39 localhost nova_compute[280321]: 2026-02-23 10:02:39.275 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v483: 177 pgs: 177 active+clean; 197 MiB data, 1000 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 85 KiB/s wr, 52 op/s Feb 23 05:02:39 localhost nova_compute[280321]: 2026-02-23 10:02:39.535 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:40 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:40.474 263679 INFO neutron.agent.linux.ip_lib [None req-829e5140-5dcc-49f5-95bf-09e054ac3cad - - - - - -] Device tap04c735a3-87 cannot be used as it has no MAC address#033[00m Feb 23 05:02:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:02:40 localhost nova_compute[280321]: 2026-02-23 10:02:40.554 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:40 localhost kernel: device tap04c735a3-87 entered promiscuous mode Feb 23 05:02:40 localhost 
NetworkManager[5987]: [1771840960.5641] manager: (tap04c735a3-87): new Generic device (/org/freedesktop/NetworkManager/Devices/68) Feb 23 05:02:40 localhost ovn_controller[155966]: 2026-02-23T10:02:40Z|00378|binding|INFO|Claiming lport 04c735a3-8797-4246-ac4c-855911861396 for this chassis. Feb 23 05:02:40 localhost ovn_controller[155966]: 2026-02-23T10:02:40Z|00379|binding|INFO|04c735a3-8797-4246-ac4c-855911861396: Claiming unknown Feb 23 05:02:40 localhost nova_compute[280321]: 2026-02-23 10:02:40.566 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:40 localhost systemd-udevd[324294]: Network interface NamePolicy= disabled on kernel command line. Feb 23 05:02:40 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:40.575 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-78832f6d-2c11-4726-a3d2-da07de345e9e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-78832f6d-2c11-4726-a3d2-da07de345e9e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8187f536374b481bb4f9ac743cf0cac8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d51f4bb7-50c9-45d9-9a6a-3a1080b73b60, chassis=[], tunnel_key=2, 
gateway_chassis=[], requested_chassis=[], logical_port=04c735a3-8797-4246-ac4c-855911861396) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:02:40 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:40.577 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 04c735a3-8797-4246-ac4c-855911861396 in datapath 78832f6d-2c11-4726-a3d2-da07de345e9e bound to our chassis#033[00m Feb 23 05:02:40 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:40.580 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 78832f6d-2c11-4726-a3d2-da07de345e9e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:02:40 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:40.581 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[21bfd1d3-b019-4478-9c46-5869c85b3ad9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:02:40 localhost journal[229268]: ethtool ioctl error on tap04c735a3-87: No such device Feb 23 05:02:40 localhost journal[229268]: ethtool ioctl error on tap04c735a3-87: No such device Feb 23 05:02:40 localhost ovn_controller[155966]: 2026-02-23T10:02:40Z|00380|binding|INFO|Setting lport 04c735a3-8797-4246-ac4c-855911861396 ovn-installed in OVS Feb 23 05:02:40 localhost ovn_controller[155966]: 2026-02-23T10:02:40Z|00381|binding|INFO|Setting lport 04c735a3-8797-4246-ac4c-855911861396 up in Southbound Feb 23 05:02:40 localhost nova_compute[280321]: 2026-02-23 10:02:40.604 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:40 localhost nova_compute[280321]: 2026-02-23 10:02:40.606 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:40 localhost journal[229268]: ethtool ioctl error on tap04c735a3-87: No such device Feb 23 05:02:40 localhost journal[229268]: ethtool ioctl error on tap04c735a3-87: No such device Feb 23 05:02:40 localhost journal[229268]: ethtool ioctl error on tap04c735a3-87: No such device Feb 23 05:02:40 localhost journal[229268]: ethtool ioctl error on tap04c735a3-87: No such device Feb 23 05:02:40 localhost journal[229268]: ethtool ioctl error on tap04c735a3-87: No such device Feb 23 05:02:40 localhost journal[229268]: ethtool ioctl error on tap04c735a3-87: No such device Feb 23 05:02:40 localhost nova_compute[280321]: 2026-02-23 10:02:40.645 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:40 localhost nova_compute[280321]: 2026-02-23 10:02:40.676 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v484: 177 pgs: 177 active+clean; 197 MiB data, 1001 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 96 KiB/s wr, 53 op/s Feb 23 05:02:41 localhost podman[324365]: Feb 23 05:02:41 localhost podman[324365]: 2026-02-23 10:02:41.558261045 +0000 UTC m=+0.091200531 container create 2ad4087864c0671b15545745afc05c4d7bea47db81dce3843ef35f4de65b8923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78832f6d-2c11-4726-a3d2-da07de345e9e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:02:41 localhost 
systemd[1]: Started libpod-conmon-2ad4087864c0671b15545745afc05c4d7bea47db81dce3843ef35f4de65b8923.scope. Feb 23 05:02:41 localhost systemd[1]: tmp-crun.IhgkjV.mount: Deactivated successfully. Feb 23 05:02:41 localhost systemd[1]: Started libcrun container. Feb 23 05:02:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c53c397080025768062632091b6c32c3c9e36e51c756822bb5717443a3f3be3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:02:41 localhost podman[324365]: 2026-02-23 10:02:41.514828036 +0000 UTC m=+0.047767542 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:02:41 localhost podman[324365]: 2026-02-23 10:02:41.624572442 +0000 UTC m=+0.157511928 container init 2ad4087864c0671b15545745afc05c4d7bea47db81dce3843ef35f4de65b8923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78832f6d-2c11-4726-a3d2-da07de345e9e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:02:41 localhost podman[324365]: 2026-02-23 10:02:41.633920839 +0000 UTC m=+0.166860325 container start 2ad4087864c0671b15545745afc05c4d7bea47db81dce3843ef35f4de65b8923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78832f6d-2c11-4726-a3d2-da07de345e9e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
org.label-schema.schema-version=1.0) Feb 23 05:02:41 localhost dnsmasq[324383]: started, version 2.85 cachesize 150 Feb 23 05:02:41 localhost dnsmasq[324383]: DNS service limited to local subnets Feb 23 05:02:41 localhost dnsmasq[324383]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:02:41 localhost dnsmasq[324383]: warning: no upstream servers configured Feb 23 05:02:41 localhost dnsmasq-dhcp[324383]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 05:02:41 localhost dnsmasq[324383]: read /var/lib/neutron/dhcp/78832f6d-2c11-4726-a3d2-da07de345e9e/addn_hosts - 0 addresses Feb 23 05:02:41 localhost dnsmasq-dhcp[324383]: read /var/lib/neutron/dhcp/78832f6d-2c11-4726-a3d2-da07de345e9e/host Feb 23 05:02:41 localhost dnsmasq-dhcp[324383]: read /var/lib/neutron/dhcp/78832f6d-2c11-4726-a3d2-da07de345e9e/opts Feb 23 05:02:41 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ff275bf0-7ab0-4200-9dfe-d972931f7856", "auth_id": "eve47", "format": "json"}]: dispatch Feb 23 05:02:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < "" Feb 23 05:02:41 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) Feb 23 05:02:41 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Feb 23 05:02:41 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": 
"auth rm", "entity": "client.eve47"} v 0) Feb 23 05:02:41 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Feb 23 05:02:41 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:41.874 263679 INFO neutron.agent.dhcp.agent [None req-8e3aa807-62bc-4297-9810-865436f47366 - - - - - -] DHCP configuration for ports {'81bfb43c-81f8-4b01-a818-2c7583f232fa'} is completed#033[00m Feb 23 05:02:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < "" Feb 23 05:02:41 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ff275bf0-7ab0-4200-9dfe-d972931f7856", "auth_id": "eve47", "format": "json"}]: dispatch Feb 23 05:02:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < "" Feb 23 05:02:41 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a Feb 23 05:02:41 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:02:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < "" Feb 23 05:02:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 05:02:42 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Feb 23 05:02:42 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Feb 23 05:02:42 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Feb 23 05:02:42 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished Feb 23 05:02:42 localhost podman[324385]: 2026-02-23 10:02:42.522752172 +0000 UTC m=+0.092746837 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216) Feb 23 05:02:42 localhost podman[324385]: 2026-02-23 10:02:42.592955529 +0000 UTC m=+0.162950234 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:02:42 localhost 
systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 05:02:42 localhost podman[241086]: time="2026-02-23T10:02:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 05:02:42 localhost podman[241086]: @ - - [23/Feb/2026:10:02:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157713 "" "Go-http-client/1.1" Feb 23 05:02:42 localhost podman[241086]: @ - - [23/Feb/2026:10:02:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18760 "" "Go-http-client/1.1" Feb 23 05:02:43 localhost nova_compute[280321]: 2026-02-23 10:02:43.170 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v485: 177 pgs: 177 active+clean; 197 MiB data, 1001 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 44 KiB/s wr, 23 op/s Feb 23 05:02:43 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:43.805 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:02:43Z, description=, device_id=0e7dc8be-fc71-4ddd-9386-3285468bc795, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a29db309-41f9-4ee0-bc2f-702da4b4059f, ip_allocation=immediate, mac_address=fa:16:3e:6e:ec:9c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:02:38Z, description=, dns_domain=, id=78832f6d-2c11-4726-a3d2-da07de345e9e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, 
name=tempest-VolumesSnapshotTestJSON-621917800-network, port_security_enabled=True, project_id=8187f536374b481bb4f9ac743cf0cac8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=37684, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3160, status=ACTIVE, subnets=['692bea5d-0ff6-4bb5-90ac-5a0cef7fdd73'], tags=[], tenant_id=8187f536374b481bb4f9ac743cf0cac8, updated_at=2026-02-23T10:02:39Z, vlan_transparent=None, network_id=78832f6d-2c11-4726-a3d2-da07de345e9e, port_security_enabled=False, project_id=8187f536374b481bb4f9ac743cf0cac8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3213, status=DOWN, tags=[], tenant_id=8187f536374b481bb4f9ac743cf0cac8, updated_at=2026-02-23T10:02:43Z on network 78832f6d-2c11-4726-a3d2-da07de345e9e#033[00m Feb 23 05:02:43 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:43.812 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 05:02:43 localhost systemd[1]: tmp-crun.NhotVL.mount: Deactivated successfully. 
Feb 23 05:02:43 localhost dnsmasq[324383]: read /var/lib/neutron/dhcp/78832f6d-2c11-4726-a3d2-da07de345e9e/addn_hosts - 1 addresses Feb 23 05:02:43 localhost dnsmasq-dhcp[324383]: read /var/lib/neutron/dhcp/78832f6d-2c11-4726-a3d2-da07de345e9e/host Feb 23 05:02:43 localhost dnsmasq-dhcp[324383]: read /var/lib/neutron/dhcp/78832f6d-2c11-4726-a3d2-da07de345e9e/opts Feb 23 05:02:43 localhost podman[324427]: 2026-02-23 10:02:43.995887397 +0000 UTC m=+0.048050771 container kill 2ad4087864c0671b15545745afc05c4d7bea47db81dce3843ef35f4de65b8923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78832f6d-2c11-4726-a3d2-da07de345e9e, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:02:44 localhost nova_compute[280321]: 2026-02-23 10:02:44.241 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:44 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:44.246 263679 INFO neutron.agent.dhcp.agent [None req-d40ad377-64fb-49a6-975d-c4b031b9e3cf - - - - - -] DHCP configuration for ports {'a29db309-41f9-4ee0-bc2f-702da4b4059f'} is completed#033[00m Feb 23 05:02:44 localhost nova_compute[280321]: 2026-02-23 10:02:44.276 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:45 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:45.092 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, 
binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:02:43Z, description=, device_id=0e7dc8be-fc71-4ddd-9386-3285468bc795, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a29db309-41f9-4ee0-bc2f-702da4b4059f, ip_allocation=immediate, mac_address=fa:16:3e:6e:ec:9c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:02:38Z, description=, dns_domain=, id=78832f6d-2c11-4726-a3d2-da07de345e9e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesSnapshotTestJSON-621917800-network, port_security_enabled=True, project_id=8187f536374b481bb4f9ac743cf0cac8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=37684, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3160, status=ACTIVE, subnets=['692bea5d-0ff6-4bb5-90ac-5a0cef7fdd73'], tags=[], tenant_id=8187f536374b481bb4f9ac743cf0cac8, updated_at=2026-02-23T10:02:39Z, vlan_transparent=None, network_id=78832f6d-2c11-4726-a3d2-da07de345e9e, port_security_enabled=False, project_id=8187f536374b481bb4f9ac743cf0cac8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3213, status=DOWN, tags=[], tenant_id=8187f536374b481bb4f9ac743cf0cac8, updated_at=2026-02-23T10:02:43Z on network 78832f6d-2c11-4726-a3d2-da07de345e9e#033[00m Feb 23 05:02:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v486: 177 pgs: 177 active+clean; 197 MiB data, 1001 MiB used, 41 GiB / 42 GiB avail; 2.4 KiB/s rd, 43 KiB/s wr, 9 op/s Feb 23 05:02:45 localhost dnsmasq[324383]: read /var/lib/neutron/dhcp/78832f6d-2c11-4726-a3d2-da07de345e9e/addn_hosts - 1 addresses Feb 23 05:02:45 localhost dnsmasq-dhcp[324383]: read /var/lib/neutron/dhcp/78832f6d-2c11-4726-a3d2-da07de345e9e/host Feb 23 05:02:45 
localhost dnsmasq-dhcp[324383]: read /var/lib/neutron/dhcp/78832f6d-2c11-4726-a3d2-da07de345e9e/opts Feb 23 05:02:45 localhost podman[324465]: 2026-02-23 10:02:45.316656701 +0000 UTC m=+0.063417810 container kill 2ad4087864c0671b15545745afc05c4d7bea47db81dce3843ef35f4de65b8923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78832f6d-2c11-4726-a3d2-da07de345e9e, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:02:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:02:45 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:45.605 263679 INFO neutron.agent.dhcp.agent [None req-ff82fde9-c788-4503-ab56-eb69aed7f34b - - - - - -] DHCP configuration for ports {'a29db309-41f9-4ee0-bc2f-702da4b4059f'} is completed#033[00m Feb 23 05:02:46 localhost ovn_controller[155966]: 2026-02-23T10:02:46Z|00382|ovn_bfd|INFO|Enabled BFD on interface ovn-5b0126-0 Feb 23 05:02:46 localhost ovn_controller[155966]: 2026-02-23T10:02:46Z|00383|ovn_bfd|INFO|Enabled BFD on interface ovn-585d62-0 Feb 23 05:02:46 localhost ovn_controller[155966]: 2026-02-23T10:02:46Z|00384|ovn_bfd|INFO|Enabled BFD on interface ovn-b9c72d-0 Feb 23 05:02:46 localhost nova_compute[280321]: 2026-02-23 10:02:46.108 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:46 localhost nova_compute[280321]: 2026-02-23 10:02:46.126 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:46 localhost nova_compute[280321]: 2026-02-23 10:02:46.141 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:46 localhost nova_compute[280321]: 2026-02-23 10:02:46.146 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:46 localhost nova_compute[280321]: 2026-02-23 10:02:46.181 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:46 localhost nova_compute[280321]: 2026-02-23 10:02:46.202 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:46 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ff275bf0-7ab0-4200-9dfe-d972931f7856", "auth_id": "eve49", "format": "json"}]: dispatch Feb 23 05:02:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < "" Feb 23 05:02:46 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) Feb 23 05:02:46 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Feb 23 05:02:46 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0) Feb 
23 05:02:46 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Feb 23 05:02:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < "" Feb 23 05:02:46 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ff275bf0-7ab0-4200-9dfe-d972931f7856", "auth_id": "eve49", "format": "json"}]: dispatch Feb 23 05:02:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < "" Feb 23 05:02:46 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856/b70c063c-69eb-4a53-8ff2-f992c2e4cf5a Feb 23 05:02:46 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:02:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < "" Feb 23 05:02:46 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff275bf0-7ab0-4200-9dfe-d972931f7856", "format": "json"}]: dispatch Feb 23 05:02:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, 
format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:46 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:46.610+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff275bf0-7ab0-4200-9dfe-d972931f7856' of type subvolume Feb 23 05:02:46 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff275bf0-7ab0-4200-9dfe-d972931f7856' of type subvolume Feb 23 05:02:46 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff275bf0-7ab0-4200-9dfe-d972931f7856", "force": true, "format": "json"}]: dispatch Feb 23 05:02:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < "" Feb 23 05:02:46 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ff275bf0-7ab0-4200-9dfe-d972931f7856'' moved to trashcan Feb 23 05:02:46 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:02:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff275bf0-7ab0-4200-9dfe-d972931f7856, vol_name:cephfs) < "" Feb 23 05:02:46 localhost nova_compute[280321]: 2026-02-23 10:02:46.972 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:46 localhost nova_compute[280321]: 2026-02-23 10:02:46.998 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:47 localhost nova_compute[280321]: 2026-02-23 10:02:47.076 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v487: 177 pgs: 177 active+clean; 197 MiB data, 1001 MiB used, 41 GiB / 42 GiB avail; 2.4 KiB/s rd, 56 KiB/s wr, 11 op/s Feb 23 05:02:47 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Feb 23 05:02:47 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Feb 23 05:02:47 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Feb 23 05:02:47 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished Feb 23 05:02:47 localhost nova_compute[280321]: 2026-02-23 10:02:47.962 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:48.317 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:02:48 localhost ovn_metadata_agent[161837]: 2026-02-23 
10:02:48.318 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:02:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:48.318 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:02:48 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:02:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:02:48 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/.meta.tmp' Feb 23 05:02:48 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/.meta.tmp' to config b'/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/.meta' Feb 23 05:02:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:02:48 localhost ceph-mgr[285904]: 
log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "format": "json"}]: dispatch Feb 23 05:02:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:02:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:02:48 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e220 e220: 6 total, 6 up, 6 in Feb 23 05:02:49 localhost nova_compute[280321]: 2026-02-23 10:02:49.276 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v489: 177 pgs: 177 active+clean; 197 MiB data, 1001 MiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 32 KiB/s wr, 4 op/s Feb 23 05:02:50 localhost nova_compute[280321]: 2026-02-23 10:02:50.159 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "20902d73-7434-438f-9b7e-d3fbd0c8aa20", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:02:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:20902d73-7434-438f-9b7e-d3fbd0c8aa20, vol_name:cephfs) < "" Feb 23 
05:02:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20/.meta.tmp' Feb 23 05:02:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20/.meta.tmp' to config b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20/.meta' Feb 23 05:02:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:20902d73-7434-438f-9b7e-d3fbd0c8aa20, vol_name:cephfs) < "" Feb 23 05:02:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "20902d73-7434-438f-9b7e-d3fbd0c8aa20", "format": "json"}]: dispatch Feb 23 05:02:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:20902d73-7434-438f-9b7e-d3fbd0c8aa20, vol_name:cephfs) < "" Feb 23 05:02:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:02:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:20902d73-7434-438f-9b7e-d3fbd0c8aa20, vol_name:cephfs) < "" Feb 23 05:02:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:02:50 localhost ceph-mgr[285904]: [volumes 
INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11, vol_name:cephfs) < "" Feb 23 05:02:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11/.meta.tmp' Feb 23 05:02:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11/.meta.tmp' to config b'/volumes/_nogroup/9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11/.meta' Feb 23 05:02:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11, vol_name:cephfs) < "" Feb 23 05:02:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11", "format": "json"}]: dispatch Feb 23 05:02:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11, vol_name:cephfs) < "" Feb 23 05:02:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11, vol_name:cephfs) < "" Feb 23 05:02:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 05:02:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 05:02:51 localhost systemd[1]: tmp-crun.pGa8EU.mount: Deactivated successfully. Feb 23 05:02:51 localhost podman[324504]: 2026-02-23 10:02:51.009594014 +0000 UTC m=+0.080914365 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, 
org.label-schema.schema-version=1.0, tcib_managed=true) Feb 23 05:02:51 localhost dnsmasq[324160]: read /var/lib/neutron/dhcp/d25b4a94-8b22-46ea-b838-1e6a8af6f7b4/addn_hosts - 0 addresses Feb 23 05:02:51 localhost systemd[1]: tmp-crun.EG8CSn.mount: Deactivated successfully. Feb 23 05:02:51 localhost dnsmasq-dhcp[324160]: read /var/lib/neutron/dhcp/d25b4a94-8b22-46ea-b838-1e6a8af6f7b4/host Feb 23 05:02:51 localhost podman[324524]: 2026-02-23 10:02:51.028131361 +0000 UTC m=+0.055880280 container kill 260519ac2bb7bad447b17842b472d370f536bc98a1329d6760f5884c66a5954a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d25b4a94-8b22-46ea-b838-1e6a8af6f7b4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:02:51 localhost dnsmasq-dhcp[324160]: read /var/lib/neutron/dhcp/d25b4a94-8b22-46ea-b838-1e6a8af6f7b4/opts Feb 23 05:02:51 localhost podman[324504]: 2026-02-23 10:02:51.09349385 +0000 UTC m=+0.164814231 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent) Feb 23 05:02:51 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. 
Feb 23 05:02:51 localhost podman[324505]: 2026-02-23 10:02:51.066693941 +0000 UTC m=+0.132178294 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0) Feb 23 05:02:51 localhost podman[324505]: 2026-02-23 10:02:51.148885104 +0000 UTC m=+0.214369457 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:02:51 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 05:02:51 localhost ovn_controller[155966]: 2026-02-23T10:02:51Z|00385|binding|INFO|Releasing lport e5089462-1226-410d-a4c0-f70ef1f5b9b9 from this chassis (sb_readonly=0) Feb 23 05:02:51 localhost kernel: device tape5089462-12 left promiscuous mode Feb 23 05:02:51 localhost nova_compute[280321]: 2026-02-23 10:02:51.211 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:51 localhost ovn_controller[155966]: 2026-02-23T10:02:51Z|00386|binding|INFO|Setting lport e5089462-1226-410d-a4c0-f70ef1f5b9b9 down in Southbound Feb 23 05:02:51 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:51.222 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-d25b4a94-8b22-46ea-b838-1e6a8af6f7b4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d25b4a94-8b22-46ea-b838-1e6a8af6f7b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68a48b471ed84048aeb651374fff5111', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], 
mirror_rules=[], datapath=a5fff032-30f3-4a56-b1f6-f92733fd5f68, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e5089462-1226-410d-a4c0-f70ef1f5b9b9) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:02:51 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:51.224 161842 INFO neutron.agent.ovn.metadata.agent [-] Port e5089462-1226-410d-a4c0-f70ef1f5b9b9 in datapath d25b4a94-8b22-46ea-b838-1e6a8af6f7b4 unbound from our chassis#033[00m Feb 23 05:02:51 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:51.226 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d25b4a94-8b22-46ea-b838-1e6a8af6f7b4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:02:51 localhost ovn_metadata_agent[161837]: 2026-02-23 10:02:51.227 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[deb0f622-d296-4374-bb9d-0aeae5de894e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:02:51 localhost nova_compute[280321]: 2026-02-23 10:02:51.228 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v490: 177 pgs: 177 active+clean; 197 MiB data, 1002 MiB used, 41 GiB / 42 GiB avail; 6.0 KiB/s rd, 57 KiB/s wr, 15 op/s Feb 23 05:02:51 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:02:51 
localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:02:51 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 23 05:02:51 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:02:51 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:02:51 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:02:51 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:51 localhost dnsmasq[324160]: exiting on receipt of SIGTERM Feb 23 05:02:51 localhost podman[324582]: 2026-02-23 10:02:51.936528553 +0000 UTC m=+0.062018127 container kill 
260519ac2bb7bad447b17842b472d370f536bc98a1329d6760f5884c66a5954a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d25b4a94-8b22-46ea-b838-1e6a8af6f7b4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:02:51 localhost systemd[1]: libpod-260519ac2bb7bad447b17842b472d370f536bc98a1329d6760f5884c66a5954a.scope: Deactivated successfully. Feb 23 05:02:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:02:52 localhost podman[324595]: 2026-02-23 10:02:52.002602914 +0000 UTC m=+0.053544968 container died 260519ac2bb7bad447b17842b472d370f536bc98a1329d6760f5884c66a5954a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d25b4a94-8b22-46ea-b838-1e6a8af6f7b4, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:02:52 localhost systemd[1]: tmp-crun.2M3X33.mount: Deactivated successfully. 
Feb 23 05:02:52 localhost podman[324595]: 2026-02-23 10:02:52.046506977 +0000 UTC m=+0.097448991 container cleanup 260519ac2bb7bad447b17842b472d370f536bc98a1329d6760f5884c66a5954a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d25b4a94-8b22-46ea-b838-1e6a8af6f7b4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:02:52 localhost systemd[1]: libpod-conmon-260519ac2bb7bad447b17842b472d370f536bc98a1329d6760f5884c66a5954a.scope: Deactivated successfully. Feb 23 05:02:52 localhost podman[324602]: 2026-02-23 10:02:52.071125 +0000 UTC m=+0.107119537 container remove 260519ac2bb7bad447b17842b472d370f536bc98a1329d6760f5884c66a5954a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d25b4a94-8b22-46ea-b838-1e6a8af6f7b4, org.label-schema.build-date=20260216, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 23 05:02:52 localhost nova_compute[280321]: 2026-02-23 10:02:52.089 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:52 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:02:52.122 263679 INFO neutron.agent.dhcp.agent [None req-ae191654-6044-416d-8b10-e0b78eb0b6f6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:02:52 localhost neutron_dhcp_agent[263675]: 2026-02-23 
10:02:52.122 263679 INFO neutron.agent.dhcp.agent [None req-ae191654-6044-416d-8b10-e0b78eb0b6f6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:02:52 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:02:52 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:52 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:02:52 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:02:53 localhost systemd[1]: var-lib-containers-storage-overlay-83be46e7c16057073303c95f6c9d564bd349dda94dbf7ac662f1e6aaf9489e0b-merged.mount: Deactivated successfully. 
Feb 23 05:02:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-260519ac2bb7bad447b17842b472d370f536bc98a1329d6760f5884c66a5954a-userdata-shm.mount: Deactivated successfully. Feb 23 05:02:53 localhost systemd[1]: run-netns-qdhcp\x2dd25b4a94\x2d8b22\x2d46ea\x2db838\x2d1e6a8af6f7b4.mount: Deactivated successfully. Feb 23 05:02:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v491: 177 pgs: 177 active+clean; 197 MiB data, 1002 MiB used, 41 GiB / 42 GiB avail; 6.5 KiB/s rd, 56 KiB/s wr, 15 op/s Feb 23 05:02:53 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e221 e221: 6 total, 6 up, 6 in Feb 23 05:02:53 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "20902d73-7434-438f-9b7e-d3fbd0c8aa20", "snap_name": "a9baadce-a22e-41ca-bf91-5533058fa60f", "format": "json"}]: dispatch Feb 23 05:02:53 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a9baadce-a22e-41ca-bf91-5533058fa60f, sub_name:20902d73-7434-438f-9b7e-d3fbd0c8aa20, vol_name:cephfs) < "" Feb 23 05:02:53 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a9baadce-a22e-41ca-bf91-5533058fa60f, sub_name:20902d73-7434-438f-9b7e-d3fbd0c8aa20, vol_name:cephfs) < "" Feb 23 05:02:54 localhost nova_compute[280321]: 2026-02-23 10:02:54.314 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:54 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11", "format": 
"json"}]: dispatch Feb 23 05:02:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:54 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:54.960+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11' of type subvolume Feb 23 05:02:54 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11' of type subvolume Feb 23 05:02:54 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11", "force": true, "format": "json"}]: dispatch Feb 23 05:02:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11, vol_name:cephfs) < "" Feb 23 05:02:54 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11'' moved to trashcan Feb 23 05:02:54 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:02:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9a6060c0-ab8f-4a4f-a7a8-36c764fc6c11, vol_name:cephfs) < "" Feb 23 
05:02:55 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "format": "json"}]: dispatch Feb 23 05:02:55 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:02:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 23 05:02:55 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:02:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 23 05:02:55 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:02:55 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:02:55 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "format": "json"}]: dispatch Feb 23 05:02:55 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, 
sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:02:55 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:02:55 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:02:55 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:02:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v493: 177 pgs: 177 active+clean; 197 MiB data, 1002 MiB used, 41 GiB / 42 GiB avail; 8.1 KiB/s rd, 51 KiB/s wr, 17 op/s Feb 23 05:02:55 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c65f4d3c-90b6-4215-892b-9d6eb1a375b1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:02:55 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c65f4d3c-90b6-4215-892b-9d6eb1a375b1, vol_name:cephfs) < "" Feb 23 05:02:55 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1/.meta.tmp' Feb 23 05:02:55 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1/.meta.tmp' to config b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1/.meta' Feb 23 05:02:55 localhost ceph-mgr[285904]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c65f4d3c-90b6-4215-892b-9d6eb1a375b1, vol_name:cephfs) < "" Feb 23 05:02:55 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c65f4d3c-90b6-4215-892b-9d6eb1a375b1", "format": "json"}]: dispatch Feb 23 05:02:55 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c65f4d3c-90b6-4215-892b-9d6eb1a375b1, vol_name:cephfs) < "" Feb 23 05:02:55 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c65f4d3c-90b6-4215-892b-9d6eb1a375b1, vol_name:cephfs) < "" Feb 23 05:02:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:02:55 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:02:55 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:02:55 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:02:55 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 23 05:02:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 05:02:55 localhost podman[324626]: 2026-02-23 10:02:55.996862875 +0000 UTC m=+0.071340743 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 05:02:56 localhost podman[324626]: 2026-02-23 10:02:56.007846491 +0000 UTC m=+0.082324409 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 05:02:56 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 
2026-02-23 10:02:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.094 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no 
resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:02:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:02:57 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e222 e222: 6 total, 6 up, 6 in Feb 23 05:02:57 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "20902d73-7434-438f-9b7e-d3fbd0c8aa20", "snap_name": "a9baadce-a22e-41ca-bf91-5533058fa60f", "target_sub_name": "1730ed94-3e57-4cba-99cd-1fafcc2f97aa", "format": "json"}]: dispatch Feb 23 05:02:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:a9baadce-a22e-41ca-bf91-5533058fa60f, sub_name:20902d73-7434-438f-9b7e-d3fbd0c8aa20, 
target_sub_name:1730ed94-3e57-4cba-99cd-1fafcc2f97aa, vol_name:cephfs) < "" Feb 23 05:02:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v495: 177 pgs: 177 active+clean; 198 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 60 KiB/s rd, 122 KiB/s wr, 91 op/s Feb 23 05:02:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/1730ed94-3e57-4cba-99cd-1fafcc2f97aa/.meta.tmp' Feb 23 05:02:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1730ed94-3e57-4cba-99cd-1fafcc2f97aa/.meta.tmp' to config b'/volumes/_nogroup/1730ed94-3e57-4cba-99cd-1fafcc2f97aa/.meta' Feb 23 05:02:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 0cd03410-8166-42b2-b0ad-514a124bc9cb for path b'/volumes/_nogroup/1730ed94-3e57-4cba-99cd-1fafcc2f97aa' Feb 23 05:02:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20/.meta.tmp' Feb 23 05:02:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20/.meta.tmp' to config b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20/.meta' Feb 23 05:02:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:02:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:a9baadce-a22e-41ca-bf91-5533058fa60f, sub_name:20902d73-7434-438f-9b7e-d3fbd0c8aa20, target_sub_name:1730ed94-3e57-4cba-99cd-1fafcc2f97aa, vol_name:cephfs) < "" Feb 23 05:02:57 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs 
clone status", "vol_name": "cephfs", "clone_name": "1730ed94-3e57-4cba-99cd-1fafcc2f97aa", "format": "json"}]: dispatch Feb 23 05:02:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1730ed94-3e57-4cba-99cd-1fafcc2f97aa, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:57 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:57.472+0000 7fc3c04b9640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:57.472+0000 7fc3c04b9640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:57.472+0000 7fc3c04b9640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:57.472+0000 7fc3c04b9640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:57.472+0000 7fc3c04b9640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-mgr[285904]: [volumes INFO 
volumes.module] Finishing _cmd_fs_clone_status(clone_name:1730ed94-3e57-4cba-99cd-1fafcc2f97aa, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:02:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/1730ed94-3e57-4cba-99cd-1fafcc2f97aa Feb 23 05:02:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 1730ed94-3e57-4cba-99cd-1fafcc2f97aa) Feb 23 05:02:57 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:57.503+0000 7fc3becb6640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:57.503+0000 7fc3becb6640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:57.503+0000 7fc3becb6640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:57.503+0000 7fc3becb6640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:02:57.503+0000 7fc3becb6640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost 
ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:02:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 1730ed94-3e57-4cba-99cd-1fafcc2f97aa) -- by 0 seconds Feb 23 05:02:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/1730ed94-3e57-4cba-99cd-1fafcc2f97aa/.meta.tmp' Feb 23 05:02:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1730ed94-3e57-4cba-99cd-1fafcc2f97aa/.meta.tmp' to config b'/volumes/_nogroup/1730ed94-3e57-4cba-99cd-1fafcc2f97aa/.meta' Feb 23 05:02:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "r", "format": "json"}]: dispatch Feb 23 05:02:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:02:58 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 05:02:58 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/475540389' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 05:02:58 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 05:02:58 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/475540389' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 05:02:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v496: 177 pgs: 177 active+clean; 198 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 71 KiB/s wr, 76 op/s Feb 23 05:02:59 localhost nova_compute[280321]: 2026-02-23 10:02:59.319 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:02:59 localhost nova_compute[280321]: 2026-02-23 10:02:59.321 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:02:59 localhost nova_compute[280321]: 2026-02-23 10:02:59.321 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:02:59 localhost nova_compute[280321]: 2026-02-23 10:02:59.322 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:02:59 localhost nova_compute[280321]: 2026-02-23 10:02:59.351 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:02:59 localhost nova_compute[280321]: 2026-02-23 10:02:59.352 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:03:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 23 05:03:00 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' 
cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:03:00 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20/.snap/a9baadce-a22e-41ca-bf91-5533058fa60f/8235a558-2471-4fcf-be01-95367027debf' to b'/volumes/_nogroup/1730ed94-3e57-4cba-99cd-1fafcc2f97aa/111535d7-6752-451b-9242-9a0c749ae7d7' Feb 23 05:03:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:03:00 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/1730ed94-3e57-4cba-99cd-1fafcc2f97aa/.meta.tmp' Feb 23 
05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1730ed94-3e57-4cba-99cd-1fafcc2f97aa/.meta.tmp' to config b'/volumes/_nogroup/1730ed94-3e57-4cba-99cd-1fafcc2f97aa/.meta' Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.clone_index] untracking 0cd03410-8166-42b2-b0ad-514a124bc9cb Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20/.meta.tmp' Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20/.meta.tmp' to config b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20/.meta' Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/1730ed94-3e57-4cba-99cd-1fafcc2f97aa/.meta.tmp' Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1730ed94-3e57-4cba-99cd-1fafcc2f97aa/.meta.tmp' to config b'/volumes/_nogroup/1730ed94-3e57-4cba-99cd-1fafcc2f97aa/.meta' Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 1730ed94-3e57-4cba-99cd-1fafcc2f97aa) Feb 23 05:03:00 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:03:00 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r 
pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:00 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:00 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:03:00 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c2e06815-30f9-4d3c-bbe4-f2d82eac2683", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c2e06815-30f9-4d3c-bbe4-f2d82eac2683, vol_name:cephfs) < "" Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c2e06815-30f9-4d3c-bbe4-f2d82eac2683/.meta.tmp' Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c2e06815-30f9-4d3c-bbe4-f2d82eac2683/.meta.tmp' to config 
b'/volumes/_nogroup/c2e06815-30f9-4d3c-bbe4-f2d82eac2683/.meta' Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c2e06815-30f9-4d3c-bbe4-f2d82eac2683, vol_name:cephfs) < "" Feb 23 05:03:00 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c2e06815-30f9-4d3c-bbe4-f2d82eac2683", "format": "json"}]: dispatch Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c2e06815-30f9-4d3c-bbe4-f2d82eac2683, vol_name:cephfs) < "" Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c2e06815-30f9-4d3c-bbe4-f2d82eac2683, vol_name:cephfs) < "" Feb 23 05:03:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:03:00 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c65f4d3c-90b6-4215-892b-9d6eb1a375b1", "snap_name": "37d3cec1-9f1f-45f8-814b-ceabcde60a0c", "format": "json"}]: dispatch Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:37d3cec1-9f1f-45f8-814b-ceabcde60a0c, sub_name:c65f4d3c-90b6-4215-892b-9d6eb1a375b1, vol_name:cephfs) < "" Feb 23 05:03:00 localhost sshd[324672]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:03:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:37d3cec1-9f1f-45f8-814b-ceabcde60a0c, sub_name:c65f4d3c-90b6-4215-892b-9d6eb1a375b1, vol_name:cephfs) < "" Feb 23 05:03:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v497: 177 pgs: 177 active+clean; 198 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 132 KiB/s wr, 118 op/s Feb 23 05:03:01 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e223 e223: 6 total, 6 up, 6 in Feb 23 05:03:01 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "format": "json"}]: dispatch Feb 23 05:03:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:01 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:01.615 2 INFO neutron.agent.securitygroups_rpc [None req-b3584fe0-66dc-4b0f-aa46-a4b6d8c2c0ea a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['26548beb-0e57-409b-96fc-150c1ca0653f']#033[00m Feb 23 05:03:01 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 23 05:03:01 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:03:01 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 23 05:03:01 localhost ceph-mon[296755]: 
log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:03:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:01 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "format": "json"}]: dispatch Feb 23 05:03:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:01 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:03:01 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:03:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:01 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:01.817 2 INFO neutron.agent.securitygroups_rpc [None req-f7ece8f8-128a-45ae-8798-1ff174ac4586 a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['26548beb-0e57-409b-96fc-150c1ca0653f']#033[00m Feb 23 05:03:01 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "c65f4d3c-90b6-4215-892b-9d6eb1a375b1", "snap_name": "37d3cec1-9f1f-45f8-814b-ceabcde60a0c", "target_sub_name": "77119be1-a395-4613-8428-4049b6a55ee4", "format": "json"}]: dispatch Feb 23 05:03:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:37d3cec1-9f1f-45f8-814b-ceabcde60a0c, sub_name:c65f4d3c-90b6-4215-892b-9d6eb1a375b1, target_sub_name:77119be1-a395-4613-8428-4049b6a55ee4, vol_name:cephfs) < "" Feb 23 05:03:01 localhost openstack_network_exporter[243519]: ERROR 10:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 05:03:01 localhost openstack_network_exporter[243519]: Feb 23 05:03:01 localhost openstack_network_exporter[243519]: ERROR 10:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 05:03:01 localhost openstack_network_exporter[243519]: Feb 23 05:03:02 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/77119be1-a395-4613-8428-4049b6a55ee4/.meta.tmp' Feb 23 05:03:02 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/77119be1-a395-4613-8428-4049b6a55ee4/.meta.tmp' to config b'/volumes/_nogroup/77119be1-a395-4613-8428-4049b6a55ee4/.meta' Feb 23 05:03:02 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 5083fc8c-4872-41c8-8cfb-7e5d713adf7a for path b'/volumes/_nogroup/77119be1-a395-4613-8428-4049b6a55ee4' Feb 23 05:03:02 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1/.meta.tmp' Feb 23 05:03:02 localhost ceph-mgr[285904]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1/.meta.tmp' to config b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1/.meta' Feb 23 05:03:02 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:03:02 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:37d3cec1-9f1f-45f8-814b-ceabcde60a0c, sub_name:c65f4d3c-90b6-4215-892b-9d6eb1a375b1, target_sub_name:77119be1-a395-4613-8428-4049b6a55ee4, vol_name:cephfs) < "" Feb 23 05:03:02 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/77119be1-a395-4613-8428-4049b6a55ee4 Feb 23 05:03:02 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 77119be1-a395-4613-8428-4049b6a55ee4) Feb 23 05:03:02 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "77119be1-a395-4613-8428-4049b6a55ee4", "format": "json"}]: dispatch Feb 23 05:03:02 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:77119be1-a395-4613-8428-4049b6a55ee4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:02 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:03:02 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:03:02 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:03:02 
localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 23 05:03:02 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:02.441 2 INFO neutron.agent.securitygroups_rpc [None req-0700196d-2b57-4e61-9046-6185a324d5af a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['90bc54da-a8e2-4a34-bb37-4fcf0d4c07c4']#033[00m Feb 23 05:03:02 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:02.574 2 INFO neutron.agent.securitygroups_rpc [None req-26370991-3c47-4f17-8c42-2e91914f0a91 a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['90bc54da-a8e2-4a34-bb37-4fcf0d4c07c4']#033[00m Feb 23 05:03:02 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:02.718 2 INFO neutron.agent.securitygroups_rpc [None req-0212ca46-343b-4ade-95ab-95bdce90a790 a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['90bc54da-a8e2-4a34-bb37-4fcf0d4c07c4']#033[00m Feb 23 05:03:02 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:02.845 2 INFO neutron.agent.securitygroups_rpc [None req-bb2604e0-3280-4b44-93fb-d763a31d90c5 a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['90bc54da-a8e2-4a34-bb37-4fcf0d4c07c4']#033[00m Feb 23 05:03:02 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:02.996 2 INFO neutron.agent.securitygroups_rpc [None req-c9497ba8-e7cb-4ffb-8fd8-5a9e080024dc a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['90bc54da-a8e2-4a34-bb37-4fcf0d4c07c4']#033[00m Feb 23 05:03:03 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:03.120 2 INFO neutron.agent.securitygroups_rpc [None req-d0ec72bc-8566-4325-bbab-9dfaee70a365 a9be4932f1a84a8293065e9227797a47 
d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['90bc54da-a8e2-4a34-bb37-4fcf0d4c07c4']#033[00m Feb 23 05:03:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v499: 177 pgs: 177 active+clean; 198 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 83 KiB/s rd, 134 KiB/s wr, 124 op/s Feb 23 05:03:03 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:03.443 2 INFO neutron.agent.securitygroups_rpc [None req-ddb1651b-3ac2-4a8f-8977-00c142537a52 a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['90bc54da-a8e2-4a34-bb37-4fcf0d4c07c4']#033[00m Feb 23 05:03:03 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:03.602 2 INFO neutron.agent.securitygroups_rpc [None req-907a0d0b-deea-47b0-a1db-b229494a62fe a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['90bc54da-a8e2-4a34-bb37-4fcf0d4c07c4']#033[00m Feb 23 05:03:03 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:03.780 2 INFO neutron.agent.securitygroups_rpc [None req-bbab3356-bbf6-432a-a4a9-3be3927ef59f a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['90bc54da-a8e2-4a34-bb37-4fcf0d4c07c4']#033[00m Feb 23 05:03:03 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:03.960 2 INFO neutron.agent.securitygroups_rpc [None req-a3d9205c-35b8-4d92-aa27-e429ab8dbfe2 a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['90bc54da-a8e2-4a34-bb37-4fcf0d4c07c4']#033[00m Feb 23 05:03:04 localhost nova_compute[280321]: 2026-02-23 10:03:04.352 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:03:04 localhost nova_compute[280321]: 2026-02-23 10:03:04.354 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:03:04 localhost nova_compute[280321]: 2026-02-23 10:03:04.354 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:03:04 localhost nova_compute[280321]: 2026-02-23 10:03:04.354 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:03:04 localhost nova_compute[280321]: 2026-02-23 10:03:04.388 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:03:04 localhost nova_compute[280321]: 2026-02-23 10:03:04.389 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:03:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e224 e224: 6 total, 6 up, 6 in Feb 23 05:03:04 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:04.636 2 INFO neutron.agent.securitygroups_rpc [None req-daa2f3e5-708b-419e-b743-0c985d38df1b a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['087f2ead-29df-46bd-b356-e193c3a6c3a1']#033[00m Feb 23 05:03:05 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. 
Feb 23 05:03:05 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 77119be1-a395-4613-8428-4049b6a55ee4) -- by 0 seconds Feb 23 05:03:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:77119be1-a395-4613-8428-4049b6a55ee4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:05 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/77119be1-a395-4613-8428-4049b6a55ee4/.meta.tmp' Feb 23 05:03:05 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/77119be1-a395-4613-8428-4049b6a55ee4/.meta.tmp' to config b'/volumes/_nogroup/77119be1-a395-4613-8428-4049b6a55ee4/.meta' Feb 23 05:03:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_10:03:05 Feb 23 05:03:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 05:03:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap Feb 23 05:03:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['.mgr', 'manila_metadata', 'backups', 'images', 'manila_data', 'vms', 'volumes'] Feb 23 05:03:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes Feb 23 05:03:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:03:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:03:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:03:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:03:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 05:03:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: []
Feb 23 05:03:05 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c2e06815-30f9-4d3c-bbe4-f2d82eac2683", "snap_name": "9df5708b-0dc0-46f9-a0f4-8a66460e11a4", "format": "json"}]: dispatch
Feb 23 05:03:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9df5708b-0dc0-46f9-a0f4-8a66460e11a4, sub_name:c2e06815-30f9-4d3c-bbe4-f2d82eac2683, vol_name:cephfs) < ""
Feb 23 05:03:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v501: 177 pgs: 177 active+clean; 198 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 64 KiB/s wr, 50 op/s
Feb 23 05:03:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust
Feb 23 05:03:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:03:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Feb 23 05:03:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:03:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32)
Feb 23 05:03:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:03:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014875629594732925 of space, bias 1.0, pg target 0.2970167375748341 quantized to 32 (current 32)
Feb 23 05:03:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:03:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Feb 23 05:03:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:03:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Feb 23 05:03:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:03:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.5445514610087475e-06 of space, bias 1.0, pg target 0.0005063657407407407 quantized to 32 (current 32)
Feb 23 05:03:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:03:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0004631992427414852 of space, bias 4.0, pg target 0.36870659722222227 quantized to 16 (current 16)
Feb 23 05:03:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 23 05:03:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 23 05:03:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 23 05:03:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 23 05:03:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 23 05:03:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 23 05:03:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 23 05:03:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 23 05:03:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 23 05:03:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 23 05:03:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:03:05 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:05.781 2 INFO neutron.agent.securitygroups_rpc [None req-931492cc-c107-4f00-af23-020a7b1bfb4a a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['33c4ddfa-59ae-40a7-8c2f-cf1ffe09eb9f']#033[00m
Feb 23 05:03:05 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:05.921 2 INFO neutron.agent.securitygroups_rpc [None req-8d452d14-dd60-417e-87bd-a7a2372759de a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['33c4ddfa-59ae-40a7-8c2f-cf1ffe09eb9f']#033[00m
Feb 23 05:03:06 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:06.751 2 INFO neutron.agent.securitygroups_rpc [None req-43388687-9a00-4563-af08-790772899ea4 a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['e9d6e743-53d6-4e9c-950f-ebadc1a82c0f']#033[00m
Feb 23 05:03:06 localhost ovn_controller[155966]: 2026-02-23T10:03:06Z|00387|ovn_bfd|INFO|Disabled BFD on interface ovn-5b0126-0
Feb 23 05:03:06 localhost ovn_controller[155966]: 2026-02-23T10:03:06Z|00388|ovn_bfd|INFO|Disabled BFD on interface ovn-585d62-0
Feb 23 05:03:06 localhost ovn_controller[155966]: 2026-02-23T10:03:06Z|00389|ovn_bfd|INFO|Disabled BFD on interface ovn-b9c72d-0
Feb 23 05:03:06 localhost nova_compute[280321]: 2026-02-23 10:03:06.868 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:03:06 localhost nova_compute[280321]: 2026-02-23 10:03:06.871 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:03:06 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:06.873 2 INFO neutron.agent.securitygroups_rpc [None req-8bc11e13-0374-4d0f-a420-dc9bed7775c4 a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['e9d6e743-53d6-4e9c-950f-ebadc1a82c0f']#033[00m
Feb 23 05:03:06 localhost nova_compute[280321]: 2026-02-23 10:03:06.892 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:03:07 localhost dnsmasq[324383]: read /var/lib/neutron/dhcp/78832f6d-2c11-4726-a3d2-da07de345e9e/addn_hosts - 0 addresses
Feb 23 05:03:07 localhost dnsmasq-dhcp[324383]: read /var/lib/neutron/dhcp/78832f6d-2c11-4726-a3d2-da07de345e9e/host
Feb 23 05:03:07 localhost podman[324692]: 2026-02-23 10:03:07.020254225 +0000 UTC m=+0.065090292 container kill 2ad4087864c0671b15545745afc05c4d7bea47db81dce3843ef35f4de65b8923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78832f6d-2c11-4726-a3d2-da07de345e9e, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 05:03:07 localhost dnsmasq-dhcp[324383]: read /var/lib/neutron/dhcp/78832f6d-2c11-4726-a3d2-da07de345e9e/opts
Feb 23 05:03:07 localhost nova_compute[280321]: 2026-02-23 10:03:07.180 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:03:07 localhost kernel: device tap04c735a3-87 left promiscuous mode
Feb 23 05:03:07 localhost ovn_controller[155966]: 2026-02-23T10:03:07Z|00390|binding|INFO|Releasing lport 04c735a3-8797-4246-ac4c-855911861396 from this chassis (sb_readonly=0)
Feb 23 05:03:07 localhost ovn_controller[155966]: 2026-02-23T10:03:07Z|00391|binding|INFO|Setting lport 04c735a3-8797-4246-ac4c-855911861396 down in Southbound
Feb 23 05:03:07 localhost ovn_metadata_agent[161837]: 2026-02-23 10:03:07.189 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-78832f6d-2c11-4726-a3d2-da07de345e9e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-78832f6d-2c11-4726-a3d2-da07de345e9e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8187f536374b481bb4f9ac743cf0cac8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d51f4bb7-50c9-45d9-9a6a-3a1080b73b60, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=04c735a3-8797-4246-ac4c-855911861396) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 23 05:03:07 localhost ovn_metadata_agent[161837]: 2026-02-23 10:03:07.191 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 04c735a3-8797-4246-ac4c-855911861396 in datapath 78832f6d-2c11-4726-a3d2-da07de345e9e unbound from our chassis#033[00m
Feb 23 05:03:07 localhost ovn_metadata_agent[161837]: 2026-02-23 10:03:07.193 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 78832f6d-2c11-4726-a3d2-da07de345e9e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 23 05:03:07 localhost ovn_metadata_agent[161837]: 2026-02-23 10:03:07.194 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2ca2cc-40ad-441e-8354-579208b07d59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 23 05:03:07 localhost nova_compute[280321]: 2026-02-23 10:03:07.203 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:03:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v502: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 93 KiB/s rd, 115 KiB/s wr, 137 op/s
Feb 23 05:03:07 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e225 e225: 6 total, 6 up, 6 in
Feb 23 05:03:07 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:07.695 2 INFO neutron.agent.securitygroups_rpc [None req-7032d099-2d57-4911-980c-bd2a477e3e37 a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['b2b511b6-235a-4475-b039-adf8e8bf337f']#033[00m
Feb 23 05:03:07 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:07.904 2 INFO neutron.agent.securitygroups_rpc [None req-09a1ef76-51b7-437b-b37e-bb449ab05579 a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['b2b511b6-235a-4475-b039-adf8e8bf337f']#033[00m
Feb 23 05:03:08 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:08.079 2 INFO neutron.agent.securitygroups_rpc [None req-6957b0df-fd0d-456d-b289-5c8ae6b2d3ab a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['b2b511b6-235a-4475-b039-adf8e8bf337f']#033[00m
Feb 23 05:03:08 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:08.358 2 INFO neutron.agent.securitygroups_rpc [None req-2e64ea36-7ced-48d5-9b78-d6c2b3b8afa9 a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['b2b511b6-235a-4475-b039-adf8e8bf337f']#033[00m
Feb 23 05:03:08 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:08.645 2 INFO neutron.agent.securitygroups_rpc [None req-ac68416c-3f8b-4a61-93f4-c722d3303f27 a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['b2b511b6-235a-4475-b039-adf8e8bf337f']#033[00m
Feb 23 05:03:08 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:08.908 2 INFO neutron.agent.securitygroups_rpc [None req-b0017ab9-a450-4286-8fae-6ccd40fd4966 a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['b2b511b6-235a-4475-b039-adf8e8bf337f']#033[00m
Feb 23 05:03:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.
Feb 23 05:03:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.
Feb 23 05:03:09 localhost systemd[1]: tmp-crun.6O8tFW.mount: Deactivated successfully.
Feb 23 05:03:09 localhost podman[324725]: 2026-02-23 10:03:09.035653454 +0000 UTC m=+0.096084379 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, distribution-scope=public, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, release=1770267347, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Feb 23 05:03:09 localhost podman[324724]: 2026-02-23 10:03:09.083484497 +0000 UTC m=+0.146715458 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 23 05:03:09 localhost podman[324725]: 2026-02-23 10:03:09.101092065 +0000 UTC m=+0.161522940 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, config_id=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 23 05:03:09 localhost dnsmasq[324383]: exiting on receipt of SIGTERM
Feb 23 05:03:09 localhost podman[324749]: 2026-02-23 10:03:09.121143569 +0000 UTC m=+0.122864038 container kill 2ad4087864c0671b15545745afc05c4d7bea47db81dce3843ef35f4de65b8923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78832f6d-2c11-4726-a3d2-da07de345e9e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 23 05:03:09 localhost systemd[1]: libpod-2ad4087864c0671b15545745afc05c4d7bea47db81dce3843ef35f4de65b8923.scope: Deactivated successfully.
Feb 23 05:03:09 localhost podman[324724]: 2026-02-23 10:03:09.166521537 +0000 UTC m=+0.229752548 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 23 05:03:09 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully.
Feb 23 05:03:09 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully.
Feb 23 05:03:09 localhost podman[324780]: 2026-02-23 10:03:09.195157123 +0000 UTC m=+0.060988377 container died 2ad4087864c0671b15545745afc05c4d7bea47db81dce3843ef35f4de65b8923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78832f6d-2c11-4726-a3d2-da07de345e9e, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 23 05:03:09 localhost podman[324780]: 2026-02-23 10:03:09.224959754 +0000 UTC m=+0.090790978 container cleanup 2ad4087864c0671b15545745afc05c4d7bea47db81dce3843ef35f4de65b8923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78832f6d-2c11-4726-a3d2-da07de345e9e, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 05:03:09 localhost systemd[1]: libpod-conmon-2ad4087864c0671b15545745afc05c4d7bea47db81dce3843ef35f4de65b8923.scope: Deactivated successfully.
Feb 23 05:03:09 localhost podman[324782]: 2026-02-23 10:03:09.288382814 +0000 UTC m=+0.144408978 container remove 2ad4087864c0671b15545745afc05c4d7bea47db81dce3843ef35f4de65b8923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78832f6d-2c11-4726-a3d2-da07de345e9e, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 05:03:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v504: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 53 KiB/s wr, 93 op/s
Feb 23 05:03:09 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:03:09.331 263679 INFO neutron.agent.dhcp.agent [None req-430ab580-9131-47ea-b3c3-2b31c17f6ec3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Feb 23 05:03:09 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:03:09.349 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Feb 23 05:03:09 localhost nova_compute[280321]: 2026-02-23 10:03:09.442 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:03:09 localhost nova_compute[280321]: 2026-02-23 10:03:09.555 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:03:09 localhost neutron_sriov_agent[256355]: 2026-02-23 10:03:09.747 2 INFO neutron.agent.securitygroups_rpc [None req-b5ba1047-11d6-4902-ae19-bf3c18cfb931 a9be4932f1a84a8293065e9227797a47 d45d0b9da54741348d1d12c73041586e - - default default] Security group rule updated ['9ad178d0-3a41-40dd-be58-0e7ebb53d59d']#033[00m
Feb 23 05:03:10 localhost systemd[1]: var-lib-containers-storage-overlay-7c53c397080025768062632091b6c32c3c9e36e51c756822bb5717443a3f3be3-merged.mount: Deactivated successfully.
Feb 23 05:03:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ad4087864c0671b15545745afc05c4d7bea47db81dce3843ef35f4de65b8923-userdata-shm.mount: Deactivated successfully.
Feb 23 05:03:10 localhost systemd[1]: run-netns-qdhcp\x2d78832f6d\x2d2c11\x2d4726\x2da3d2\x2dda07de345e9e.mount: Deactivated successfully.
Feb 23 05:03:10 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1/.snap/37d3cec1-9f1f-45f8-814b-ceabcde60a0c/057d6a1d-e1c4-4e20-8cc4-db7f977616bc' to b'/volumes/_nogroup/77119be1-a395-4613-8428-4049b6a55ee4/333d48b4-7a24-4e5b-a111-6a13e4a60e5b'
Feb 23 05:03:10 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9df5708b-0dc0-46f9-a0f4-8a66460e11a4, sub_name:c2e06815-30f9-4d3c-bbe4-f2d82eac2683, vol_name:cephfs) < ""
Feb 23 05:03:10 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/77119be1-a395-4613-8428-4049b6a55ee4/.meta.tmp'
Feb 23 05:03:10 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/77119be1-a395-4613-8428-4049b6a55ee4/.meta.tmp' to config b'/volumes/_nogroup/77119be1-a395-4613-8428-4049b6a55ee4/.meta'
Feb 23 05:03:10 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.clone_index] untracking 5083fc8c-4872-41c8-8cfb-7e5d713adf7a
Feb 23 05:03:10 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1/.meta.tmp'
Feb 23 05:03:10 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1/.meta.tmp' to config b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1/.meta'
Feb 23 05:03:10 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/77119be1-a395-4613-8428-4049b6a55ee4/.meta.tmp'
Feb 23 05:03:10 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/77119be1-a395-4613-8428-4049b6a55ee4/.meta.tmp' to config b'/volumes/_nogroup/77119be1-a395-4613-8428-4049b6a55ee4/.meta'
Feb 23 05:03:10 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 77119be1-a395-4613-8428-4049b6a55ee4)
Feb 23 05:03:10 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "rw", "format": "json"}]: dispatch
Feb 23 05:03:10 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < ""
Feb 23 05:03:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 23 05:03:10 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 23 05:03:10 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice_bob with tenant b8a78bca43aa415e9b740fe00d08afee
Feb 23 05:03:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0)
Feb 23 05:03:10 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch
Feb 23 05:03:10 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < ""
Feb 23 05:03:10 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 23 05:03:10 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch
Feb 23 05:03:10 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch
Feb 23 05:03:10 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished
Feb 23 05:03:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:03:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v505: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 99 KiB/s wr, 91 op/s
Feb 23 05:03:11 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 23 05:03:11 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < ""
Feb 23 05:03:11 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 23 05:03:11 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 23 05:03:11 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb 23 05:03:11 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 23 05:03:11 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < ""
Feb 23 05:03:11 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 23 05:03:11 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < ""
Feb 23 05:03:11 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae
Feb 23 05:03:11 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 23 05:03:11 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < ""
Feb 23 05:03:11 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f6c95aca-bd80-4d48-87fc-8bc6f3370a1f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:03:11 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f6c95aca-bd80-4d48-87fc-8bc6f3370a1f, vol_name:cephfs) < ""
Feb 23 05:03:12 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f6c95aca-bd80-4d48-87fc-8bc6f3370a1f/.meta.tmp'
Feb 23 05:03:12 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f6c95aca-bd80-4d48-87fc-8bc6f3370a1f/.meta.tmp' to config b'/volumes/_nogroup/f6c95aca-bd80-4d48-87fc-8bc6f3370a1f/.meta'
Feb 23 05:03:12 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f6c95aca-bd80-4d48-87fc-8bc6f3370a1f, vol_name:cephfs) < ""
Feb 23 05:03:12 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f6c95aca-bd80-4d48-87fc-8bc6f3370a1f", "format": "json"}]: dispatch
Feb 23 05:03:12 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f6c95aca-bd80-4d48-87fc-8bc6f3370a1f, vol_name:cephfs) < ""
Feb 23 05:03:12 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath,
sub_name:f6c95aca-bd80-4d48-87fc-8bc6f3370a1f, vol_name:cephfs) < "" Feb 23 05:03:12 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:03:12 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:03:12 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:03:12 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 23 05:03:12 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e226 e226: 6 total, 6 up, 6 in Feb 23 05:03:12 localhost podman[241086]: time="2026-02-23T10:03:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 05:03:12 localhost podman[241086]: @ - - [23/Feb/2026:10:03:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 05:03:12 localhost podman[241086]: @ - - [23/Feb/2026:10:03:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17827 "" "Go-http-client/1.1" Feb 23 05:03:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 05:03:13 localhost podman[324811]: 2026-02-23 10:03:13.007689785 +0000 UTC m=+0.083637289 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 23 05:03:13 localhost podman[324811]: 2026-02-23 10:03:13.067183965 +0000 UTC m=+0.143131459 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, 
managed_by=edpm_ansible, org.label-schema.build-date=20260216, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller) Feb 23 05:03:13 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 05:03:13 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c2e06815-30f9-4d3c-bbe4-f2d82eac2683", "snap_name": "9df5708b-0dc0-46f9-a0f4-8a66460e11a4_90e57bd5-ed99-4502-a6ff-31eb33e53be8", "force": true, "format": "json"}]: dispatch Feb 23 05:03:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9df5708b-0dc0-46f9-a0f4-8a66460e11a4_90e57bd5-ed99-4502-a6ff-31eb33e53be8, sub_name:c2e06815-30f9-4d3c-bbe4-f2d82eac2683, vol_name:cephfs) < "" Feb 23 05:03:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c2e06815-30f9-4d3c-bbe4-f2d82eac2683/.meta.tmp' Feb 23 05:03:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c2e06815-30f9-4d3c-bbe4-f2d82eac2683/.meta.tmp' to config b'/volumes/_nogroup/c2e06815-30f9-4d3c-bbe4-f2d82eac2683/.meta' Feb 23 05:03:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9df5708b-0dc0-46f9-a0f4-8a66460e11a4_90e57bd5-ed99-4502-a6ff-31eb33e53be8, sub_name:c2e06815-30f9-4d3c-bbe4-f2d82eac2683, vol_name:cephfs) < "" Feb 23 05:03:13 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c2e06815-30f9-4d3c-bbe4-f2d82eac2683", "snap_name": "9df5708b-0dc0-46f9-a0f4-8a66460e11a4", "force": true, "format": "json"}]: dispatch Feb 23 05:03:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:9df5708b-0dc0-46f9-a0f4-8a66460e11a4, sub_name:c2e06815-30f9-4d3c-bbe4-f2d82eac2683, vol_name:cephfs) < "" Feb 23 05:03:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c2e06815-30f9-4d3c-bbe4-f2d82eac2683/.meta.tmp' Feb 23 05:03:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c2e06815-30f9-4d3c-bbe4-f2d82eac2683/.meta.tmp' to config b'/volumes/_nogroup/c2e06815-30f9-4d3c-bbe4-f2d82eac2683/.meta' Feb 23 05:03:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9df5708b-0dc0-46f9-a0f4-8a66460e11a4, sub_name:c2e06815-30f9-4d3c-bbe4-f2d82eac2683, vol_name:cephfs) < "" Feb 23 05:03:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v507: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 99 KiB/s wr, 92 op/s Feb 23 05:03:14 localhost nova_compute[280321]: 2026-02-23 10:03:14.487 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:03:14 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "r", "format": "json"}]: dispatch Feb 23 05:03:14 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:03:14 localhost ceph-mon[296755]: 
mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 23 05:03:14 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:03:14 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice_bob with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:03:14 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:03:14 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:14 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:03:15 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cbf0f0f5-8772-4c98-89a6-f866d6a9652c", "size": 
1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:15 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cbf0f0f5-8772-4c98-89a6-f866d6a9652c, vol_name:cephfs) < "" Feb 23 05:03:15 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cbf0f0f5-8772-4c98-89a6-f866d6a9652c/.meta.tmp' Feb 23 05:03:15 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cbf0f0f5-8772-4c98-89a6-f866d6a9652c/.meta.tmp' to config b'/volumes/_nogroup/cbf0f0f5-8772-4c98-89a6-f866d6a9652c/.meta' Feb 23 05:03:15 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cbf0f0f5-8772-4c98-89a6-f866d6a9652c, vol_name:cephfs) < "" Feb 23 05:03:15 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cbf0f0f5-8772-4c98-89a6-f866d6a9652c", "format": "json"}]: dispatch Feb 23 05:03:15 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cbf0f0f5-8772-4c98-89a6-f866d6a9652c, vol_name:cephfs) < "" Feb 23 05:03:15 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cbf0f0f5-8772-4c98-89a6-f866d6a9652c, vol_name:cephfs) < "" Feb 23 05:03:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v508: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 48 KiB/s wr, 5 op/s Feb 23 
05:03:15 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:03:15 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:15 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:15 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:03:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:03:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e227 e227: 6 total, 6 up, 6 in Feb 23 05:03:16 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": 
"c2e06815-30f9-4d3c-bbe4-f2d82eac2683", "format": "json"}]: dispatch Feb 23 05:03:16 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c2e06815-30f9-4d3c-bbe4-f2d82eac2683, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:16 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c2e06815-30f9-4d3c-bbe4-f2d82eac2683, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:16 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:03:16.340+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c2e06815-30f9-4d3c-bbe4-f2d82eac2683' of type subvolume Feb 23 05:03:16 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c2e06815-30f9-4d3c-bbe4-f2d82eac2683' of type subvolume Feb 23 05:03:16 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c2e06815-30f9-4d3c-bbe4-f2d82eac2683", "force": true, "format": "json"}]: dispatch Feb 23 05:03:16 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c2e06815-30f9-4d3c-bbe4-f2d82eac2683, vol_name:cephfs) < "" Feb 23 05:03:16 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c2e06815-30f9-4d3c-bbe4-f2d82eac2683'' moved to trashcan Feb 23 05:03:16 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:03:16 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:c2e06815-30f9-4d3c-bbe4-f2d82eac2683, vol_name:cephfs) < "" Feb 23 05:03:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v510: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 895 B/s rd, 123 KiB/s wr, 14 op/s Feb 23 05:03:17 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1730ed94-3e57-4cba-99cd-1fafcc2f97aa", "format": "json"}]: dispatch Feb 23 05:03:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1730ed94-3e57-4cba-99cd-1fafcc2f97aa, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v511: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 75 KiB/s wr, 10 op/s Feb 23 05:03:19 localhost nova_compute[280321]: 2026-02-23 10:03:19.489 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:03:19 localhost nova_compute[280321]: 2026-02-23 10:03:19.491 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:03:19 localhost nova_compute[280321]: 2026-02-23 10:03:19.491 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:03:19 localhost nova_compute[280321]: 2026-02-23 10:03:19.491 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:03:19 localhost nova_compute[280321]: 2026-02-23 10:03:19.537 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:03:19 localhost nova_compute[280321]: 2026-02-23 10:03:19.537 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:03:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1730ed94-3e57-4cba-99cd-1fafcc2f97aa, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:20 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1730ed94-3e57-4cba-99cd-1fafcc2f97aa", "format": "json"}]: dispatch Feb 23 05:03:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1730ed94-3e57-4cba-99cd-1fafcc2f97aa, vol_name:cephfs) < "" Feb 23 05:03:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1730ed94-3e57-4cba-99cd-1fafcc2f97aa, vol_name:cephfs) < "" Feb 23 05:03:20 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "77119be1-a395-4613-8428-4049b6a55ee4", "format": "json"}]: dispatch Feb 23 05:03:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:77119be1-a395-4613-8428-4049b6a55ee4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:03:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 05:03:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 05:03:21 localhost podman[324835]: 2026-02-23 10:03:21.26082254 +0000 UTC m=+0.114334648 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:03:21 localhost podman[324835]: 2026-02-23 10:03:21.264835553 +0000 UTC m=+0.118347691 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:03:21 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 05:03:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v512: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 3.6 KiB/s rd, 92 KiB/s wr, 14 op/s Feb 23 05:03:21 localhost podman[324853]: 2026-02-23 10:03:21.373465375 +0000 UTC m=+0.104234359 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 05:03:21 localhost snmpd[68131]: empty variable list in _query Feb 23 05:03:21 localhost podman[324853]: 2026-02-23 10:03:21.384322267 +0000 UTC m=+0.115091231 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 05:03:21 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 05:03:21 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e228 e228: 6 total, 6 up, 6 in Feb 23 05:03:22 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e229 e229: 6 total, 6 up, 6 in Feb 23 05:03:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v515: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 5.3 KiB/s rd, 107 KiB/s wr, 18 op/s Feb 23 05:03:24 localhost nova_compute[280321]: 2026-02-23 10:03:24.539 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:03:24 localhost nova_compute[280321]: 2026-02-23 10:03:24.540 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:03:24 localhost nova_compute[280321]: 2026-02-23 10:03:24.541 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:03:24 localhost nova_compute[280321]: 2026-02-23 10:03:24.541 280325 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:03:24 localhost nova_compute[280321]: 2026-02-23 10:03:24.574 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:03:24 localhost nova_compute[280321]: 2026-02-23 10:03:24.574 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:77119be1-a395-4613-8428-4049b6a55ee4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:25 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "77119be1-a395-4613-8428-4049b6a55ee4", "format": "json"}]: dispatch Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:77119be1-a395-4613-8428-4049b6a55ee4, vol_name:cephfs) < "" Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:77119be1-a395-4613-8428-4049b6a55ee4, vol_name:cephfs) < "" Feb 23 05:03:25 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cbf0f0f5-8772-4c98-89a6-f866d6a9652c", "format": "json"}]: dispatch Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cbf0f0f5-8772-4c98-89a6-f866d6a9652c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:25 
localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cbf0f0f5-8772-4c98-89a6-f866d6a9652c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:25 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:03:25.208+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cbf0f0f5-8772-4c98-89a6-f866d6a9652c' of type subvolume Feb 23 05:03:25 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cbf0f0f5-8772-4c98-89a6-f866d6a9652c' of type subvolume Feb 23 05:03:25 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cbf0f0f5-8772-4c98-89a6-f866d6a9652c", "force": true, "format": "json"}]: dispatch Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cbf0f0f5-8772-4c98-89a6-f866d6a9652c, vol_name:cephfs) < "" Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cbf0f0f5-8772-4c98-89a6-f866d6a9652c'' moved to trashcan Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cbf0f0f5-8772-4c98-89a6-f866d6a9652c, vol_name:cephfs) < "" Feb 23 05:03:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v516: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 4.5 KiB/s rd, 27 KiB/s wr, 8 op/s Feb 23 05:03:25 localhost ceph-mgr[285904]: 
log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d8b8020f-4ee6-469b-affa-207d2f0b4b2c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d8b8020f-4ee6-469b-affa-207d2f0b4b2c, vol_name:cephfs) < "" Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d8b8020f-4ee6-469b-affa-207d2f0b4b2c/.meta.tmp' Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d8b8020f-4ee6-469b-affa-207d2f0b4b2c/.meta.tmp' to config b'/volumes/_nogroup/d8b8020f-4ee6-469b-affa-207d2f0b4b2c/.meta' Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d8b8020f-4ee6-469b-affa-207d2f0b4b2c, vol_name:cephfs) < "" Feb 23 05:03:25 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d8b8020f-4ee6-469b-affa-207d2f0b4b2c", "format": "json"}]: dispatch Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d8b8020f-4ee6-469b-affa-207d2f0b4b2c, vol_name:cephfs) < "" Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d8b8020f-4ee6-469b-affa-207d2f0b4b2c, vol_name:cephfs) < "" Feb 23 
05:03:25 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1730ed94-3e57-4cba-99cd-1fafcc2f97aa", "format": "json"}]: dispatch Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1730ed94-3e57-4cba-99cd-1fafcc2f97aa, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1730ed94-3e57-4cba-99cd-1fafcc2f97aa, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:25 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1730ed94-3e57-4cba-99cd-1fafcc2f97aa", "force": true, "format": "json"}]: dispatch Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1730ed94-3e57-4cba-99cd-1fafcc2f97aa, vol_name:cephfs) < "" Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1730ed94-3e57-4cba-99cd-1fafcc2f97aa'' moved to trashcan Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1730ed94-3e57-4cba-99cd-1fafcc2f97aa, vol_name:cephfs) < "" Feb 23 05:03:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:03:25 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 23 05:03:25 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:03:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 23 05:03:25 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:25 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < 
"" Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:03:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:26 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:03:26 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:03:26 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:03:26 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 23 05:03:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e230 e230: 6 total, 6 up, 6 in Feb 23 05:03:26 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5fa488c6-5f64-4942-8c4d-23ef356045f5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:5fa488c6-5f64-4942-8c4d-23ef356045f5, vol_name:cephfs) < "" Feb 23 05:03:26 localhost nova_compute[280321]: 2026-02-23 10:03:26.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:03:26 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5fa488c6-5f64-4942-8c4d-23ef356045f5/.meta.tmp' Feb 23 05:03:26 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5fa488c6-5f64-4942-8c4d-23ef356045f5/.meta.tmp' to config b'/volumes/_nogroup/5fa488c6-5f64-4942-8c4d-23ef356045f5/.meta' Feb 23 05:03:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5fa488c6-5f64-4942-8c4d-23ef356045f5, vol_name:cephfs) < "" Feb 23 05:03:26 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5fa488c6-5f64-4942-8c4d-23ef356045f5", "format": "json"}]: dispatch Feb 23 05:03:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5fa488c6-5f64-4942-8c4d-23ef356045f5, vol_name:cephfs) < "" Feb 23 05:03:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 05:03:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5fa488c6-5f64-4942-8c4d-23ef356045f5, vol_name:cephfs) < "" Feb 23 05:03:26 localhost systemd[1]: tmp-crun.VwkxdF.mount: Deactivated successfully. Feb 23 05:03:27 localhost podman[324873]: 2026-02-23 10:03:26.998770262 +0000 UTC m=+0.073101277 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 05:03:27 localhost podman[324873]: 2026-02-23 10:03:27.036678881 +0000 UTC m=+0.111009836 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 05:03:27 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 05:03:27 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "20902d73-7434-438f-9b7e-d3fbd0c8aa20", "snap_name": "a9baadce-a22e-41ca-bf91-5533058fa60f_7d5a3e75-c497-45e3-a6f0-07253848d71b", "force": true, "format": "json"}]: dispatch Feb 23 05:03:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a9baadce-a22e-41ca-bf91-5533058fa60f_7d5a3e75-c497-45e3-a6f0-07253848d71b, sub_name:20902d73-7434-438f-9b7e-d3fbd0c8aa20, vol_name:cephfs) < "" Feb 23 05:03:27 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20/.meta.tmp' Feb 23 05:03:27 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20/.meta.tmp' to config b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20/.meta' Feb 23 05:03:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:a9baadce-a22e-41ca-bf91-5533058fa60f_7d5a3e75-c497-45e3-a6f0-07253848d71b, sub_name:20902d73-7434-438f-9b7e-d3fbd0c8aa20, vol_name:cephfs) < "" Feb 23 05:03:27 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "20902d73-7434-438f-9b7e-d3fbd0c8aa20", "snap_name": "a9baadce-a22e-41ca-bf91-5533058fa60f", "force": true, "format": "json"}]: dispatch Feb 23 05:03:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a9baadce-a22e-41ca-bf91-5533058fa60f, sub_name:20902d73-7434-438f-9b7e-d3fbd0c8aa20, vol_name:cephfs) < "" Feb 23 05:03:27 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20/.meta.tmp' Feb 23 05:03:27 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20/.meta.tmp' to config b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20/.meta' Feb 23 05:03:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a9baadce-a22e-41ca-bf91-5533058fa60f, sub_name:20902d73-7434-438f-9b7e-d3fbd0c8aa20, vol_name:cephfs) < "" Feb 23 05:03:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v518: 177 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 173 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 88 KiB/s wr, 88 op/s Feb 23 05:03:27 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": 
"a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:03:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:03:27 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 23 05:03:27 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:03:27 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice bob with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:03:27 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:03:27 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:27 localhost 
ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:03:27 localhost nova_compute[280321]: 2026-02-23 10:03:27.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:03:27 localhost nova_compute[280321]: 2026-02-23 10:03:27.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 05:03:27 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9f8ed63b-f013-41f1-9020-ae061837f09e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9f8ed63b-f013-41f1-9020-ae061837f09e, vol_name:cephfs) < "" Feb 23 05:03:28 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9f8ed63b-f013-41f1-9020-ae061837f09e/.meta.tmp' Feb 23 05:03:28 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9f8ed63b-f013-41f1-9020-ae061837f09e/.meta.tmp' to config b'/volumes/_nogroup/9f8ed63b-f013-41f1-9020-ae061837f09e/.meta' Feb 23 05:03:28 localhost 
ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9f8ed63b-f013-41f1-9020-ae061837f09e, vol_name:cephfs) < "" Feb 23 05:03:28 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9f8ed63b-f013-41f1-9020-ae061837f09e", "format": "json"}]: dispatch Feb 23 05:03:28 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9f8ed63b-f013-41f1-9020-ae061837f09e, vol_name:cephfs) < "" Feb 23 05:03:28 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9f8ed63b-f013-41f1-9020-ae061837f09e, vol_name:cephfs) < "" Feb 23 05:03:28 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f6c95aca-bd80-4d48-87fc-8bc6f3370a1f", "format": "json"}]: dispatch Feb 23 05:03:28 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f6c95aca-bd80-4d48-87fc-8bc6f3370a1f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:28 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f6c95aca-bd80-4d48-87fc-8bc6f3370a1f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:28 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:03:28.121+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f6c95aca-bd80-4d48-87fc-8bc6f3370a1f' of type subvolume Feb 23 05:03:28 localhost ceph-mgr[285904]: 
mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f6c95aca-bd80-4d48-87fc-8bc6f3370a1f' of type subvolume Feb 23 05:03:28 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f6c95aca-bd80-4d48-87fc-8bc6f3370a1f", "force": true, "format": "json"}]: dispatch Feb 23 05:03:28 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f6c95aca-bd80-4d48-87fc-8bc6f3370a1f, vol_name:cephfs) < "" Feb 23 05:03:28 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f6c95aca-bd80-4d48-87fc-8bc6f3370a1f'' moved to trashcan Feb 23 05:03:28 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:03:28 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f6c95aca-bd80-4d48-87fc-8bc6f3370a1f, vol_name:cephfs) < "" Feb 23 05:03:28 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:03:28 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:28 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": 
"client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:28 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:03:28 localhost nova_compute[280321]: 2026-02-23 10:03:28.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:03:28 localhost nova_compute[280321]: 2026-02-23 10:03:28.893 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 05:03:28 localhost nova_compute[280321]: 2026-02-23 10:03:28.893 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 05:03:28 localhost nova_compute[280321]: 2026-02-23 10:03:28.915 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 05:03:28 localhost nova_compute[280321]: 2026-02-23 10:03:28.916 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:03:28 localhost nova_compute[280321]: 2026-02-23 10:03:28.916 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:03:28 localhost nova_compute[280321]: 2026-02-23 10:03:28.935 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:03:28 localhost nova_compute[280321]: 2026-02-23 10:03:28.936 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:03:28 localhost nova_compute[280321]: 2026-02-23 10:03:28.936 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:03:28 localhost nova_compute[280321]: 2026-02-23 10:03:28.936 280325 DEBUG nova.compute.resource_tracker [None 
req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 05:03:28 localhost nova_compute[280321]: 2026-02-23 10:03:28.937 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 05:03:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v519: 177 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 173 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 70 KiB/s wr, 71 op/s Feb 23 05:03:29 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 05:03:29 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/26541569' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 05:03:29 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain.devices.0}] v 0) Feb 23 05:03:29 localhost nova_compute[280321]: 2026-02-23 10:03:29.371 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 05:03:29 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626465.localdomain}] v 0) Feb 23 05:03:29 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain.devices.0}] v 0) Feb 23 05:03:29 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain.devices.0}] v 0) Feb 23 05:03:29 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626466.localdomain}] v 0) Feb 23 05:03:29 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005626463.localdomain}] v 0) Feb 23 05:03:29 localhost nova_compute[280321]: 2026-02-23 10:03:29.576 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:03:29 localhost nova_compute[280321]: 2026-02-23 10:03:29.578 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:03:29 localhost 
nova_compute[280321]: 2026-02-23 10:03:29.578 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:03:29 localhost nova_compute[280321]: 2026-02-23 10:03:29.578 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:03:29 localhost nova_compute[280321]: 2026-02-23 10:03:29.617 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:03:29 localhost nova_compute[280321]: 2026-02-23 10:03:29.618 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:03:29 localhost nova_compute[280321]: 2026-02-23 10:03:29.629 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 05:03:29 localhost nova_compute[280321]: 2026-02-23 10:03:29.630 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=11585MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 05:03:29 localhost nova_compute[280321]: 2026-02-23 10:03:29.631 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:03:29 localhost nova_compute[280321]: 2026-02-23 10:03:29.631 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:03:29 localhost nova_compute[280321]: 2026-02-23 10:03:29.715 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 05:03:29 localhost nova_compute[280321]: 2026-02-23 10:03:29.716 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 05:03:29 localhost nova_compute[280321]: 2026-02-23 10:03:29.734 280325 DEBUG 
oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 05:03:30 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "5fa488c6-5f64-4942-8c4d-23ef356045f5", "new_size": 2147483648, "format": "json"}]: dispatch Feb 23 05:03:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:5fa488c6-5f64-4942-8c4d-23ef356045f5, vol_name:cephfs) < "" Feb 23 05:03:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:5fa488c6-5f64-4942-8c4d-23ef356045f5, vol_name:cephfs) < "" Feb 23 05:03:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 05:03:30 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3668932612' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 05:03:30 localhost nova_compute[280321]: 2026-02-23 10:03:30.179 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 05:03:30 localhost nova_compute[280321]: 2026-02-23 10:03:30.187 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 05:03:30 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "20902d73-7434-438f-9b7e-d3fbd0c8aa20", "format": "json"}]: dispatch Feb 23 05:03:30 localhost nova_compute[280321]: 2026-02-23 10:03:30.211 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 05:03:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:20902d73-7434-438f-9b7e-d3fbd0c8aa20, format:json, prefix:fs clone status, 
vol_name:cephfs) < "" Feb 23 05:03:30 localhost nova_compute[280321]: 2026-02-23 10:03:30.215 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 05:03:30 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:03:30 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:03:30 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:03:30 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:03:30 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:03:30 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:03:30 localhost nova_compute[280321]: 2026-02-23 10:03:30.216 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:03:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:20902d73-7434-438f-9b7e-d3fbd0c8aa20, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:30 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:03:30.277+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '20902d73-7434-438f-9b7e-d3fbd0c8aa20' of type subvolume Feb 23 05:03:30 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on 
subvolume '20902d73-7434-438f-9b7e-d3fbd0c8aa20' of type subvolume Feb 23 05:03:30 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "20902d73-7434-438f-9b7e-d3fbd0c8aa20", "force": true, "format": "json"}]: dispatch Feb 23 05:03:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:20902d73-7434-438f-9b7e-d3fbd0c8aa20, vol_name:cephfs) < "" Feb 23 05:03:30 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/20902d73-7434-438f-9b7e-d3fbd0c8aa20'' moved to trashcan Feb 23 05:03:30 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:03:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:20902d73-7434-438f-9b7e-d3fbd0c8aa20, vol_name:cephfs) < "" Feb 23 05:03:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 05:03:30 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 05:03:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 05:03:30 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 05:03:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 
05:03:30 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev 8280d222-4c83-4b07-93e8-d5db90e6442d (Updating node-proxy deployment (+3 -> 3)) Feb 23 05:03:30 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev 8280d222-4c83-4b07-93e8-d5db90e6442d (Updating node-proxy deployment (+3 -> 3)) Feb 23 05:03:30 localhost ceph-mgr[285904]: [progress INFO root] Completed event 8280d222-4c83-4b07-93e8-d5db90e6442d (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 23 05:03:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 05:03:30 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 05:03:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e231 e231: 6 total, 6 up, 6 in Feb 23 05:03:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:03:30 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 05:03:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 05:03:31 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 
05:03:31 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 23 05:03:31 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:03:31 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 23 05:03:31 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:31 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:31 localhost systemd-journald[48305]: Data hash table of /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal has a fill level at 75.0 (53723 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation. Feb 23 05:03:31 localhost systemd-journald[48305]: /run/log/journal/c0212a8b024a111cfc61293864f36c87/system.journal: Journal header limits reached or header out-of-date, rotating. 
Feb 23 05:03:31 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:31 localhost nova_compute[280321]: 2026-02-23 10:03:31.215 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:03:31 localhost nova_compute[280321]: 2026-02-23 10:03:31.216 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:03:31 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "564c85df-1944-453b-837f-3ce015c7f964", "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:564c85df-1944-453b-837f-3ce015c7f964, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Feb 23 05:03:31 localhost ceph-mon[296755]: 
from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 05:03:31 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:03:31 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:03:31 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:03:31 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:03:31 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:03:31 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 23 05:03:31 localhost nova_compute[280321]: 2026-02-23 10:03:31.241 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:03:31 localhost nova_compute[280321]: 2026-02-23 10:03:31.241 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:564c85df-1944-453b-837f-3ce015c7f964, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Feb 23 05:03:31 localhost rsyslogd[758]: imjournal: 
journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 23 05:03:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v521: 177 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 173 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 159 KiB/s wr, 93 op/s Feb 23 05:03:31 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "06a68059-7c6e-4e80-9728-910ba3a5d245", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:06a68059-7c6e-4e80-9728-910ba3a5d245, vol_name:cephfs) < "" Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/06a68059-7c6e-4e80-9728-910ba3a5d245/.meta.tmp' Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/06a68059-7c6e-4e80-9728-910ba3a5d245/.meta.tmp' to config b'/volumes/_nogroup/06a68059-7c6e-4e80-9728-910ba3a5d245/.meta' Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:06a68059-7c6e-4e80-9728-910ba3a5d245, vol_name:cephfs) < "" Feb 23 05:03:31 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "06a68059-7c6e-4e80-9728-910ba3a5d245", "format": "json"}]: dispatch Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO 
volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:06a68059-7c6e-4e80-9728-910ba3a5d245, vol_name:cephfs) < "" Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:06a68059-7c6e-4e80-9728-910ba3a5d245, vol_name:cephfs) < "" Feb 23 05:03:31 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9f8ed63b-f013-41f1-9020-ae061837f09e", "format": "json"}]: dispatch Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9f8ed63b-f013-41f1-9020-ae061837f09e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9f8ed63b-f013-41f1-9020-ae061837f09e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:31 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:03:31.732+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9f8ed63b-f013-41f1-9020-ae061837f09e' of type subvolume Feb 23 05:03:31 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9f8ed63b-f013-41f1-9020-ae061837f09e' of type subvolume Feb 23 05:03:31 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9f8ed63b-f013-41f1-9020-ae061837f09e", "force": true, "format": "json"}]: dispatch Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs 
subvolume rm, sub_name:9f8ed63b-f013-41f1-9020-ae061837f09e, vol_name:cephfs) < "" Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9f8ed63b-f013-41f1-9020-ae061837f09e'' moved to trashcan Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:03:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9f8ed63b-f013-41f1-9020-ae061837f09e, vol_name:cephfs) < "" Feb 23 05:03:31 localhost openstack_network_exporter[243519]: ERROR 10:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 05:03:31 localhost openstack_network_exporter[243519]: Feb 23 05:03:31 localhost openstack_network_exporter[243519]: ERROR 10:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 05:03:31 localhost openstack_network_exporter[243519]: Feb 23 05:03:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v522: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 160 KiB/s wr, 99 op/s Feb 23 05:03:34 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "564c85df-1944-453b-837f-3ce015c7f964", "force": true, "format": "json"}]: dispatch Feb 23 05:03:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:564c85df-1944-453b-837f-3ce015c7f964, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Feb 23 05:03:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:564c85df-1944-453b-837f-3ce015c7f964, prefix:fs subvolumegroup rm, 
vol_name:cephfs) < "" Feb 23 05:03:34 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5fa488c6-5f64-4942-8c4d-23ef356045f5", "format": "json"}]: dispatch Feb 23 05:03:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5fa488c6-5f64-4942-8c4d-23ef356045f5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5fa488c6-5f64-4942-8c4d-23ef356045f5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:34 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:03:34.474+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5fa488c6-5f64-4942-8c4d-23ef356045f5' of type subvolume Feb 23 05:03:34 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5fa488c6-5f64-4942-8c4d-23ef356045f5' of type subvolume Feb 23 05:03:34 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5fa488c6-5f64-4942-8c4d-23ef356045f5", "force": true, "format": "json"}]: dispatch Feb 23 05:03:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5fa488c6-5f64-4942-8c4d-23ef356045f5, vol_name:cephfs) < "" Feb 23 05:03:34 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5fa488c6-5f64-4942-8c4d-23ef356045f5'' moved to trashcan Feb 23 05:03:34 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] 
queuing job for volume 'cephfs' Feb 23 05:03:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5fa488c6-5f64-4942-8c4d-23ef356045f5, vol_name:cephfs) < "" Feb 23 05:03:34 localhost nova_compute[280321]: 2026-02-23 10:03:34.618 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:03:34 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "r", "format": "json"}]: dispatch Feb 23 05:03:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:03:34 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 23 05:03:34 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:03:34 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice bob with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:03:34 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:03:34 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:03:34 localhost nova_compute[280321]: 2026-02-23 10:03:34.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:03:34 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "06a68059-7c6e-4e80-9728-910ba3a5d245", "snap_name": "55825af0-7142-4a33-bd75-4716e6819cc0", "format": "json"}]: dispatch Feb 23 05:03:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:55825af0-7142-4a33-bd75-4716e6819cc0, sub_name:06a68059-7c6e-4e80-9728-910ba3a5d245, vol_name:cephfs) < "" Feb 23 05:03:34 localhost 
ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:55825af0-7142-4a33-bd75-4716e6819cc0, sub_name:06a68059-7c6e-4e80-9728-910ba3a5d245, vol_name:cephfs) < "" Feb 23 05:03:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:03:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:03:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:03:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:03:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:03:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:03:35 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8ca6198b-7076-4a80-86fe-294603711c42", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:35 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8ca6198b-7076-4a80-86fe-294603711c42, vol_name:cephfs) < "" Feb 23 05:03:35 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8ca6198b-7076-4a80-86fe-294603711c42/.meta.tmp' Feb 23 05:03:35 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8ca6198b-7076-4a80-86fe-294603711c42/.meta.tmp' to config b'/volumes/_nogroup/8ca6198b-7076-4a80-86fe-294603711c42/.meta' Feb 23 05:03:35 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8ca6198b-7076-4a80-86fe-294603711c42, vol_name:cephfs) < "" Feb 23 05:03:35 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8ca6198b-7076-4a80-86fe-294603711c42", "format": "json"}]: dispatch Feb 23 05:03:35 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8ca6198b-7076-4a80-86fe-294603711c42, vol_name:cephfs) < "" Feb 23 05:03:35 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8ca6198b-7076-4a80-86fe-294603711c42, vol_name:cephfs) < "" Feb 23 05:03:35 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:03:35 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:35 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:35 
localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:03:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v523: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 89 KiB/s wr, 38 op/s Feb 23 05:03:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:03:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v524: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 15 KiB/s rd, 142 KiB/s wr, 35 op/s Feb 23 05:03:37 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e232 e232: 6 total, 6 up, 6 in Feb 23 05:03:37 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "c79a6bcf-2cb2-4f61-9af5-279ad7119641", "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:37 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:c79a6bcf-2cb2-4f61-9af5-279ad7119641, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Feb 23 05:03:37 localhost sshd[325084]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:03:37 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:c79a6bcf-2cb2-4f61-9af5-279ad7119641, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Feb 23 05:03:38 localhost 
ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 23 05:03:38 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:38 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 23 05:03:38 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:03:38 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 23 05:03:38 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:03:38 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:38 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 23 05:03:38 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume 
evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:38 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:03:38 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:03:38 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:03:38 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:03:38 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 23 05:03:38 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:03:38 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:38 localhost ovn_metadata_agent[161837]: 2026-02-23 10:03:38.997 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, 
ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:03:38 localhost ovn_metadata_agent[161837]: 2026-02-23 10:03:38.998 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 05:03:39 localhost nova_compute[280321]: 2026-02-23 10:03:39.031 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:03:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v526: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 160 KiB/s wr, 39 op/s Feb 23 05:03:39 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "06a68059-7c6e-4e80-9728-910ba3a5d245", "snap_name": "55825af0-7142-4a33-bd75-4716e6819cc0_b42c9f23-4d47-4f3f-9852-b48bf40cff7c", "force": true, "format": "json"}]: dispatch Feb 23 05:03:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:55825af0-7142-4a33-bd75-4716e6819cc0_b42c9f23-4d47-4f3f-9852-b48bf40cff7c, sub_name:06a68059-7c6e-4e80-9728-910ba3a5d245, vol_name:cephfs) < "" Feb 23 05:03:39 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/06a68059-7c6e-4e80-9728-910ba3a5d245/.meta.tmp' Feb 23 05:03:39 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/06a68059-7c6e-4e80-9728-910ba3a5d245/.meta.tmp' to config b'/volumes/_nogroup/06a68059-7c6e-4e80-9728-910ba3a5d245/.meta' Feb 23 05:03:39 localhost ceph-mgr[285904]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:55825af0-7142-4a33-bd75-4716e6819cc0_b42c9f23-4d47-4f3f-9852-b48bf40cff7c, sub_name:06a68059-7c6e-4e80-9728-910ba3a5d245, vol_name:cephfs) < "" Feb 23 05:03:39 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "06a68059-7c6e-4e80-9728-910ba3a5d245", "snap_name": "55825af0-7142-4a33-bd75-4716e6819cc0", "force": true, "format": "json"}]: dispatch Feb 23 05:03:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:55825af0-7142-4a33-bd75-4716e6819cc0, sub_name:06a68059-7c6e-4e80-9728-910ba3a5d245, vol_name:cephfs) < "" Feb 23 05:03:39 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e233 e233: 6 total, 6 up, 6 in Feb 23 05:03:39 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/06a68059-7c6e-4e80-9728-910ba3a5d245/.meta.tmp' Feb 23 05:03:39 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/06a68059-7c6e-4e80-9728-910ba3a5d245/.meta.tmp' to config b'/volumes/_nogroup/06a68059-7c6e-4e80-9728-910ba3a5d245/.meta' Feb 23 05:03:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:55825af0-7142-4a33-bd75-4716e6819cc0, sub_name:06a68059-7c6e-4e80-9728-910ba3a5d245, vol_name:cephfs) < "" Feb 23 05:03:39 localhost nova_compute[280321]: 2026-02-23 10:03:39.622 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:03:39 localhost 
ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8ca6198b-7076-4a80-86fe-294603711c42", "format": "json"}]: dispatch Feb 23 05:03:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8ca6198b-7076-4a80-86fe-294603711c42, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8ca6198b-7076-4a80-86fe-294603711c42, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:39 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:03:39.807+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8ca6198b-7076-4a80-86fe-294603711c42' of type subvolume Feb 23 05:03:39 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8ca6198b-7076-4a80-86fe-294603711c42' of type subvolume Feb 23 05:03:39 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8ca6198b-7076-4a80-86fe-294603711c42", "force": true, "format": "json"}]: dispatch Feb 23 05:03:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8ca6198b-7076-4a80-86fe-294603711c42, vol_name:cephfs) < "" Feb 23 05:03:39 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8ca6198b-7076-4a80-86fe-294603711c42'' moved to trashcan Feb 23 05:03:39 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:03:39 
localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8ca6198b-7076-4a80-86fe-294603711c42, vol_name:cephfs) < "" Feb 23 05:03:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 05:03:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 05:03:40 localhost systemd[1]: tmp-crun.p5QlOF.mount: Deactivated successfully. Feb 23 05:03:40 localhost podman[325088]: 2026-02-23 10:03:40.032547417 +0000 UTC m=+0.091986054 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, com.redhat.component=ubi9-minimal-container, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 23 05:03:40 localhost systemd[1]: tmp-crun.Kx9Hkf.mount: Deactivated successfully. Feb 23 05:03:40 localhost podman[325087]: 2026-02-23 10:03:40.077214233 +0000 UTC m=+0.136817175 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 05:03:40 localhost podman[325087]: 2026-02-23 10:03:40.083825025 +0000 UTC 
m=+0.143427947 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 05:03:40 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 05:03:40 localhost podman[325088]: 2026-02-23 10:03:40.102986601 +0000 UTC m=+0.162425288 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1770267347, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, vcs-type=git, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 05:03:40 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 05:03:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e234 e234: 6 total, 6 up, 6 in Feb 23 05:03:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:03:40 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "c79a6bcf-2cb2-4f61-9af5-279ad7119641", "force": true, "format": "json"}]: dispatch Feb 23 05:03:40 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:c79a6bcf-2cb2-4f61-9af5-279ad7119641, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Feb 23 05:03:40 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:c79a6bcf-2cb2-4f61-9af5-279ad7119641, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Feb 23 05:03:40 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8115884b-219c-4ac0-b085-66e7920ae15b", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:40 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:8115884b-219c-4ac0-b085-66e7920ae15b, vol_name:cephfs) < "" Feb 23 05:03:41 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8115884b-219c-4ac0-b085-66e7920ae15b/.meta.tmp' Feb 23 05:03:41 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/8115884b-219c-4ac0-b085-66e7920ae15b/.meta.tmp' to config b'/volumes/_nogroup/8115884b-219c-4ac0-b085-66e7920ae15b/.meta' Feb 23 05:03:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:8115884b-219c-4ac0-b085-66e7920ae15b, vol_name:cephfs) < "" Feb 23 05:03:41 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8115884b-219c-4ac0-b085-66e7920ae15b", "format": "json"}]: dispatch Feb 23 05:03:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8115884b-219c-4ac0-b085-66e7920ae15b, vol_name:cephfs) < "" Feb 23 05:03:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8115884b-219c-4ac0-b085-66e7920ae15b, vol_name:cephfs) < "" Feb 23 05:03:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v529: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 191 KiB/s wr, 32 op/s Feb 23 05:03:41 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:03:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:03:41 
localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 23 05:03:41 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:03:41 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:03:41 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:03:41 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:03:42 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:03:42 localhost 
ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:42 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:42 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:03:42 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "06a68059-7c6e-4e80-9728-910ba3a5d245", "format": "json"}]: dispatch Feb 23 05:03:42 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:06a68059-7c6e-4e80-9728-910ba3a5d245, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:42 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:06a68059-7c6e-4e80-9728-910ba3a5d245, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:42 localhost 
ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:03:42.547+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '06a68059-7c6e-4e80-9728-910ba3a5d245' of type subvolume Feb 23 05:03:42 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '06a68059-7c6e-4e80-9728-910ba3a5d245' of type subvolume Feb 23 05:03:42 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "06a68059-7c6e-4e80-9728-910ba3a5d245", "force": true, "format": "json"}]: dispatch Feb 23 05:03:42 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:06a68059-7c6e-4e80-9728-910ba3a5d245, vol_name:cephfs) < "" Feb 23 05:03:42 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e235 e235: 6 total, 6 up, 6 in Feb 23 05:03:42 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/06a68059-7c6e-4e80-9728-910ba3a5d245'' moved to trashcan Feb 23 05:03:42 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:03:42 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:06a68059-7c6e-4e80-9728-910ba3a5d245, vol_name:cephfs) < "" Feb 23 05:03:42 localhost podman[241086]: time="2026-02-23T10:03:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 05:03:42 localhost podman[241086]: @ - - [23/Feb/2026:10:03:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" 
"Go-http-client/1.1" Feb 23 05:03:42 localhost podman[241086]: @ - - [23/Feb/2026:10:03:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17824 "" "Go-http-client/1.1" Feb 23 05:03:43 localhost ovn_controller[155966]: 2026-02-23T10:03:43Z|00392|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory Feb 23 05:03:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v531: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 87 KiB/s wr, 52 op/s Feb 23 05:03:43 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4dead0ae-6bab-4667-ac7c-3228f29e590a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:43 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4dead0ae-6bab-4667-ac7c-3228f29e590a, vol_name:cephfs) < "" Feb 23 05:03:43 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4dead0ae-6bab-4667-ac7c-3228f29e590a/.meta.tmp' Feb 23 05:03:43 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4dead0ae-6bab-4667-ac7c-3228f29e590a/.meta.tmp' to config b'/volumes/_nogroup/4dead0ae-6bab-4667-ac7c-3228f29e590a/.meta' Feb 23 05:03:43 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4dead0ae-6bab-4667-ac7c-3228f29e590a, vol_name:cephfs) < "" Feb 23 05:03:43 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4dead0ae-6bab-4667-ac7c-3228f29e590a", "format": "json"}]: dispatch Feb 23 05:03:43 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4dead0ae-6bab-4667-ac7c-3228f29e590a, vol_name:cephfs) < "" Feb 23 05:03:43 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4dead0ae-6bab-4667-ac7c-3228f29e590a, vol_name:cephfs) < "" Feb 23 05:03:43 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e236 e236: 6 total, 6 up, 6 in Feb 23 05:03:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 05:03:44 localhost systemd[1]: tmp-crun.H6jrYP.mount: Deactivated successfully. Feb 23 05:03:44 localhost podman[325129]: 2026-02-23 10:03:44.005497766 +0000 UTC m=+0.082093152 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 05:03:44 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "8115884b-219c-4ac0-b085-66e7920ae15b", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch Feb 23 05:03:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:8115884b-219c-4ac0-b085-66e7920ae15b, vol_name:cephfs) < "" Feb 23 05:03:44 localhost podman[325129]: 2026-02-23 10:03:44.111841788 +0000 UTC m=+0.188437144 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.43.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible) Feb 23 05:03:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:8115884b-219c-4ac0-b085-66e7920ae15b, vol_name:cephfs) < "" Feb 23 05:03:44 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 05:03:44 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "46bba3de-0cfe-49be-8cb3-692c9a38d3b6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:46bba3de-0cfe-49be-8cb3-692c9a38d3b6, vol_name:cephfs) < "" Feb 23 05:03:44 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/46bba3de-0cfe-49be-8cb3-692c9a38d3b6/.meta.tmp' Feb 23 05:03:44 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/46bba3de-0cfe-49be-8cb3-692c9a38d3b6/.meta.tmp' to config b'/volumes/_nogroup/46bba3de-0cfe-49be-8cb3-692c9a38d3b6/.meta' Feb 23 05:03:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:46bba3de-0cfe-49be-8cb3-692c9a38d3b6, vol_name:cephfs) < "" Feb 23 05:03:44 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "46bba3de-0cfe-49be-8cb3-692c9a38d3b6", "format": "json"}]: dispatch Feb 23 05:03:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:46bba3de-0cfe-49be-8cb3-692c9a38d3b6, vol_name:cephfs) < "" Feb 23 05:03:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:46bba3de-0cfe-49be-8cb3-692c9a38d3b6, vol_name:cephfs) < "" Feb 23 05:03:44 localhost nova_compute[280321]: 2026-02-23 10:03:44.623 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:03:44 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "format": "json"}]: dispatch Feb 23 05:03:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 23 05:03:45 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:03:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 23 05:03:45 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:03:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:45 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", 
"vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "format": "json"}]: dispatch Feb 23 05:03:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:45 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:03:45 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:03:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v533: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 85 KiB/s wr, 51 op/s Feb 23 05:03:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:03:45 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:03:45 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:03:45 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:03:45 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": 
"auth rm", "entity": "client.alice"}]': finished Feb 23 05:03:46 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 05:03:46 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/357402712' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 05:03:46 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 05:03:46 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/357402712' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 05:03:46 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4dead0ae-6bab-4667-ac7c-3228f29e590a", "format": "json"}]: dispatch Feb 23 05:03:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4dead0ae-6bab-4667-ac7c-3228f29e590a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4dead0ae-6bab-4667-ac7c-3228f29e590a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:46 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:03:46.843+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4dead0ae-6bab-4667-ac7c-3228f29e590a' of type subvolume Feb 23 05:03:46 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4dead0ae-6bab-4667-ac7c-3228f29e590a' of type subvolume Feb 23 05:03:46 
localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4dead0ae-6bab-4667-ac7c-3228f29e590a", "force": true, "format": "json"}]: dispatch Feb 23 05:03:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4dead0ae-6bab-4667-ac7c-3228f29e590a, vol_name:cephfs) < "" Feb 23 05:03:46 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4dead0ae-6bab-4667-ac7c-3228f29e590a'' moved to trashcan Feb 23 05:03:46 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:03:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4dead0ae-6bab-4667-ac7c-3228f29e590a, vol_name:cephfs) < "" Feb 23 05:03:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v534: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 177 KiB/s wr, 118 op/s Feb 23 05:03:47 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8115884b-219c-4ac0-b085-66e7920ae15b", "format": "json"}]: dispatch Feb 23 05:03:47 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8115884b-219c-4ac0-b085-66e7920ae15b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:47 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8115884b-219c-4ac0-b085-66e7920ae15b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:47 localhost 
ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:03:47.398+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8115884b-219c-4ac0-b085-66e7920ae15b' of type subvolume Feb 23 05:03:47 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8115884b-219c-4ac0-b085-66e7920ae15b' of type subvolume Feb 23 05:03:47 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8115884b-219c-4ac0-b085-66e7920ae15b", "force": true, "format": "json"}]: dispatch Feb 23 05:03:47 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8115884b-219c-4ac0-b085-66e7920ae15b, vol_name:cephfs) < "" Feb 23 05:03:47 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8115884b-219c-4ac0-b085-66e7920ae15b'' moved to trashcan Feb 23 05:03:47 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:03:47 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8115884b-219c-4ac0-b085-66e7920ae15b, vol_name:cephfs) < "" Feb 23 05:03:47 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e237 e237: 6 total, 6 up, 6 in Feb 23 05:03:47 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "46bba3de-0cfe-49be-8cb3-692c9a38d3b6", "format": "json"}]: dispatch Feb 23 05:03:47 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting 
_cmd_fs_clone_status(clone_name:46bba3de-0cfe-49be-8cb3-692c9a38d3b6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:47 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:46bba3de-0cfe-49be-8cb3-692c9a38d3b6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:47 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:03:47.599+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '46bba3de-0cfe-49be-8cb3-692c9a38d3b6' of type subvolume Feb 23 05:03:47 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '46bba3de-0cfe-49be-8cb3-692c9a38d3b6' of type subvolume Feb 23 05:03:47 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "46bba3de-0cfe-49be-8cb3-692c9a38d3b6", "force": true, "format": "json"}]: dispatch Feb 23 05:03:47 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:46bba3de-0cfe-49be-8cb3-692c9a38d3b6, vol_name:cephfs) < "" Feb 23 05:03:47 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/46bba3de-0cfe-49be-8cb3-692c9a38d3b6'' moved to trashcan Feb 23 05:03:47 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:03:47 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:46bba3de-0cfe-49be-8cb3-692c9a38d3b6, vol_name:cephfs) < "" Feb 23 05:03:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:03:48.000 161842 DEBUG 
ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 05:03:48 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "r", "format": "json"}]: dispatch Feb 23 05:03:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:03:48 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 23 05:03:48 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:03:48 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:03:48 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 
0) Feb 23 05:03:48 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:03:48.318 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:03:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:03:48.319 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:03:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:03:48.319 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:03:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:03:49 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 
05:03:49 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:49 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:49 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:03:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v536: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 107 KiB/s wr, 76 op/s Feb 23 05:03:49 localhost nova_compute[280321]: 2026-02-23 10:03:49.626 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:03:49 localhost nova_compute[280321]: 2026-02-23 10:03:49.628 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:03:49 localhost nova_compute[280321]: 2026-02-23 10:03:49.628 280325 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:03:49 localhost nova_compute[280321]: 2026-02-23 10:03:49.628 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:03:49 localhost nova_compute[280321]: 2026-02-23 10:03:49.659 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:03:49 localhost nova_compute[280321]: 2026-02-23 10:03:49.660 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:03:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b28e5355-81f5-4d56-b72e-21e83f187f85", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b28e5355-81f5-4d56-b72e-21e83f187f85, vol_name:cephfs) < "" Feb 23 05:03:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e238 e238: 6 total, 6 up, 6 in Feb 23 05:03:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b28e5355-81f5-4d56-b72e-21e83f187f85/.meta.tmp' Feb 23 05:03:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b28e5355-81f5-4d56-b72e-21e83f187f85/.meta.tmp' to config b'/volumes/_nogroup/b28e5355-81f5-4d56-b72e-21e83f187f85/.meta' Feb 
23 05:03:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b28e5355-81f5-4d56-b72e-21e83f187f85, vol_name:cephfs) < "" Feb 23 05:03:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b28e5355-81f5-4d56-b72e-21e83f187f85", "format": "json"}]: dispatch Feb 23 05:03:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b28e5355-81f5-4d56-b72e-21e83f187f85, vol_name:cephfs) < "" Feb 23 05:03:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b28e5355-81f5-4d56-b72e-21e83f187f85, vol_name:cephfs) < "" Feb 23 05:03:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:03:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7ad99e71-8300-4ae6-902e-ece59e9f7aad", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7ad99e71-8300-4ae6-902e-ece59e9f7aad, vol_name:cephfs) < "" Feb 23 05:03:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7ad99e71-8300-4ae6-902e-ece59e9f7aad/.meta.tmp' Feb 23 05:03:50 localhost 
ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7ad99e71-8300-4ae6-902e-ece59e9f7aad/.meta.tmp' to config b'/volumes/_nogroup/7ad99e71-8300-4ae6-902e-ece59e9f7aad/.meta' Feb 23 05:03:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7ad99e71-8300-4ae6-902e-ece59e9f7aad, vol_name:cephfs) < "" Feb 23 05:03:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7ad99e71-8300-4ae6-902e-ece59e9f7aad", "format": "json"}]: dispatch Feb 23 05:03:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7ad99e71-8300-4ae6-902e-ece59e9f7aad, vol_name:cephfs) < "" Feb 23 05:03:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7ad99e71-8300-4ae6-902e-ece59e9f7aad, vol_name:cephfs) < "" Feb 23 05:03:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "718d23c9-8a91-4933-afe0-96e2eaaade86", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:718d23c9-8a91-4933-afe0-96e2eaaade86, vol_name:cephfs) < "" Feb 23 05:03:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/718d23c9-8a91-4933-afe0-96e2eaaade86/.meta.tmp' Feb 23 05:03:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/718d23c9-8a91-4933-afe0-96e2eaaade86/.meta.tmp' to config b'/volumes/_nogroup/718d23c9-8a91-4933-afe0-96e2eaaade86/.meta' Feb 23 05:03:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:718d23c9-8a91-4933-afe0-96e2eaaade86, vol_name:cephfs) < "" Feb 23 05:03:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "718d23c9-8a91-4933-afe0-96e2eaaade86", "format": "json"}]: dispatch Feb 23 05:03:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:718d23c9-8a91-4933-afe0-96e2eaaade86, vol_name:cephfs) < "" Feb 23 05:03:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:718d23c9-8a91-4933-afe0-96e2eaaade86, vol_name:cephfs) < "" Feb 23 05:03:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v538: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 181 KiB/s wr, 74 op/s Feb 23 05:03:51 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "format": "json"}]: dispatch Feb 23 05:03:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, 
sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 05:03:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 05:03:52 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e239 e239: 6 total, 6 up, 6 in Feb 23 05:03:52 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 23 05:03:52 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:03:52 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 23 05:03:52 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:03:52 localhost podman[325156]: 2026-02-23 10:03:52.537692826 +0000 UTC m=+0.081371669 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_managed=true) Feb 23 05:03:52 localhost podman[325156]: 2026-02-23 10:03:52.548322502 +0000 UTC m=+0.092001355 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260216, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:03:52 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e240 e240: 6 total, 6 up, 6 in Feb 23 05:03:52 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. 
Feb 23 05:03:52 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:52 localhost podman[325155]: 2026-02-23 10:03:52.602377225 +0000 UTC m=+0.147001588 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 05:03:52 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "format": "json"}]: dispatch Feb 23 05:03:52 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:52 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:03:52 localhost podman[325155]: 2026-02-23 10:03:52.612751772 +0000 UTC m=+0.157376115 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:03:52 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:03:52 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:52 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. 
Feb 23 05:03:52 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c2d99fde-fd9e-46a6-885a-dbda121973f1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:52 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c2d99fde-fd9e-46a6-885a-dbda121973f1, vol_name:cephfs) < "" Feb 23 05:03:52 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c2d99fde-fd9e-46a6-885a-dbda121973f1/.meta.tmp' Feb 23 05:03:52 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c2d99fde-fd9e-46a6-885a-dbda121973f1/.meta.tmp' to config b'/volumes/_nogroup/c2d99fde-fd9e-46a6-885a-dbda121973f1/.meta' Feb 23 05:03:52 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c2d99fde-fd9e-46a6-885a-dbda121973f1, vol_name:cephfs) < "" Feb 23 05:03:52 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c2d99fde-fd9e-46a6-885a-dbda121973f1", "format": "json"}]: dispatch Feb 23 05:03:52 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c2d99fde-fd9e-46a6-885a-dbda121973f1, vol_name:cephfs) < "" Feb 23 05:03:52 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:c2d99fde-fd9e-46a6-885a-dbda121973f1, vol_name:cephfs) < "" Feb 23 05:03:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v541: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 121 KiB/s wr, 43 op/s Feb 23 05:03:53 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:03:53 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:03:53 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:03:53 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 23 05:03:54 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "718d23c9-8a91-4933-afe0-96e2eaaade86", "format": "json"}]: dispatch Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:718d23c9-8a91-4933-afe0-96e2eaaade86, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:718d23c9-8a91-4933-afe0-96e2eaaade86, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:54 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:03:54.118+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '718d23c9-8a91-4933-afe0-96e2eaaade86' of type subvolume Feb 23 05:03:54 localhost 
ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '718d23c9-8a91-4933-afe0-96e2eaaade86' of type subvolume Feb 23 05:03:54 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "718d23c9-8a91-4933-afe0-96e2eaaade86", "force": true, "format": "json"}]: dispatch Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:718d23c9-8a91-4933-afe0-96e2eaaade86, vol_name:cephfs) < "" Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/718d23c9-8a91-4933-afe0-96e2eaaade86'' moved to trashcan Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:718d23c9-8a91-4933-afe0-96e2eaaade86, vol_name:cephfs) < "" Feb 23 05:03:54 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 05:03:54 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3126635705' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 05:03:54 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 05:03:54 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3126635705' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 05:03:54 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b28e5355-81f5-4d56-b72e-21e83f187f85", "format": "json"}]: dispatch Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b28e5355-81f5-4d56-b72e-21e83f187f85, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b28e5355-81f5-4d56-b72e-21e83f187f85, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:54 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:03:54.473+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b28e5355-81f5-4d56-b72e-21e83f187f85' of type subvolume Feb 23 05:03:54 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b28e5355-81f5-4d56-b72e-21e83f187f85' of type subvolume Feb 23 05:03:54 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b28e5355-81f5-4d56-b72e-21e83f187f85", "force": true, "format": "json"}]: dispatch Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b28e5355-81f5-4d56-b72e-21e83f187f85, vol_name:cephfs) < "" Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 
'b'/volumes/_nogroup/b28e5355-81f5-4d56-b72e-21e83f187f85'' moved to trashcan Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b28e5355-81f5-4d56-b72e-21e83f187f85, vol_name:cephfs) < "" Feb 23 05:03:54 localhost nova_compute[280321]: 2026-02-23 10:03:54.661 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:03:54 localhost nova_compute[280321]: 2026-02-23 10:03:54.662 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:03:54 localhost nova_compute[280321]: 2026-02-23 10:03:54.662 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:03:54 localhost nova_compute[280321]: 2026-02-23 10:03:54.662 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:03:54 localhost nova_compute[280321]: 2026-02-23 10:03:54.702 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:03:54 localhost nova_compute[280321]: 2026-02-23 10:03:54.702 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:03:54 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7ad99e71-8300-4ae6-902e-ece59e9f7aad", "format": "json"}]: 
dispatch Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7ad99e71-8300-4ae6-902e-ece59e9f7aad, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7ad99e71-8300-4ae6-902e-ece59e9f7aad, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:54 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:03:54.779+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7ad99e71-8300-4ae6-902e-ece59e9f7aad' of type subvolume Feb 23 05:03:54 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7ad99e71-8300-4ae6-902e-ece59e9f7aad' of type subvolume Feb 23 05:03:54 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7ad99e71-8300-4ae6-902e-ece59e9f7aad", "force": true, "format": "json"}]: dispatch Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7ad99e71-8300-4ae6-902e-ece59e9f7aad, vol_name:cephfs) < "" Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7ad99e71-8300-4ae6-902e-ece59e9f7aad'' moved to trashcan Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7ad99e71-8300-4ae6-902e-ece59e9f7aad, vol_name:cephfs) < "" Feb 23 05:03:54 
localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:03:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:03:54 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 23 05:03:54 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:03:54 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice_bob with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:03:54 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:03:54 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:55 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:03:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v542: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 116 KiB/s wr, 41 op/s Feb 23 05:03:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:03:55 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:03:55 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:03:55 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], 
"format": "json"} : dispatch Feb 23 05:03:55 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:03:56 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c2d99fde-fd9e-46a6-885a-dbda121973f1", "snap_name": "bb21444e-b36b-4d43-bc43-885dd09ac729", "format": "json"}]: dispatch Feb 23 05:03:56 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bb21444e-b36b-4d43-bc43-885dd09ac729, sub_name:c2d99fde-fd9e-46a6-885a-dbda121973f1, vol_name:cephfs) < "" Feb 23 05:03:56 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bb21444e-b36b-4d43-bc43-885dd09ac729, sub_name:c2d99fde-fd9e-46a6-885a-dbda121973f1, vol_name:cephfs) < "" Feb 23 05:03:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v543: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 212 KiB/s wr, 69 op/s Feb 23 05:03:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 05:03:57 localhost podman[325193]: 2026-02-23 10:03:57.996972243 +0000 UTC m=+0.071577420 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 05:03:58 localhost podman[325193]: 2026-02-23 10:03:58.003225884 +0000 UTC m=+0.077831071 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 05:03:58 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 05:03:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "77119be1-a395-4613-8428-4049b6a55ee4", "format": "json"}]: dispatch Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:77119be1-a395-4613-8428-4049b6a55ee4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:77119be1-a395-4613-8428-4049b6a55ee4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:03:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "77119be1-a395-4613-8428-4049b6a55ee4", "force": true, "format": "json"}]: dispatch Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:77119be1-a395-4613-8428-4049b6a55ee4, vol_name:cephfs) < "" Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/77119be1-a395-4613-8428-4049b6a55ee4'' moved to trashcan Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:77119be1-a395-4613-8428-4049b6a55ee4, vol_name:cephfs) < "" Feb 23 05:03:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:58 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 23 05:03:58 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:03:58 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 23 05:03:58 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO 
volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:03:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d04b36e0-00cb-49ab-bf81-b75179f44a78", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d04b36e0-00cb-49ab-bf81-b75179f44a78, vol_name:cephfs) < "" Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d04b36e0-00cb-49ab-bf81-b75179f44a78/.meta.tmp' Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d04b36e0-00cb-49ab-bf81-b75179f44a78/.meta.tmp' to config b'/volumes/_nogroup/d04b36e0-00cb-49ab-bf81-b75179f44a78/.meta' Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, 
namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d04b36e0-00cb-49ab-bf81-b75179f44a78, vol_name:cephfs) < "" Feb 23 05:03:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d04b36e0-00cb-49ab-bf81-b75179f44a78", "format": "json"}]: dispatch Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d04b36e0-00cb-49ab-bf81-b75179f44a78, vol_name:cephfs) < "" Feb 23 05:03:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d04b36e0-00cb-49ab-bf81-b75179f44a78, vol_name:cephfs) < "" Feb 23 05:03:59 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:03:59 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:03:59 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:03:59 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 23 05:03:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v544: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 106 KiB/s wr, 55 op/s Feb 23 05:03:59 localhost nova_compute[280321]: 2026-02-23 10:03:59.703 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:03:59 
localhost nova_compute[280321]: 2026-02-23 10:03:59.705 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:03:59 localhost nova_compute[280321]: 2026-02-23 10:03:59.705 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:03:59 localhost nova_compute[280321]: 2026-02-23 10:03:59.705 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:03:59 localhost nova_compute[280321]: 2026-02-23 10:03:59.732 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:03:59 localhost nova_compute[280321]: 2026-02-23 10:03:59.733 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:03:59 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c2d99fde-fd9e-46a6-885a-dbda121973f1", "snap_name": "bb21444e-b36b-4d43-bc43-885dd09ac729_cedb6a9b-82fb-45a2-8d59-0fd8971a0ba8", "force": true, "format": "json"}]: dispatch Feb 23 05:03:59 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bb21444e-b36b-4d43-bc43-885dd09ac729_cedb6a9b-82fb-45a2-8d59-0fd8971a0ba8, sub_name:c2d99fde-fd9e-46a6-885a-dbda121973f1, vol_name:cephfs) < "" Feb 23 05:04:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/c2d99fde-fd9e-46a6-885a-dbda121973f1/.meta.tmp' Feb 23 05:04:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c2d99fde-fd9e-46a6-885a-dbda121973f1/.meta.tmp' to config b'/volumes/_nogroup/c2d99fde-fd9e-46a6-885a-dbda121973f1/.meta' Feb 23 05:04:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bb21444e-b36b-4d43-bc43-885dd09ac729_cedb6a9b-82fb-45a2-8d59-0fd8971a0ba8, sub_name:c2d99fde-fd9e-46a6-885a-dbda121973f1, vol_name:cephfs) < "" Feb 23 05:04:00 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c2d99fde-fd9e-46a6-885a-dbda121973f1", "snap_name": "bb21444e-b36b-4d43-bc43-885dd09ac729", "force": true, "format": "json"}]: dispatch Feb 23 05:04:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bb21444e-b36b-4d43-bc43-885dd09ac729, sub_name:c2d99fde-fd9e-46a6-885a-dbda121973f1, vol_name:cephfs) < "" Feb 23 05:04:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c2d99fde-fd9e-46a6-885a-dbda121973f1/.meta.tmp' Feb 23 05:04:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c2d99fde-fd9e-46a6-885a-dbda121973f1/.meta.tmp' to config b'/volumes/_nogroup/c2d99fde-fd9e-46a6-885a-dbda121973f1/.meta' Feb 23 05:04:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bb21444e-b36b-4d43-bc43-885dd09ac729, 
sub_name:c2d99fde-fd9e-46a6-885a-dbda121973f1, vol_name:cephfs) < "" Feb 23 05:04:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:04:01 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c65f4d3c-90b6-4215-892b-9d6eb1a375b1", "snap_name": "37d3cec1-9f1f-45f8-814b-ceabcde60a0c_457e4fec-6656-4e8e-9d9d-436c4ef214e1", "force": true, "format": "json"}]: dispatch Feb 23 05:04:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:37d3cec1-9f1f-45f8-814b-ceabcde60a0c_457e4fec-6656-4e8e-9d9d-436c4ef214e1, sub_name:c65f4d3c-90b6-4215-892b-9d6eb1a375b1, vol_name:cephfs) < "" Feb 23 05:04:01 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1/.meta.tmp' Feb 23 05:04:01 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1/.meta.tmp' to config b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1/.meta' Feb 23 05:04:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:37d3cec1-9f1f-45f8-814b-ceabcde60a0c_457e4fec-6656-4e8e-9d9d-436c4ef214e1, sub_name:c65f4d3c-90b6-4215-892b-9d6eb1a375b1, vol_name:cephfs) < "" Feb 23 05:04:01 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c65f4d3c-90b6-4215-892b-9d6eb1a375b1", "snap_name": 
"37d3cec1-9f1f-45f8-814b-ceabcde60a0c", "force": true, "format": "json"}]: dispatch Feb 23 05:04:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:37d3cec1-9f1f-45f8-814b-ceabcde60a0c, sub_name:c65f4d3c-90b6-4215-892b-9d6eb1a375b1, vol_name:cephfs) < "" Feb 23 05:04:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v545: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 148 KiB/s wr, 33 op/s Feb 23 05:04:01 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1/.meta.tmp' Feb 23 05:04:01 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1/.meta.tmp' to config b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1/.meta' Feb 23 05:04:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:37d3cec1-9f1f-45f8-814b-ceabcde60a0c, sub_name:c65f4d3c-90b6-4215-892b-9d6eb1a375b1, vol_name:cephfs) < "" Feb 23 05:04:01 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "r", "format": "json"}]: dispatch Feb 23 05:04:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 
05:04:01 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 23 05:04:01 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:04:01 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice_bob with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:04:01 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:04:01 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:04:01 localhost openstack_network_exporter[243519]: ERROR 10:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 05:04:01 localhost 
openstack_network_exporter[243519]: Feb 23 05:04:01 localhost openstack_network_exporter[243519]: ERROR 10:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 05:04:01 localhost openstack_network_exporter[243519]: Feb 23 05:04:02 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:04:02 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:02 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:02 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:04:02 localhost sshd[325217]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:04:02 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e241 e241: 6 total, 6 up, 6 in Feb 23 05:04:03 localhost 
ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c2d99fde-fd9e-46a6-885a-dbda121973f1", "format": "json"}]: dispatch Feb 23 05:04:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c2d99fde-fd9e-46a6-885a-dbda121973f1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c2d99fde-fd9e-46a6-885a-dbda121973f1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:03 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:03.299+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c2d99fde-fd9e-46a6-885a-dbda121973f1' of type subvolume Feb 23 05:04:03 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c2d99fde-fd9e-46a6-885a-dbda121973f1' of type subvolume Feb 23 05:04:03 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c2d99fde-fd9e-46a6-885a-dbda121973f1", "force": true, "format": "json"}]: dispatch Feb 23 05:04:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c2d99fde-fd9e-46a6-885a-dbda121973f1, vol_name:cephfs) < "" Feb 23 05:04:03 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c2d99fde-fd9e-46a6-885a-dbda121973f1'' moved to trashcan Feb 23 05:04:03 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:04:03 
localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c2d99fde-fd9e-46a6-885a-dbda121973f1, vol_name:cephfs) < "" Feb 23 05:04:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v547: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 132 KiB/s wr, 31 op/s Feb 23 05:04:03 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d04b36e0-00cb-49ab-bf81-b75179f44a78", "format": "json"}]: dispatch Feb 23 05:04:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d04b36e0-00cb-49ab-bf81-b75179f44a78, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d04b36e0-00cb-49ab-bf81-b75179f44a78, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:03 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:03.433+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd04b36e0-00cb-49ab-bf81-b75179f44a78' of type subvolume Feb 23 05:04:03 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd04b36e0-00cb-49ab-bf81-b75179f44a78' of type subvolume Feb 23 05:04:03 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d04b36e0-00cb-49ab-bf81-b75179f44a78", "force": true, "format": "json"}]: dispatch Feb 23 05:04:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d04b36e0-00cb-49ab-bf81-b75179f44a78, vol_name:cephfs) < "" Feb 23 05:04:03 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d04b36e0-00cb-49ab-bf81-b75179f44a78'' moved to trashcan Feb 23 05:04:03 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:04:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d04b36e0-00cb-49ab-bf81-b75179f44a78, vol_name:cephfs) < "" Feb 23 05:04:03 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e242 e242: 6 total, 6 up, 6 in Feb 23 05:04:03 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 05:04:03 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/881987299' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 05:04:03 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 05:04:03 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/881987299' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0. 
Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:04.154969) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49 Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841044155015, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2912, "num_deletes": 265, "total_data_size": 4248831, "memory_usage": 4326064, "flush_reason": "Manual Compaction"} Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841044170583, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 2771005, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29252, "largest_seqno": 32158, "table_properties": {"data_size": 2759023, "index_size": 7591, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 30038, "raw_average_key_size": 22, "raw_value_size": 2733367, "raw_average_value_size": 2077, "num_data_blocks": 318, "num_entries": 1316, "num_filter_entries": 1316, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840925, "oldest_key_time": 1771840925, "file_creation_time": 1771841044, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}} Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 15670 microseconds, and 9869 cpu microseconds. Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:04.170636) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 2771005 bytes OK Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:04.170663) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:04.172416) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:04.172467) EVENT_LOG_v1 {"time_micros": 1771841044172460, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:04.172485) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 4234866, prev total WAL file 
size 4234866, number of live WAL files 2. Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:04.173576) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. '7061786F73003132383032' seq:0, type:0; will stop at (end) Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(2706KB)], [48(16MB)] Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841044173616, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 20160514, "oldest_snapshot_seqno": -1} Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 14031 keys, 18878890 bytes, temperature: kUnknown Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841044259181, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 18878890, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18796514, "index_size": 46179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35141, "raw_key_size": 374462, "raw_average_key_size": 26, "raw_value_size": 
18555816, "raw_average_value_size": 1322, "num_data_blocks": 1751, "num_entries": 14031, "num_filter_entries": 14031, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771841044, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}} Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:04.259575) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 18878890 bytes Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:04.261471) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.3 rd, 220.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 16.6 +0.0 blob) out(18.0 +0.0 blob), read-write-amplify(14.1) write-amplify(6.8) OK, records in: 14584, records dropped: 553 output_compression: NoCompression Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:04.261498) EVENT_LOG_v1 {"time_micros": 1771841044261487, "job": 28, "event": "compaction_finished", "compaction_time_micros": 85697, "compaction_time_cpu_micros": 50582, "output_level": 6, "num_output_files": 1, "total_output_size": 18878890, "num_input_records": 14584, "num_output_records": 14031, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841044262023, "job": 28, "event": "table_file_deletion", "file_number": 50} Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841044264276, 
"job": 28, "event": "table_file_deletion", "file_number": 48} Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:04.173472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:04.264340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:04.264348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:04.264352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:04.264355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:04:04 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:04.264358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:04:04 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c65f4d3c-90b6-4215-892b-9d6eb1a375b1", "format": "json"}]: dispatch Feb 23 05:04:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c65f4d3c-90b6-4215-892b-9d6eb1a375b1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c65f4d3c-90b6-4215-892b-9d6eb1a375b1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:04 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:04.607+0000 7fc3ba4ad640 -1 
mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c65f4d3c-90b6-4215-892b-9d6eb1a375b1' of type subvolume Feb 23 05:04:04 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c65f4d3c-90b6-4215-892b-9d6eb1a375b1' of type subvolume Feb 23 05:04:04 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c65f4d3c-90b6-4215-892b-9d6eb1a375b1", "force": true, "format": "json"}]: dispatch Feb 23 05:04:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c65f4d3c-90b6-4215-892b-9d6eb1a375b1, vol_name:cephfs) < "" Feb 23 05:04:04 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c65f4d3c-90b6-4215-892b-9d6eb1a375b1'' moved to trashcan Feb 23 05:04:04 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:04:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c65f4d3c-90b6-4215-892b-9d6eb1a375b1, vol_name:cephfs) < "" Feb 23 05:04:04 localhost nova_compute[280321]: 2026-02-23 10:04:04.734 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:04:04 localhost nova_compute[280321]: 2026-02-23 10:04:04.735 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:04:04 localhost nova_compute[280321]: 2026-02-23 10:04:04.736 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:04:04 localhost nova_compute[280321]: 2026-02-23 10:04:04.736 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:04:04 localhost nova_compute[280321]: 2026-02-23 10:04:04.761 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:04 localhost nova_compute[280321]: 2026-02-23 10:04:04.762 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:04:04 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 23 05:04:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 23 05:04:04 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:04:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 23 05:04:04 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' 
cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:05 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_10:04:05 Feb 23 05:04:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 05:04:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap Feb 23 05:04:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['.mgr', 'manila_metadata', 'backups', 'volumes', 'images', 'manila_data', 'vms'] Feb 23 05:04:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes Feb 23 05:04:05 localhost 
ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:04:05 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:04:05 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:04:05 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:04:05 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Feb 23 05:04:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v549: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 895 B/s rd, 61 KiB/s wr, 7 op/s Feb 23 05:04:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 05:04:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:04:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 23 05:04:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:04:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32) Feb 23 05:04:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:04:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014872903289596129 of space, bias 1.0, pg target 0.296962302348936 quantized to 32 (current 32) Feb 23 05:04:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:04:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 23 05:04:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:04:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of 
space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 23 05:04:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:04:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.817181974688256e-06 of space, bias 1.0, pg target 0.000560619212962963 quantized to 32 (current 32) Feb 23 05:04:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:04:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0008995898183044854 of space, bias 4.0, pg target 0.7160734953703703 quantized to 16 (current 16) Feb 23 05:04:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 05:04:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 05:04:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 05:04:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 05:04:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 05:04:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 05:04:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 05:04:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:04:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 05:04:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 05:04:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 05:04:05 localhost 
ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5f193bc0-59f1-4f26-8a9d-5945320ef049", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5f193bc0-59f1-4f26-8a9d-5945320ef049, vol_name:cephfs) < "" Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5f193bc0-59f1-4f26-8a9d-5945320ef049/.meta.tmp' Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5f193bc0-59f1-4f26-8a9d-5945320ef049/.meta.tmp' to config b'/volumes/_nogroup/5f193bc0-59f1-4f26-8a9d-5945320ef049/.meta' Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5f193bc0-59f1-4f26-8a9d-5945320ef049, vol_name:cephfs) < "" Feb 23 05:04:05 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5f193bc0-59f1-4f26-8a9d-5945320ef049", "format": "json"}]: dispatch Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5f193bc0-59f1-4f26-8a9d-5945320ef049, vol_name:cephfs) < "" Feb 23 05:04:05 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5f193bc0-59f1-4f26-8a9d-5945320ef049, 
vol_name:cephfs) < "" Feb 23 05:04:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v550: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 183 KiB/s wr, 39 op/s Feb 23 05:04:07 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d8b8020f-4ee6-469b-affa-207d2f0b4b2c", "format": "json"}]: dispatch Feb 23 05:04:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d8b8020f-4ee6-469b-affa-207d2f0b4b2c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:08 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d8b8020f-4ee6-469b-affa-207d2f0b4b2c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:08 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:08.132+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd8b8020f-4ee6-469b-affa-207d2f0b4b2c' of type subvolume Feb 23 05:04:08 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd8b8020f-4ee6-469b-affa-207d2f0b4b2c' of type subvolume Feb 23 05:04:08 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d8b8020f-4ee6-469b-affa-207d2f0b4b2c", "force": true, "format": "json"}]: dispatch Feb 23 05:04:08 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d8b8020f-4ee6-469b-affa-207d2f0b4b2c, vol_name:cephfs) < "" Feb 23 05:04:08 localhost ceph-mgr[285904]: [volumes INFO 
volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d8b8020f-4ee6-469b-affa-207d2f0b4b2c'' moved to trashcan Feb 23 05:04:08 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:04:08 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d8b8020f-4ee6-469b-affa-207d2f0b4b2c, vol_name:cephfs) < "" Feb 23 05:04:08 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:04:08 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:04:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 23 05:04:08 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:04:08 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice bob with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:04:08 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:04:08 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:08 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:04:09 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5f193bc0-59f1-4f26-8a9d-5945320ef049", "snap_name": "1ff3f892-8978-436c-bef2-ded3032f9484", "format": "json"}]: dispatch Feb 23 05:04:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1ff3f892-8978-436c-bef2-ded3032f9484, sub_name:5f193bc0-59f1-4f26-8a9d-5945320ef049, vol_name:cephfs) < "" Feb 23 05:04:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1ff3f892-8978-436c-bef2-ded3032f9484, sub_name:5f193bc0-59f1-4f26-8a9d-5945320ef049, vol_name:cephfs) < "" Feb 23 05:04:09 localhost ceph-mon[296755]: from='mgr.27078 
172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:04:09 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:09 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:09 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:04:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v551: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 123 KiB/s wr, 34 op/s Feb 23 05:04:09 localhost nova_compute[280321]: 2026-02-23 10:04:09.763 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:04:09 localhost nova_compute[280321]: 2026-02-23 10:04:09.764 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms 
timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:04:09 localhost nova_compute[280321]: 2026-02-23 10:04:09.765 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:04:09 localhost nova_compute[280321]: 2026-02-23 10:04:09.765 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:04:09 localhost nova_compute[280321]: 2026-02-23 10:04:09.806 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:09 localhost nova_compute[280321]: 2026-02-23 10:04:09.806 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:04:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:04:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 05:04:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 05:04:10 localhost systemd[1]: tmp-crun.5TMfXk.mount: Deactivated successfully. 
Feb 23 05:04:11 localhost podman[325220]: 2026-02-23 10:04:10.999485052 +0000 UTC m=+0.076241003 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 05:04:11 localhost systemd[1]: tmp-crun.bEMGqC.mount: Deactivated successfully. 
Feb 23 05:04:11 localhost podman[325220]: 2026-02-23 10:04:11.01804071 +0000 UTC m=+0.094796671 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 05:04:11 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 05:04:11 localhost podman[325221]: 2026-02-23 10:04:11.019633168 +0000 UTC m=+0.090118767 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, vcs-type=git, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, build-date=2026-02-05T04:57:10Z, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 23 05:04:11 localhost podman[325221]: 2026-02-23 10:04:11.103916535 +0000 UTC m=+0.174402114 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., version=9.7, build-date=2026-02-05T04:57:10Z, release=1770267347, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-02-05T04:57:10Z) Feb 23 05:04:11 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 05:04:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v552: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 15 KiB/s rd, 160 KiB/s wr, 32 op/s Feb 23 05:04:11 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 23 05:04:11 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:11 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 23 05:04:11 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:04:11 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 23 05:04:11 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:04:11 localhost 
ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:11 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 23 05:04:11 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:11 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:04:11 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:04:11 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:12 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:04:12 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:04:12 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:04:12 localhost ceph-mon[296755]: from='mgr.27078 ' 
entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 23 05:04:12 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e243 e243: 6 total, 6 up, 6 in Feb 23 05:04:12 localhost podman[241086]: time="2026-02-23T10:04:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 05:04:12 localhost podman[241086]: @ - - [23/Feb/2026:10:04:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 05:04:12 localhost podman[241086]: @ - - [23/Feb/2026:10:04:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17814 "" "Go-http-client/1.1" Feb 23 05:04:12 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3993960e-fb7a-4270-9064-58b550d63afb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:04:12 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3993960e-fb7a-4270-9064-58b550d63afb, vol_name:cephfs) < "" Feb 23 05:04:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3993960e-fb7a-4270-9064-58b550d63afb/.meta.tmp' Feb 23 05:04:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3993960e-fb7a-4270-9064-58b550d63afb/.meta.tmp' to config b'/volumes/_nogroup/3993960e-fb7a-4270-9064-58b550d63afb/.meta' Feb 23 05:04:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, 
prefix:fs subvolume create, size:1073741824, sub_name:3993960e-fb7a-4270-9064-58b550d63afb, vol_name:cephfs) < "" Feb 23 05:04:13 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3993960e-fb7a-4270-9064-58b550d63afb", "format": "json"}]: dispatch Feb 23 05:04:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3993960e-fb7a-4270-9064-58b550d63afb, vol_name:cephfs) < "" Feb 23 05:04:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3993960e-fb7a-4270-9064-58b550d63afb, vol_name:cephfs) < "" Feb 23 05:04:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v554: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 144 KiB/s wr, 30 op/s Feb 23 05:04:13 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:04:13.885 263679 INFO neutron.agent.linux.ip_lib [None req-0667bc62-a1c6-4267-b3f6-9cf7d8a0150a - - - - - -] Device tapb94692e7-9a cannot be used as it has no MAC address#033[00m Feb 23 05:04:13 localhost nova_compute[280321]: 2026-02-23 10:04:13.937 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:13 localhost kernel: device tapb94692e7-9a entered promiscuous mode Feb 23 05:04:13 localhost ovn_controller[155966]: 2026-02-23T10:04:13Z|00393|binding|INFO|Claiming lport b94692e7-9abd-4b86-b5c4-a4d844bc2250 for this chassis. 
Feb 23 05:04:13 localhost ovn_controller[155966]: 2026-02-23T10:04:13Z|00394|binding|INFO|b94692e7-9abd-4b86-b5c4-a4d844bc2250: Claiming unknown Feb 23 05:04:13 localhost nova_compute[280321]: 2026-02-23 10:04:13.943 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:13 localhost NetworkManager[5987]: [1771841053.9470] manager: (tapb94692e7-9a): new Generic device (/org/freedesktop/NetworkManager/Devices/69) Feb 23 05:04:13 localhost systemd-udevd[325270]: Network interface NamePolicy= disabled on kernel command line. Feb 23 05:04:13 localhost ovn_metadata_agent[161837]: 2026-02-23 10:04:13.953 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-764605ff-a90f-42eb-9130-128ebe83d2d2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-764605ff-a90f-42eb-9130-128ebe83d2d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd27f607e075248a88797bba66166a911', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3fc3eca-5017-4bae-b890-8c49a10b69a8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b94692e7-9abd-4b86-b5c4-a4d844bc2250) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:04:13 localhost ovn_metadata_agent[161837]: 2026-02-23 10:04:13.954 161842 INFO neutron.agent.ovn.metadata.agent [-] Port b94692e7-9abd-4b86-b5c4-a4d844bc2250 in datapath 764605ff-a90f-42eb-9130-128ebe83d2d2 bound to our chassis#033[00m Feb 23 05:04:13 localhost ovn_metadata_agent[161837]: 2026-02-23 10:04:13.956 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 764605ff-a90f-42eb-9130-128ebe83d2d2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:04:13 localhost ovn_metadata_agent[161837]: 2026-02-23 10:04:13.957 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[d7baa03f-62c8-4a69-a082-9a78f112cbfb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:04:13 localhost journal[229268]: ethtool ioctl error on tapb94692e7-9a: No such device Feb 23 05:04:13 localhost journal[229268]: ethtool ioctl error on tapb94692e7-9a: No such device Feb 23 05:04:13 localhost ovn_controller[155966]: 2026-02-23T10:04:13Z|00395|binding|INFO|Setting lport b94692e7-9abd-4b86-b5c4-a4d844bc2250 ovn-installed in OVS Feb 23 05:04:13 localhost ovn_controller[155966]: 2026-02-23T10:04:13Z|00396|binding|INFO|Setting lport b94692e7-9abd-4b86-b5c4-a4d844bc2250 up in Southbound Feb 23 05:04:13 localhost journal[229268]: ethtool ioctl error on tapb94692e7-9a: No such device Feb 23 05:04:13 localhost nova_compute[280321]: 2026-02-23 10:04:13.983 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:13 localhost journal[229268]: ethtool ioctl error on tapb94692e7-9a: No such device Feb 23 05:04:13 localhost journal[229268]: ethtool ioctl error on tapb94692e7-9a: No such device Feb 23 
05:04:13 localhost journal[229268]: ethtool ioctl error on tapb94692e7-9a: No such device Feb 23 05:04:14 localhost journal[229268]: ethtool ioctl error on tapb94692e7-9a: No such device Feb 23 05:04:14 localhost journal[229268]: ethtool ioctl error on tapb94692e7-9a: No such device Feb 23 05:04:14 localhost nova_compute[280321]: 2026-02-23 10:04:14.009 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:14 localhost nova_compute[280321]: 2026-02-23 10:04:14.036 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:14 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "r", "format": "json"}]: dispatch Feb 23 05:04:14 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:04:14 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 23 05:04:14 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:04:14 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice bob with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:04:14 localhost 
ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:04:14 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:14 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:04:14 localhost nova_compute[280321]: 2026-02-23 10:04:14.808 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:14 localhost podman[325341]: Feb 23 05:04:14 localhost podman[325341]: 2026-02-23 10:04:14.857391811 +0000 UTC m=+0.084393602 container create 857ee739e878eea5db8b5701c0b05fede3b307c9e358470fadf0cdea876c2e6e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-764605ff-a90f-42eb-9130-128ebe83d2d2, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, 
org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:04:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 05:04:14 localhost systemd[1]: Started libpod-conmon-857ee739e878eea5db8b5701c0b05fede3b307c9e358470fadf0cdea876c2e6e.scope. Feb 23 05:04:14 localhost podman[325341]: 2026-02-23 10:04:14.817497822 +0000 UTC m=+0.044499653 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:04:14 localhost systemd[1]: Started libcrun container. Feb 23 05:04:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04bcbe2c7d4a8554918eb88502ecf50e9f43dc288e9a7b11592d66073ecfaa2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:04:14 localhost podman[325341]: 2026-02-23 10:04:14.950187509 +0000 UTC m=+0.177189310 container init 857ee739e878eea5db8b5701c0b05fede3b307c9e358470fadf0cdea876c2e6e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-764605ff-a90f-42eb-9130-128ebe83d2d2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0) Feb 23 05:04:14 localhost podman[325341]: 2026-02-23 10:04:14.959601268 +0000 UTC m=+0.186603049 container start 857ee739e878eea5db8b5701c0b05fede3b307c9e358470fadf0cdea876c2e6e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-764605ff-a90f-42eb-9130-128ebe83d2d2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:04:14 localhost dnsmasq[325371]: started, version 2.85 cachesize 150 Feb 23 05:04:14 localhost dnsmasq[325371]: DNS service limited to local subnets Feb 23 05:04:14 localhost dnsmasq[325371]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:04:14 localhost dnsmasq[325371]: warning: no upstream servers configured Feb 23 05:04:14 localhost dnsmasq-dhcp[325371]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 05:04:14 localhost dnsmasq[325371]: read /var/lib/neutron/dhcp/764605ff-a90f-42eb-9130-128ebe83d2d2/addn_hosts - 0 addresses Feb 23 05:04:14 localhost dnsmasq-dhcp[325371]: read /var/lib/neutron/dhcp/764605ff-a90f-42eb-9130-128ebe83d2d2/host Feb 23 05:04:14 localhost dnsmasq-dhcp[325371]: read /var/lib/neutron/dhcp/764605ff-a90f-42eb-9130-128ebe83d2d2/opts Feb 23 05:04:15 localhost podman[325356]: 2026-02-23 10:04:15.001827439 +0000 UTC m=+0.095296205 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:04:15 localhost podman[325356]: 2026-02-23 10:04:15.084803826 +0000 UTC m=+0.178272592 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 05:04:15 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. Feb 23 05:04:15 localhost sshd[325385]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:04:15 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:04:15.165 263679 INFO neutron.agent.dhcp.agent [None req-176e0092-713c-4b48-b5ca-fa4cbbb9bfad - - - - - -] DHCP configuration for ports {'ec296103-897a-41b6-aaac-25e849d8d40b'} is completed#033[00m Feb 23 05:04:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v555: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 141 KiB/s wr, 29 op/s Feb 23 05:04:15 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:04:15 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:15 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data 
namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:15 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:04:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:04:15 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "38f81eeb-b7e7-4646-8ff6-ff03f3551f4f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:04:15 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:38f81eeb-b7e7-4646-8ff6-ff03f3551f4f, vol_name:cephfs) < "" Feb 23 05:04:16 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/38f81eeb-b7e7-4646-8ff6-ff03f3551f4f/.meta.tmp' Feb 23 05:04:16 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/38f81eeb-b7e7-4646-8ff6-ff03f3551f4f/.meta.tmp' to config b'/volumes/_nogroup/38f81eeb-b7e7-4646-8ff6-ff03f3551f4f/.meta' Feb 23 05:04:16 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:38f81eeb-b7e7-4646-8ff6-ff03f3551f4f, vol_name:cephfs) < "" Feb 23 05:04:16 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "38f81eeb-b7e7-4646-8ff6-ff03f3551f4f", "format": "json"}]: dispatch Feb 23 05:04:16 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:38f81eeb-b7e7-4646-8ff6-ff03f3551f4f, vol_name:cephfs) < "" Feb 23 05:04:16 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:38f81eeb-b7e7-4646-8ff6-ff03f3551f4f, vol_name:cephfs) < "" Feb 23 05:04:16 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3993960e-fb7a-4270-9064-58b550d63afb", "format": "json"}]: dispatch Feb 23 05:04:16 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3993960e-fb7a-4270-9064-58b550d63afb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:16 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3993960e-fb7a-4270-9064-58b550d63afb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:16 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:16.222+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3993960e-fb7a-4270-9064-58b550d63afb' of type subvolume Feb 23 05:04:16 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3993960e-fb7a-4270-9064-58b550d63afb' of type subvolume Feb 23 05:04:16 
localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3993960e-fb7a-4270-9064-58b550d63afb", "force": true, "format": "json"}]: dispatch Feb 23 05:04:16 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3993960e-fb7a-4270-9064-58b550d63afb, vol_name:cephfs) < "" Feb 23 05:04:16 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3993960e-fb7a-4270-9064-58b550d63afb'' moved to trashcan Feb 23 05:04:16 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:04:16 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3993960e-fb7a-4270-9064-58b550d63afb, vol_name:cephfs) < "" Feb 23 05:04:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:04:16.522 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:04:16Z, description=, device_id=36af9716-26e9-460e-b1da-f804a5b513b2, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5b0cf7ab-8639-4c27-93ae-a214a56d9c8b, ip_allocation=immediate, mac_address=fa:16:3e:d5:29:fa, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:04:12Z, description=, dns_domain=, id=764605ff-a90f-42eb-9130-128ebe83d2d2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPITest-1949701538-network, port_security_enabled=True, 
project_id=d27f607e075248a88797bba66166a911, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17692, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3529, status=ACTIVE, subnets=['6e2efa60-0113-4466-aba0-dac61d1ac797'], tags=[], tenant_id=d27f607e075248a88797bba66166a911, updated_at=2026-02-23T10:04:13Z, vlan_transparent=None, network_id=764605ff-a90f-42eb-9130-128ebe83d2d2, port_security_enabled=False, project_id=d27f607e075248a88797bba66166a911, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3541, status=DOWN, tags=[], tenant_id=d27f607e075248a88797bba66166a911, updated_at=2026-02-23T10:04:16Z on network 764605ff-a90f-42eb-9130-128ebe83d2d2#033[00m Feb 23 05:04:16 localhost dnsmasq[325371]: read /var/lib/neutron/dhcp/764605ff-a90f-42eb-9130-128ebe83d2d2/addn_hosts - 1 addresses Feb 23 05:04:16 localhost dnsmasq-dhcp[325371]: read /var/lib/neutron/dhcp/764605ff-a90f-42eb-9130-128ebe83d2d2/host Feb 23 05:04:16 localhost dnsmasq-dhcp[325371]: read /var/lib/neutron/dhcp/764605ff-a90f-42eb-9130-128ebe83d2d2/opts Feb 23 05:04:16 localhost podman[325401]: 2026-02-23 10:04:16.734207882 +0000 UTC m=+0.062028648 container kill 857ee739e878eea5db8b5701c0b05fede3b307c9e358470fadf0cdea876c2e6e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-764605ff-a90f-42eb-9130-128ebe83d2d2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2) Feb 23 05:04:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:04:16.924 263679 INFO neutron.agent.dhcp.agent [None 
req-9f479042-b3a4-4404-b75a-858b4ba45c9a - - - - - -] DHCP configuration for ports {'5b0cf7ab-8639-4c27-93ae-a214a56d9c8b'} is completed#033[00m Feb 23 05:04:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v556: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 119 KiB/s wr, 9 op/s Feb 23 05:04:17 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 23 05:04:17 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3043800053' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 23 05:04:17 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 23 05:04:17 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3043800053' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 23 05:04:18 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:04:18.016 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:04:16Z, description=, device_id=36af9716-26e9-460e-b1da-f804a5b513b2, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5b0cf7ab-8639-4c27-93ae-a214a56d9c8b, ip_allocation=immediate, mac_address=fa:16:3e:d5:29:fa, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:04:12Z, description=, dns_domain=, id=764605ff-a90f-42eb-9130-128ebe83d2d2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, 
name=tempest-TelemetryAlarmingAPITest-1949701538-network, port_security_enabled=True, project_id=d27f607e075248a88797bba66166a911, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17692, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3529, status=ACTIVE, subnets=['6e2efa60-0113-4466-aba0-dac61d1ac797'], tags=[], tenant_id=d27f607e075248a88797bba66166a911, updated_at=2026-02-23T10:04:13Z, vlan_transparent=None, network_id=764605ff-a90f-42eb-9130-128ebe83d2d2, port_security_enabled=False, project_id=d27f607e075248a88797bba66166a911, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3541, status=DOWN, tags=[], tenant_id=d27f607e075248a88797bba66166a911, updated_at=2026-02-23T10:04:16Z on network 764605ff-a90f-42eb-9130-128ebe83d2d2#033[00m Feb 23 05:04:18 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 23 05:04:18 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:18 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 23 05:04:18 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:04:18 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": 
"client.alice bob"} v 0) Feb 23 05:04:18 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:04:18 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:18 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 23 05:04:18 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:18 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:04:18 localhost systemd[1]: tmp-crun.zLZzz7.mount: Deactivated successfully. 
Feb 23 05:04:18 localhost dnsmasq[325371]: read /var/lib/neutron/dhcp/764605ff-a90f-42eb-9130-128ebe83d2d2/addn_hosts - 1 addresses Feb 23 05:04:18 localhost dnsmasq-dhcp[325371]: read /var/lib/neutron/dhcp/764605ff-a90f-42eb-9130-128ebe83d2d2/host Feb 23 05:04:18 localhost dnsmasq-dhcp[325371]: read /var/lib/neutron/dhcp/764605ff-a90f-42eb-9130-128ebe83d2d2/opts Feb 23 05:04:18 localhost podman[325438]: 2026-02-23 10:04:18.242493772 +0000 UTC m=+0.064424941 container kill 857ee739e878eea5db8b5701c0b05fede3b307c9e358470fadf0cdea876c2e6e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-764605ff-a90f-42eb-9130-128ebe83d2d2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:04:18 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:04:18 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:18 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:04:18.491 263679 INFO neutron.agent.dhcp.agent [None req-3e9f84e8-c5c9-4105-9cdf-e022d54cabb9 - - - - - -] DHCP configuration for ports {'5b0cf7ab-8639-4c27-93ae-a214a56d9c8b'} is completed#033[00m Feb 23 05:04:18 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fd4d1e61-7ed1-400d-936e-3350eedf43bb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:04:18 
localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fd4d1e61-7ed1-400d-936e-3350eedf43bb, vol_name:cephfs) < "" Feb 23 05:04:18 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fd4d1e61-7ed1-400d-936e-3350eedf43bb/.meta.tmp' Feb 23 05:04:18 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fd4d1e61-7ed1-400d-936e-3350eedf43bb/.meta.tmp' to config b'/volumes/_nogroup/fd4d1e61-7ed1-400d-936e-3350eedf43bb/.meta' Feb 23 05:04:18 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fd4d1e61-7ed1-400d-936e-3350eedf43bb, vol_name:cephfs) < "" Feb 23 05:04:18 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fd4d1e61-7ed1-400d-936e-3350eedf43bb", "format": "json"}]: dispatch Feb 23 05:04:18 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fd4d1e61-7ed1-400d-936e-3350eedf43bb, vol_name:cephfs) < "" Feb 23 05:04:18 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fd4d1e61-7ed1-400d-936e-3350eedf43bb, vol_name:cephfs) < "" Feb 23 05:04:18 localhost ovn_controller[155966]: 2026-02-23T10:04:18Z|00397|ovn_bfd|INFO|Enabled BFD on interface ovn-5b0126-0 Feb 23 05:04:18 localhost ovn_controller[155966]: 2026-02-23T10:04:18Z|00398|ovn_bfd|INFO|Enabled BFD on interface ovn-585d62-0 Feb 23 05:04:18 localhost ovn_controller[155966]: 
2026-02-23T10:04:18Z|00399|ovn_bfd|INFO|Enabled BFD on interface ovn-b9c72d-0 Feb 23 05:04:18 localhost nova_compute[280321]: 2026-02-23 10:04:18.802 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:18 localhost nova_compute[280321]: 2026-02-23 10:04:18.804 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:18 localhost nova_compute[280321]: 2026-02-23 10:04:18.805 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:18 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:04:18 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:04:18 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:04:18 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 23 05:04:18 localhost nova_compute[280321]: 2026-02-23 10:04:18.833 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:18 localhost nova_compute[280321]: 2026-02-23 10:04:18.873 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:19 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs 
subvolume create", "vol_name": "cephfs", "sub_name": "f018492c-b089-4142-aefb-5bfccd1b64b1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:04:19 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f018492c-b089-4142-aefb-5bfccd1b64b1, vol_name:cephfs) < "" Feb 23 05:04:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v557: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 119 KiB/s wr, 9 op/s Feb 23 05:04:19 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f018492c-b089-4142-aefb-5bfccd1b64b1/.meta.tmp' Feb 23 05:04:19 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f018492c-b089-4142-aefb-5bfccd1b64b1/.meta.tmp' to config b'/volumes/_nogroup/f018492c-b089-4142-aefb-5bfccd1b64b1/.meta' Feb 23 05:04:19 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f018492c-b089-4142-aefb-5bfccd1b64b1, vol_name:cephfs) < "" Feb 23 05:04:19 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f018492c-b089-4142-aefb-5bfccd1b64b1", "format": "json"}]: dispatch Feb 23 05:04:19 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f018492c-b089-4142-aefb-5bfccd1b64b1, vol_name:cephfs) < "" Feb 23 05:04:19 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, 
prefix:fs subvolume getpath, sub_name:f018492c-b089-4142-aefb-5bfccd1b64b1, vol_name:cephfs) < "" Feb 23 05:04:19 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "99702c97-e987-48a2-abde-b72286105ebc", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:04:19 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:99702c97-e987-48a2-abde-b72286105ebc, vol_name:cephfs) < "" Feb 23 05:04:19 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/99702c97-e987-48a2-abde-b72286105ebc/.meta.tmp' Feb 23 05:04:19 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/99702c97-e987-48a2-abde-b72286105ebc/.meta.tmp' to config b'/volumes/_nogroup/99702c97-e987-48a2-abde-b72286105ebc/.meta' Feb 23 05:04:19 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:99702c97-e987-48a2-abde-b72286105ebc, vol_name:cephfs) < "" Feb 23 05:04:19 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "99702c97-e987-48a2-abde-b72286105ebc", "format": "json"}]: dispatch Feb 23 05:04:19 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:99702c97-e987-48a2-abde-b72286105ebc, vol_name:cephfs) < "" Feb 23 05:04:19 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:99702c97-e987-48a2-abde-b72286105ebc, vol_name:cephfs) < "" Feb 23 05:04:19 localhost nova_compute[280321]: 2026-02-23 10:04:19.743 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:19 localhost nova_compute[280321]: 2026-02-23 10:04:19.755 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:19 localhost nova_compute[280321]: 2026-02-23 10:04:19.811 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:04:20 localhost nova_compute[280321]: 2026-02-23 10:04:20.738 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:21 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:04:21 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:04:21 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 
23 05:04:21 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:04:21 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:04:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v558: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 134 KiB/s wr, 11 op/s Feb 23 05:04:21 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:04:21 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:21 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:04:21 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : 
dispatch Feb 23 05:04:21 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:21 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:21 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:04:21 localhost ovn_controller[155966]: 2026-02-23T10:04:21Z|00400|ovn_bfd|INFO|Disabled BFD on interface ovn-5b0126-0 Feb 23 05:04:21 localhost ovn_controller[155966]: 2026-02-23T10:04:21Z|00401|ovn_bfd|INFO|Disabled BFD on interface ovn-585d62-0 Feb 23 05:04:21 localhost ovn_controller[155966]: 2026-02-23T10:04:21Z|00402|ovn_bfd|INFO|Disabled BFD on interface ovn-b9c72d-0 Feb 23 05:04:21 localhost nova_compute[280321]: 2026-02-23 10:04:21.995 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:22 localhost nova_compute[280321]: 2026-02-23 10:04:22.005 280325 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:22 localhost systemd[1]: tmp-crun.GzjKc7.mount: Deactivated successfully. Feb 23 05:04:22 localhost dnsmasq[325371]: read /var/lib/neutron/dhcp/764605ff-a90f-42eb-9130-128ebe83d2d2/addn_hosts - 0 addresses Feb 23 05:04:22 localhost podman[325481]: 2026-02-23 10:04:22.079564896 +0000 UTC m=+0.068391172 container kill 857ee739e878eea5db8b5701c0b05fede3b307c9e358470fadf0cdea876c2e6e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-764605ff-a90f-42eb-9130-128ebe83d2d2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2) Feb 23 05:04:22 localhost dnsmasq-dhcp[325371]: read /var/lib/neutron/dhcp/764605ff-a90f-42eb-9130-128ebe83d2d2/host Feb 23 05:04:22 localhost dnsmasq-dhcp[325371]: read /var/lib/neutron/dhcp/764605ff-a90f-42eb-9130-128ebe83d2d2/opts Feb 23 05:04:22 localhost nova_compute[280321]: 2026-02-23 10:04:22.241 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:22 localhost ovn_controller[155966]: 2026-02-23T10:04:22Z|00403|binding|INFO|Releasing lport b94692e7-9abd-4b86-b5c4-a4d844bc2250 from this chassis (sb_readonly=0) Feb 23 05:04:22 localhost ovn_controller[155966]: 2026-02-23T10:04:22Z|00404|binding|INFO|Setting lport b94692e7-9abd-4b86-b5c4-a4d844bc2250 down in Southbound Feb 23 05:04:22 localhost kernel: device tapb94692e7-9a left promiscuous mode Feb 23 05:04:22 localhost ovn_metadata_agent[161837]: 2026-02-23 10:04:22.254 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched 
UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-764605ff-a90f-42eb-9130-128ebe83d2d2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-764605ff-a90f-42eb-9130-128ebe83d2d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd27f607e075248a88797bba66166a911', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3fc3eca-5017-4bae-b890-8c49a10b69a8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b94692e7-9abd-4b86-b5c4-a4d844bc2250) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:04:22 localhost ovn_metadata_agent[161837]: 2026-02-23 10:04:22.256 161842 INFO neutron.agent.ovn.metadata.agent [-] Port b94692e7-9abd-4b86-b5c4-a4d844bc2250 in datapath 764605ff-a90f-42eb-9130-128ebe83d2d2 unbound from our chassis#033[00m Feb 23 05:04:22 localhost ovn_metadata_agent[161837]: 2026-02-23 10:04:22.258 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 764605ff-a90f-42eb-9130-128ebe83d2d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 05:04:22 localhost 
ovn_metadata_agent[161837]: 2026-02-23 10:04:22.259 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[a19c76d8-cb63-4c14-b517-d24921eac230]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 23 05:04:22 localhost nova_compute[280321]: 2026-02-23 10:04:22.270 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:04:22 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fd4d1e61-7ed1-400d-936e-3350eedf43bb", "format": "json"}]: dispatch
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fd4d1e61-7ed1-400d-936e-3350eedf43bb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fd4d1e61-7ed1-400d-936e-3350eedf43bb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:04:22 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:22.359+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fd4d1e61-7ed1-400d-936e-3350eedf43bb' of type subvolume
Feb 23 05:04:22 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fd4d1e61-7ed1-400d-936e-3350eedf43bb' of type subvolume
Feb 23 05:04:22 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fd4d1e61-7ed1-400d-936e-3350eedf43bb", "force": true, "format": "json"}]: dispatch
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fd4d1e61-7ed1-400d-936e-3350eedf43bb, vol_name:cephfs) < ""
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fd4d1e61-7ed1-400d-936e-3350eedf43bb'' moved to trashcan
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fd4d1e61-7ed1-400d-936e-3350eedf43bb, vol_name:cephfs) < ""
Feb 23 05:04:22 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f018492c-b089-4142-aefb-5bfccd1b64b1", "format": "json"}]: dispatch
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f018492c-b089-4142-aefb-5bfccd1b64b1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f018492c-b089-4142-aefb-5bfccd1b64b1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:04:22 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:22.731+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f018492c-b089-4142-aefb-5bfccd1b64b1' of type subvolume
Feb 23 05:04:22 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f018492c-b089-4142-aefb-5bfccd1b64b1' of type subvolume
Feb 23 05:04:22 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f018492c-b089-4142-aefb-5bfccd1b64b1", "force": true, "format": "json"}]: dispatch
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f018492c-b089-4142-aefb-5bfccd1b64b1, vol_name:cephfs) < ""
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f018492c-b089-4142-aefb-5bfccd1b64b1'' moved to trashcan
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f018492c-b089-4142-aefb-5bfccd1b64b1, vol_name:cephfs) < ""
Feb 23 05:04:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.
Feb 23 05:04:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.
Feb 23 05:04:22 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "99702c97-e987-48a2-abde-b72286105ebc", "format": "json"}]: dispatch
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:99702c97-e987-48a2-abde-b72286105ebc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:99702c97-e987-48a2-abde-b72286105ebc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:04:22 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:22.939+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '99702c97-e987-48a2-abde-b72286105ebc' of type subvolume
Feb 23 05:04:22 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '99702c97-e987-48a2-abde-b72286105ebc' of type subvolume
Feb 23 05:04:22 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "99702c97-e987-48a2-abde-b72286105ebc", "force": true, "format": "json"}]: dispatch
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:99702c97-e987-48a2-abde-b72286105ebc, vol_name:cephfs) < ""
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/99702c97-e987-48a2-abde-b72286105ebc'' moved to trashcan
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 23 05:04:22 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:99702c97-e987-48a2-abde-b72286105ebc, vol_name:cephfs) < ""
Feb 23 05:04:23 localhost podman[325502]: 2026-02-23 10:04:23.016024007 +0000 UTC m=+0.089102856 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 23 05:04:23 localhost podman[325502]: 2026-02-23 10:04:23.023199766 +0000 UTC m=+0.096278615 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 05:04:23 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully.
Feb 23 05:04:23 localhost podman[325503]: 2026-02-23 10:04:23.111765825 +0000 UTC m=+0.182609335 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 23 05:04:23 localhost podman[325503]: 2026-02-23 10:04:23.149847499 +0000 UTC m=+0.220690959 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Feb 23 05:04:23 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully.
Feb 23 05:04:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v559: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 380 B/s rd, 124 KiB/s wr, 10 op/s
Feb 23 05:04:24 localhost dnsmasq[325371]: exiting on receipt of SIGTERM
Feb 23 05:04:24 localhost podman[325554]: 2026-02-23 10:04:24.02776026 +0000 UTC m=+0.058124899 container kill 857ee739e878eea5db8b5701c0b05fede3b307c9e358470fadf0cdea876c2e6e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-764605ff-a90f-42eb-9130-128ebe83d2d2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 23 05:04:24 localhost systemd[1]: libpod-857ee739e878eea5db8b5701c0b05fede3b307c9e358470fadf0cdea876c2e6e.scope: Deactivated successfully.
Feb 23 05:04:24 localhost podman[325567]: 2026-02-23 10:04:24.08136685 +0000 UTC m=+0.044493562 container died 857ee739e878eea5db8b5701c0b05fede3b307c9e358470fadf0cdea876c2e6e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-764605ff-a90f-42eb-9130-128ebe83d2d2, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 23 05:04:24 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-857ee739e878eea5db8b5701c0b05fede3b307c9e358470fadf0cdea876c2e6e-userdata-shm.mount: Deactivated successfully.
Feb 23 05:04:24 localhost systemd[1]: var-lib-containers-storage-overlay-d04bcbe2c7d4a8554918eb88502ecf50e9f43dc288e9a7b11592d66073ecfaa2-merged.mount: Deactivated successfully.
Feb 23 05:04:24 localhost podman[325567]: 2026-02-23 10:04:24.166517534 +0000 UTC m=+0.129644216 container cleanup 857ee739e878eea5db8b5701c0b05fede3b307c9e358470fadf0cdea876c2e6e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-764605ff-a90f-42eb-9130-128ebe83d2d2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 05:04:24 localhost systemd[1]: libpod-conmon-857ee739e878eea5db8b5701c0b05fede3b307c9e358470fadf0cdea876c2e6e.scope: Deactivated successfully.
Feb 23 05:04:24 localhost podman[325574]: 2026-02-23 10:04:24.1902474 +0000 UTC m=+0.142341895 container remove 857ee739e878eea5db8b5701c0b05fede3b307c9e358470fadf0cdea876c2e6e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-764605ff-a90f-42eb-9130-128ebe83d2d2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 23 05:04:24 localhost systemd[1]: run-netns-qdhcp\x2d764605ff\x2da90f\x2d42eb\x2d9130\x2d128ebe83d2d2.mount: Deactivated successfully.
Feb 23 05:04:24 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:04:24.223 263679 INFO neutron.agent.dhcp.agent [None req-6fcf3350-cc01-4a01-8231-0159c1059bf8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Feb 23 05:04:24 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:04:24.224 263679 INFO neutron.agent.dhcp.agent [None req-6fcf3350-cc01-4a01-8231-0159c1059bf8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Feb 23 05:04:24 localhost nova_compute[280321]: 2026-02-23 10:04:24.311 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:04:24 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "format": "json"}]: dispatch
Feb 23 05:04:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < ""
Feb 23 05:04:24 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 23 05:04:24 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 23 05:04:24 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb 23 05:04:24 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 23 05:04:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < ""
Feb 23 05:04:24 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "format": "json"}]: dispatch
Feb 23 05:04:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < ""
Feb 23 05:04:24 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae
Feb 23 05:04:24 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 23 05:04:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < ""
Feb 23 05:04:24 localhost nova_compute[280321]: 2026-02-23 10:04:24.853 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:04:24 localhost nova_compute[280321]: 2026-02-23 10:04:24.855 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:04:24 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 23 05:04:24 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 23 05:04:24 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 23 05:04:24 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 23 05:04:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v560: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 112 KiB/s wr, 9 op/s
Feb 23 05:04:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:04:26 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "38f81eeb-b7e7-4646-8ff6-ff03f3551f4f", "format": "json"}]: dispatch
Feb 23 05:04:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:38f81eeb-b7e7-4646-8ff6-ff03f3551f4f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:04:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:38f81eeb-b7e7-4646-8ff6-ff03f3551f4f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:04:26 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:26.036+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '38f81eeb-b7e7-4646-8ff6-ff03f3551f4f' of type subvolume
Feb 23 05:04:26 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '38f81eeb-b7e7-4646-8ff6-ff03f3551f4f' of type subvolume
Feb 23 05:04:26 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "38f81eeb-b7e7-4646-8ff6-ff03f3551f4f", "force": true, "format": "json"}]: dispatch
Feb 23 05:04:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:38f81eeb-b7e7-4646-8ff6-ff03f3551f4f, vol_name:cephfs) < ""
Feb 23 05:04:26 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/38f81eeb-b7e7-4646-8ff6-ff03f3551f4f'' moved to trashcan
Feb 23 05:04:26 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 23 05:04:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:38f81eeb-b7e7-4646-8ff6-ff03f3551f4f, vol_name:cephfs) < ""
Feb 23 05:04:26 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "191b0228-9167-42d0-943c-9ccefc03c217", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:04:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:191b0228-9167-42d0-943c-9ccefc03c217, vol_name:cephfs) < ""
Feb 23 05:04:26 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/191b0228-9167-42d0-943c-9ccefc03c217/.meta.tmp'
Feb 23 05:04:26 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/191b0228-9167-42d0-943c-9ccefc03c217/.meta.tmp' to config b'/volumes/_nogroup/191b0228-9167-42d0-943c-9ccefc03c217/.meta'
Feb 23 05:04:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:191b0228-9167-42d0-943c-9ccefc03c217, vol_name:cephfs) < ""
Feb 23 05:04:26 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "191b0228-9167-42d0-943c-9ccefc03c217", "format": "json"}]: dispatch
Feb 23 05:04:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:191b0228-9167-42d0-943c-9ccefc03c217, vol_name:cephfs) < ""
Feb 23 05:04:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:191b0228-9167-42d0-943c-9ccefc03c217, vol_name:cephfs) < ""
Feb 23 05:04:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v561: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 195 KiB/s wr, 15 op/s
Feb 23 05:04:27 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "r", "format": "json"}]: dispatch
Feb 23 05:04:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < ""
Feb 23 05:04:27 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 23 05:04:27 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 23 05:04:27 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice with tenant b8a78bca43aa415e9b740fe00d08afee
Feb 23 05:04:27 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0)
Feb 23 05:04:27 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch
Feb 23 05:04:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < ""
Feb 23 05:04:27 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 23 05:04:27 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch
Feb 23 05:04:27 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch
Feb 23 05:04:27 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished
Feb 23 05:04:28 localhost nova_compute[280321]: 2026-02-23 10:04:28.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:04:28 localhost nova_compute[280321]: 2026-02-23 10:04:28.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 23 05:04:28 localhost nova_compute[280321]: 2026-02-23 10:04:28.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 23 05:04:28 localhost nova_compute[280321]: 2026-02-23 10:04:28.906 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 23 05:04:28 localhost nova_compute[280321]: 2026-02-23 10:04:28.906 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:04:28 localhost nova_compute[280321]: 2026-02-23 10:04:28.906 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:04:28 localhost nova_compute[280321]: 2026-02-23 10:04:28.906 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 23 05:04:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.
Feb 23 05:04:29 localhost podman[325599]: 2026-02-23 10:04:29.010364547 +0000 UTC m=+0.084216857 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 23 05:04:29 localhost podman[325599]: 2026-02-23 10:04:29.024024775 +0000 UTC m=+0.097877065 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 23 05:04:29 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully.
Feb 23 05:04:29 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "13a5ec60-9f6a-43c3-975b-c3ae8bf92865", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:04:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:13a5ec60-9f6a-43c3-975b-c3ae8bf92865, vol_name:cephfs) < ""
Feb 23 05:04:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v562: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 131 KiB/s wr, 10 op/s
Feb 23 05:04:29 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/13a5ec60-9f6a-43c3-975b-c3ae8bf92865/.meta.tmp'
Feb 23 05:04:29 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/13a5ec60-9f6a-43c3-975b-c3ae8bf92865/.meta.tmp' to config b'/volumes/_nogroup/13a5ec60-9f6a-43c3-975b-c3ae8bf92865/.meta' Feb 23 05:04:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:13a5ec60-9f6a-43c3-975b-c3ae8bf92865, vol_name:cephfs) < ""
Feb 23 05:04:29 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs",
"sub_name": "13a5ec60-9f6a-43c3-975b-c3ae8bf92865", "format": "json"}]: dispatch Feb 23 05:04:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:13a5ec60-9f6a-43c3-975b-c3ae8bf92865, vol_name:cephfs) < "" Feb 23 05:04:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:13a5ec60-9f6a-43c3-975b-c3ae8bf92865, vol_name:cephfs) < "" Feb 23 05:04:29 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "191b0228-9167-42d0-943c-9ccefc03c217", "format": "json"}]: dispatch Feb 23 05:04:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:191b0228-9167-42d0-943c-9ccefc03c217, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:191b0228-9167-42d0-943c-9ccefc03c217, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:29 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:29.514+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '191b0228-9167-42d0-943c-9ccefc03c217' of type subvolume Feb 23 05:04:29 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '191b0228-9167-42d0-943c-9ccefc03c217' of type subvolume Feb 23 05:04:29 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "191b0228-9167-42d0-943c-9ccefc03c217", "force": true, "format": "json"}]: dispatch 
Feb 23 05:04:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:191b0228-9167-42d0-943c-9ccefc03c217, vol_name:cephfs) < "" Feb 23 05:04:29 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/191b0228-9167-42d0-943c-9ccefc03c217'' moved to trashcan Feb 23 05:04:29 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:04:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:191b0228-9167-42d0-943c-9ccefc03c217, vol_name:cephfs) < "" Feb 23 05:04:29 localhost nova_compute[280321]: 2026-02-23 10:04:29.856 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:04:29 localhost nova_compute[280321]: 2026-02-23 10:04:29.858 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:04:29 localhost nova_compute[280321]: 2026-02-23 10:04:29.858 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:04:29 localhost nova_compute[280321]: 2026-02-23 10:04:29.858 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:04:29 localhost nova_compute[280321]: 2026-02-23 10:04:29.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:04:29 localhost 
nova_compute[280321]: 2026-02-23 10:04:29.895 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:29 localhost nova_compute[280321]: 2026-02-23 10:04:29.896 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:04:29 localhost nova_compute[280321]: 2026-02-23 10:04:29.898 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:29 localhost nova_compute[280321]: 2026-02-23 10:04:29.911 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:04:29 localhost nova_compute[280321]: 2026-02-23 10:04:29.911 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:04:29 localhost nova_compute[280321]: 2026-02-23 10:04:29.911 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:04:29 localhost nova_compute[280321]: 2026-02-23 10:04:29.912 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain 
(node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 05:04:29 localhost nova_compute[280321]: 2026-02-23 10:04:29.912 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 05:04:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 05:04:30 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1163012827' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 05:04:30 localhost nova_compute[280321]: 2026-02-23 10:04:30.365 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 05:04:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:04:30 localhost nova_compute[280321]: 2026-02-23 10:04:30.583 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 05:04:30 localhost nova_compute[280321]: 2026-02-23 10:04:30.585 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=11583MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 05:04:30 localhost nova_compute[280321]: 2026-02-23 10:04:30.585 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:04:30 localhost nova_compute[280321]: 2026-02-23 10:04:30.586 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:04:30 localhost nova_compute[280321]: 2026-02-23 10:04:30.821 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 05:04:30 localhost nova_compute[280321]: 2026-02-23 10:04:30.822 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 05:04:30 localhost nova_compute[280321]: 2026-02-23 10:04:30.949 280325 DEBUG 
nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Refreshing inventories for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.092 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Updating ProviderTree inventory for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.093 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Updating inventory in ProviderTree for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 23 05:04:31 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", 
"format": "json"}]: dispatch Feb 23 05:04:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.115 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Refreshing aggregate associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.155 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Refreshing trait associations for resource provider 9df77b74-d7d6-46a8-93cb-cadec85557a4, traits: HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SHA,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NODE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_ABM,HW_CPU_X86_AESNI,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,COMPUT
E_STORAGE_BUS_USB,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.171 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 05:04:31 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 23 05:04:31 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:04:31 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 23 05:04:31 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:04:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:31 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", 
"sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "format": "json"}]: dispatch Feb 23 05:04:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:31 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:04:31 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:04:31 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v563: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 132 KiB/s wr, 12 op/s Feb 23 05:04:31 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 05:04:31 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1919430634' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.609 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.615 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.641 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.644 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.645 280325 DEBUG 
oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:04:31 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 05:04:31 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 05:04:31 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 05:04:31 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 05:04:31 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 05:04:31 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev ed75c749-0f70-4de7-af2b-2c25ca5c4463 (Updating node-proxy deployment (+3 -> 3)) Feb 23 05:04:31 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev ed75c749-0f70-4de7-af2b-2c25ca5c4463 (Updating node-proxy deployment (+3 -> 3)) Feb 23 05:04:31 localhost ceph-mgr[285904]: [progress INFO root] Completed event ed75c749-0f70-4de7-af2b-2c25ca5c4463 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 23 05:04:31 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 05:04:31 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' 
entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.893 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.893 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.907 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.907 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:04:31 localhost nova_compute[280321]: 2026-02-23 10:04:31.907 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 23 05:04:31 localhost openstack_network_exporter[243519]: ERROR 10:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 05:04:31 localhost openstack_network_exporter[243519]: Feb 23 05:04:31 localhost openstack_network_exporter[243519]: ERROR 10:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 05:04:31 localhost openstack_network_exporter[243519]: Feb 23 05:04:32 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:04:32 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:04:32 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:04:32 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 23 05:04:32 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 05:04:32 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0. 
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:32.588206) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841072588247, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 845, "num_deletes": 261, "total_data_size": 1115655, "memory_usage": 1136296, "flush_reason": "Manual Compaction"}
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841072595010, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 733430, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32163, "largest_seqno": 33003, "table_properties": {"data_size": 729463, "index_size": 1566, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10567, "raw_average_key_size": 20, "raw_value_size": 720714, "raw_average_value_size": 1388, "num_data_blocks": 68, "num_entries": 519, "num_filter_entries": 519, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771841044, "oldest_key_time": 1771841044, "file_creation_time": 1771841072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 6848 microseconds, and 2881 cpu microseconds.
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:32.595055) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 733430 bytes OK
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:32.595077) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:32.597232) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:32.597252) EVENT_LOG_v1 {"time_micros": 1771841072597245, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:32.597271) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1110931, prev total WAL file size 1111255, number of live WAL files 2.
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:32.598744) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323730' seq:72057594037927935, type:22 .. '6C6F676D0034353235' seq:0, type:0; will stop at (end)
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(716KB)], [51(18MB)]
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841072598794, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 19612320, "oldest_snapshot_seqno": -1}
Feb 23 05:04:32 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "13a5ec60-9f6a-43c3-975b-c3ae8bf92865", "format": "json"}]: dispatch
Feb 23 05:04:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:13a5ec60-9f6a-43c3-975b-c3ae8bf92865, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:04:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:13a5ec60-9f6a-43c3-975b-c3ae8bf92865, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:04:32 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:32.637+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '13a5ec60-9f6a-43c3-975b-c3ae8bf92865' of type subvolume
Feb 23 05:04:32 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '13a5ec60-9f6a-43c3-975b-c3ae8bf92865' of type subvolume
Feb 23 05:04:32 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "13a5ec60-9f6a-43c3-975b-c3ae8bf92865", "force": true, "format": "json"}]: dispatch
Feb 23 05:04:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:13a5ec60-9f6a-43c3-975b-c3ae8bf92865, vol_name:cephfs) < ""
Feb 23 05:04:32 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/13a5ec60-9f6a-43c3-975b-c3ae8bf92865'' moved to trashcan
Feb 23 05:04:32 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 23 05:04:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:13a5ec60-9f6a-43c3-975b-c3ae8bf92865, vol_name:cephfs) < ""
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 14002 keys, 19209308 bytes, temperature: kUnknown
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841072690619, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 19209308, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19127014, "index_size": 46225, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35013, "raw_key_size": 375300, "raw_average_key_size": 26, "raw_value_size": 18886544, "raw_average_value_size": 1348, "num_data_blocks": 1745, "num_entries": 14002, "num_filter_entries": 14002, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771841072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:32.690834) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 19209308 bytes
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:32.692501) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.4 rd, 209.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 18.0 +0.0 blob) out(18.3 +0.0 blob), read-write-amplify(52.9) write-amplify(26.2) OK, records in: 14550, records dropped: 548 output_compression: NoCompression
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:32.692522) EVENT_LOG_v1 {"time_micros": 1771841072692511, "job": 30, "event": "compaction_finished", "compaction_time_micros": 91901, "compaction_time_cpu_micros": 54333, "output_level": 6, "num_output_files": 1, "total_output_size": 19209308, "num_input_records": 14550, "num_output_records": 14002, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841072692723, "job": 30, "event": "table_file_deletion", "file_number": 53}
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841072694488, "job": 30, "event": "table_file_deletion", "file_number": 51}
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:32.598628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:32.694556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:32.694562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:32.694563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:32.694565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:04:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:04:32.694566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:04:32 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6efe2d5a-4b19-4449-89fe-477e42835180", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:04:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6efe2d5a-4b19-4449-89fe-477e42835180, vol_name:cephfs) < ""
Feb 23 05:04:32 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6efe2d5a-4b19-4449-89fe-477e42835180/.meta.tmp'
Feb 23 05:04:32 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6efe2d5a-4b19-4449-89fe-477e42835180/.meta.tmp' to config b'/volumes/_nogroup/6efe2d5a-4b19-4449-89fe-477e42835180/.meta'
Feb 23 05:04:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6efe2d5a-4b19-4449-89fe-477e42835180, vol_name:cephfs) < ""
Feb 23 05:04:32 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6efe2d5a-4b19-4449-89fe-477e42835180", "format": "json"}]: dispatch
Feb 23 05:04:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6efe2d5a-4b19-4449-89fe-477e42835180, vol_name:cephfs) < ""
Feb 23 05:04:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6efe2d5a-4b19-4449-89fe-477e42835180, vol_name:cephfs) < ""
Feb 23 05:04:32 localhost nova_compute[280321]: 2026-02-23 10:04:32.917 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:04:32 localhost nova_compute[280321]: 2026-02-23 10:04:32.917 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:04:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v564: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 134 KiB/s wr, 11 op/s
Feb 23 05:04:34 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "rw", "format": "json"}]: dispatch
Feb 23 05:04:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < ""
Feb 23 05:04:34 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 23 05:04:34 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 23 05:04:34 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice_bob with tenant b8a78bca43aa415e9b740fe00d08afee
Feb 23 05:04:34 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0)
Feb 23 05:04:34 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch
Feb 23 05:04:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < ""
Feb 23 05:04:34 localhost nova_compute[280321]: 2026-02-23 10:04:34.896 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:04:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections..
Feb 23 05:04:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: []
Feb 23 05:04:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections..
Feb 23 05:04:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: []
Feb 23 05:04:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections..
Feb 23 05:04:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )]
Feb 23 05:04:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 23 05:04:35 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 23 05:04:35 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch
Feb 23 05:04:35 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch
Feb 23 05:04:35 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished
Feb 23 05:04:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v565: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 134 KiB/s wr, 10 op/s
Feb 23 05:04:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:04:35 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events
Feb 23 05:04:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 23 05:04:35 localhost nova_compute[280321]: 2026-02-23 10:04:35.893 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:04:36 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6efe2d5a-4b19-4449-89fe-477e42835180", "format": "json"}]: dispatch
Feb 23 05:04:36 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6efe2d5a-4b19-4449-89fe-477e42835180, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:04:36 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6efe2d5a-4b19-4449-89fe-477e42835180, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:04:36 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:36.085+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6efe2d5a-4b19-4449-89fe-477e42835180' of type subvolume
Feb 23 05:04:36 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6efe2d5a-4b19-4449-89fe-477e42835180' of type subvolume
Feb 23 05:04:36 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6efe2d5a-4b19-4449-89fe-477e42835180", "force": true, "format": "json"}]: dispatch
Feb 23 05:04:36 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6efe2d5a-4b19-4449-89fe-477e42835180, vol_name:cephfs) < ""
Feb 23 05:04:36 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6efe2d5a-4b19-4449-89fe-477e42835180'' moved to trashcan
Feb 23 05:04:36 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 23 05:04:36 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6efe2d5a-4b19-4449-89fe-477e42835180, vol_name:cephfs) < ""
Feb 23 05:04:36 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo'
Feb 23 05:04:37 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "812d8099-4259-45c6-8802-8b5ec410d596", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:04:37 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < ""
Feb 23 05:04:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v566: 177 pgs: 177 active+clean; 209 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 192 KiB/s wr, 15 op/s
Feb 23 05:04:37 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/.meta.tmp'
Feb 23 05:04:37 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/.meta.tmp' to config b'/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/.meta'
Feb 23 05:04:37 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < ""
Feb 23 05:04:37 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "812d8099-4259-45c6-8802-8b5ec410d596", "format": "json"}]: dispatch
Feb 23 05:04:37 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < ""
Feb 23 05:04:37 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < ""
Feb 23 05:04:38 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 23 05:04:38 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < ""
Feb 23 05:04:38 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 23 05:04:38 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 23 05:04:38 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb 23 05:04:38 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 23 05:04:38 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < ""
Feb 23 05:04:38 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 23 05:04:38 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < ""
Feb 23 05:04:38 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae
Feb 23 05:04:38 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 23 05:04:38 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < ""
Feb 23 05:04:38 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 23 05:04:38 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 23 05:04:38 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 23 05:04:38 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 23 05:04:39 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c2ab3a9d-9959-4847-94e0-7e993629469a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:04:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c2ab3a9d-9959-4847-94e0-7e993629469a, vol_name:cephfs) < ""
Feb 23 05:04:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v567: 177 pgs: 177 active+clean; 209 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 109 KiB/s wr, 9 op/s
Feb 23 05:04:39 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c2ab3a9d-9959-4847-94e0-7e993629469a/.meta.tmp'
Feb 23 05:04:39 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c2ab3a9d-9959-4847-94e0-7e993629469a/.meta.tmp' to config b'/volumes/_nogroup/c2ab3a9d-9959-4847-94e0-7e993629469a/.meta'
Feb 23 05:04:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c2ab3a9d-9959-4847-94e0-7e993629469a, vol_name:cephfs) < ""
Feb 23 05:04:39 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c2ab3a9d-9959-4847-94e0-7e993629469a", "format": "json"}]: dispatch
Feb 23 05:04:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c2ab3a9d-9959-4847-94e0-7e993629469a, vol_name:cephfs) < ""
Feb 23 05:04:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c2ab3a9d-9959-4847-94e0-7e993629469a, vol_name:cephfs) < ""
Feb 23 05:04:39 localhost nova_compute[280321]: 2026-02-23 10:04:39.900 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:04:39 localhost nova_compute[280321]: 2026-02-23 10:04:39.902 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:04:39 localhost nova_compute[280321]: 2026-02-23 10:04:39.902 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 23 05:04:39 localhost nova_compute[280321]: 2026-02-23 10:04:39.903 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:04:39 localhost nova_compute[280321]: 2026-02-23 10:04:39.948 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:04:39 localhost nova_compute[280321]: 2026-02-23 10:04:39.949 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:04:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:04:40 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f47619b3-d060-43cb-beb4-45b54645fdc0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:04:40 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f47619b3-d060-43cb-beb4-45b54645fdc0, vol_name:cephfs) < ""
Feb 23 05:04:40 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f47619b3-d060-43cb-beb4-45b54645fdc0/.meta.tmp'
Feb 23 05:04:40 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f47619b3-d060-43cb-beb4-45b54645fdc0/.meta.tmp' to config b'/volumes/_nogroup/f47619b3-d060-43cb-beb4-45b54645fdc0/.meta'
Feb 23 05:04:40 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f47619b3-d060-43cb-beb4-45b54645fdc0, vol_name:cephfs) < ""
Feb 23 05:04:40 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f47619b3-d060-43cb-beb4-45b54645fdc0", "format": "json"}]: dispatch
Feb 23 05:04:40 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f47619b3-d060-43cb-beb4-45b54645fdc0, vol_name:cephfs) < ""
Feb 23 05:04:40 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f47619b3-d060-43cb-beb4-45b54645fdc0, vol_name:cephfs) < ""
Feb 23 05:04:41 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "r", "format": "json"}]: dispatch
Feb 23 05:04:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < ""
Feb 23 05:04:41 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 23 05:04:41 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 23 05:04:41 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice_bob with tenant b8a78bca43aa415e9b740fe00d08afee
Feb 23 05:04:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v568: 177 pgs: 177 active+clean; 209 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 109 KiB/s wr, 10 op/s
Feb 23 05:04:41 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0)
Feb 23 05:04:41 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch
Feb 23 05:04:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < ""
Feb 23 05:04:41 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c5862b43-dbf5-4d0f-83df-5eda47d44ff1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:04:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c5862b43-dbf5-4d0f-83df-5eda47d44ff1, vol_name:cephfs) < ""
Feb 23 05:04:41 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c5862b43-dbf5-4d0f-83df-5eda47d44ff1/.meta.tmp'
Feb 23 05:04:41 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c5862b43-dbf5-4d0f-83df-5eda47d44ff1/.meta.tmp' to config b'/volumes/_nogroup/c5862b43-dbf5-4d0f-83df-5eda47d44ff1/.meta'
Feb 23 05:04:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c5862b43-dbf5-4d0f-83df-5eda47d44ff1, vol_name:cephfs) < ""
Feb 23 05:04:41 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c5862b43-dbf5-4d0f-83df-5eda47d44ff1", "format": "json"}]: dispatch
Feb 23 05:04:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5862b43-dbf5-4d0f-83df-5eda47d44ff1, vol_name:cephfs) < ""
Feb 23 05:04:41 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5862b43-dbf5-4d0f-83df-5eda47d44ff1, vol_name:cephfs) < ""
Feb 23 05:04:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.
Feb 23 05:04:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.
Feb 23 05:04:42 localhost podman[325752]: 2026-02-23 10:04:42.009966141 +0000 UTC m=+0.082158514 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 05:04:42 localhost podman[325752]: 2026-02-23 10:04:42.022918996 +0000 UTC m=+0.095111379 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': 
['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 05:04:42 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 05:04:42 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:04:42 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:42 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:42 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:04:42 localhost podman[325753]: 2026-02-23 10:04:42.118357155 +0000 UTC m=+0.187682941 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, release=1770267347, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, 
com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base 
Image 9 Minimal, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 23 05:04:42 localhost podman[325753]: 2026-02-23 10:04:42.133935622 +0000 UTC m=+0.203261408 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, 
io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, io.buildah.version=1.33.7, config_id=openstack_network_exporter, version=9.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.) Feb 23 05:04:42 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 05:04:42 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c2ab3a9d-9959-4847-94e0-7e993629469a", "format": "json"}]: dispatch Feb 23 05:04:42 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c2ab3a9d-9959-4847-94e0-7e993629469a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:42 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c2ab3a9d-9959-4847-94e0-7e993629469a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:42 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:42.667+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c2ab3a9d-9959-4847-94e0-7e993629469a' of type subvolume Feb 23 05:04:42 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c2ab3a9d-9959-4847-94e0-7e993629469a' of type subvolume Feb 23 05:04:42 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c2ab3a9d-9959-4847-94e0-7e993629469a", "force": true, "format": "json"}]: dispatch Feb 23 05:04:42 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c2ab3a9d-9959-4847-94e0-7e993629469a, vol_name:cephfs) < "" Feb 23 05:04:42 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c2ab3a9d-9959-4847-94e0-7e993629469a'' moved to trashcan Feb 23 05:04:42 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for 
volume 'cephfs' Feb 23 05:04:42 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c2ab3a9d-9959-4847-94e0-7e993629469a, vol_name:cephfs) < "" Feb 23 05:04:42 localhost podman[241086]: time="2026-02-23T10:04:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 05:04:42 localhost podman[241086]: @ - - [23/Feb/2026:10:04:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 05:04:42 localhost podman[241086]: @ - - [23/Feb/2026:10:04:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17820 "" "Go-http-client/1.1" Feb 23 05:04:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v569: 177 pgs: 177 active+clean; 210 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 170 KiB/s wr, 13 op/s Feb 23 05:04:44 localhost nova_compute[280321]: 2026-02-23 10:04:44.158 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:44 localhost ovn_metadata_agent[161837]: 2026-02-23 10:04:44.160 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:04:44 localhost ovn_metadata_agent[161837]: 2026-02-23 
10:04:44.161 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 05:04:44 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f47619b3-d060-43cb-beb4-45b54645fdc0", "auth_id": "tempest-cephx-id-550070678", "tenant_id": "15d1711403cd469e88c36db6fc4b0add", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:04:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume authorize, sub_name:f47619b3-d060-43cb-beb4-45b54645fdc0, tenant_id:15d1711403cd469e88c36db6fc4b0add, vol_name:cephfs) < "" Feb 23 05:04:44 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} v 0) Feb 23 05:04:44 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:04:44 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID tempest-cephx-id-550070678 with tenant 15d1711403cd469e88c36db6fc4b0add Feb 23 05:04:44 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/f47619b3-d060-43cb-beb4-45b54645fdc0/84bd9b7f-b699-46ec-b8f3-fe3b99d4f740", "osd", "allow rw pool=manila_data namespace=fsvolumens_f47619b3-d060-43cb-beb4-45b54645fdc0", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:04:44 
localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/f47619b3-d060-43cb-beb4-45b54645fdc0/84bd9b7f-b699-46ec-b8f3-fe3b99d4f740", "osd", "allow rw pool=manila_data namespace=fsvolumens_f47619b3-d060-43cb-beb4-45b54645fdc0", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume authorize, sub_name:f47619b3-d060-43cb-beb4-45b54645fdc0, tenant_id:15d1711403cd469e88c36db6fc4b0add, vol_name:cephfs) < "" Feb 23 05:04:44 localhost nova_compute[280321]: 2026-02-23 10:04:44.990 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:44 localhost nova_compute[280321]: 2026-02-23 10:04:44.993 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v570: 177 pgs: 177 active+clean; 210 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 120 KiB/s wr, 10 op/s Feb 23 05:04:45 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 23 05:04:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:45 localhost 
ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 23 05:04:45 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:04:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 23 05:04:45 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:04:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:45 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 23 05:04:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:45 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:04:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:04:45 localhost 
ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:04:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:45 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5862b43-dbf5-4d0f-83df-5eda47d44ff1", "format": "json"}]: dispatch Feb 23 05:04:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c5862b43-dbf5-4d0f-83df-5eda47d44ff1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c5862b43-dbf5-4d0f-83df-5eda47d44ff1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:45 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:45.627+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c5862b43-dbf5-4d0f-83df-5eda47d44ff1' of type subvolume Feb 23 05:04:45 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c5862b43-dbf5-4d0f-83df-5eda47d44ff1' of type subvolume Feb 23 05:04:45 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c5862b43-dbf5-4d0f-83df-5eda47d44ff1", "force": true, "format": "json"}]: dispatch Feb 23 05:04:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:c5862b43-dbf5-4d0f-83df-5eda47d44ff1, vol_name:cephfs) < "" Feb 23 05:04:45 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c5862b43-dbf5-4d0f-83df-5eda47d44ff1'' moved to trashcan Feb 23 05:04:45 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:04:45 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5862b43-dbf5-4d0f-83df-5eda47d44ff1, vol_name:cephfs) < "" Feb 23 05:04:45 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:04:45 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/f47619b3-d060-43cb-beb4-45b54645fdc0/84bd9b7f-b699-46ec-b8f3-fe3b99d4f740", "osd", "allow rw pool=manila_data namespace=fsvolumens_f47619b3-d060-43cb-beb4-45b54645fdc0", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:45 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/f47619b3-d060-43cb-beb4-45b54645fdc0/84bd9b7f-b699-46ec-b8f3-fe3b99d4f740", "osd", "allow rw pool=manila_data namespace=fsvolumens_f47619b3-d060-43cb-beb4-45b54645fdc0", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:45 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/f47619b3-d060-43cb-beb4-45b54645fdc0/84bd9b7f-b699-46ec-b8f3-fe3b99d4f740", "osd", "allow rw pool=manila_data namespace=fsvolumens_f47619b3-d060-43cb-beb4-45b54645fdc0", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:04:45 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:04:45 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:04:45 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:04:45 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 23 05:04:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 05:04:46 localhost podman[325794]: 2026-02-23 10:04:46.008482752 +0000 UTC m=+0.083478464 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Feb 23 05:04:46 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5f193bc0-59f1-4f26-8a9d-5945320ef049", "snap_name": "1ff3f892-8978-436c-bef2-ded3032f9484_cf854440-8378-445e-9ea8-f90ba2766e24", "force": true, "format": "json"}]: dispatch Feb 23 05:04:46 localhost ceph-mgr[285904]: [volumes INFO 
volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1ff3f892-8978-436c-bef2-ded3032f9484_cf854440-8378-445e-9ea8-f90ba2766e24, sub_name:5f193bc0-59f1-4f26-8a9d-5945320ef049, vol_name:cephfs) < "" Feb 23 05:04:46 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5f193bc0-59f1-4f26-8a9d-5945320ef049/.meta.tmp' Feb 23 05:04:46 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5f193bc0-59f1-4f26-8a9d-5945320ef049/.meta.tmp' to config b'/volumes/_nogroup/5f193bc0-59f1-4f26-8a9d-5945320ef049/.meta' Feb 23 05:04:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1ff3f892-8978-436c-bef2-ded3032f9484_cf854440-8378-445e-9ea8-f90ba2766e24, sub_name:5f193bc0-59f1-4f26-8a9d-5945320ef049, vol_name:cephfs) < "" Feb 23 05:04:46 localhost podman[325794]: 2026-02-23 10:04:46.094908825 +0000 UTC m=+0.169904527 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Feb 23 05:04:46 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5f193bc0-59f1-4f26-8a9d-5945320ef049", "snap_name": "1ff3f892-8978-436c-bef2-ded3032f9484", "force": true, "format": "json"}]: dispatch Feb 23 05:04:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1ff3f892-8978-436c-bef2-ded3032f9484, sub_name:5f193bc0-59f1-4f26-8a9d-5945320ef049, vol_name:cephfs) < "" Feb 23 05:04:46 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 05:04:46 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5f193bc0-59f1-4f26-8a9d-5945320ef049/.meta.tmp' Feb 23 05:04:46 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5f193bc0-59f1-4f26-8a9d-5945320ef049/.meta.tmp' to config b'/volumes/_nogroup/5f193bc0-59f1-4f26-8a9d-5945320ef049/.meta' Feb 23 05:04:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1ff3f892-8978-436c-bef2-ded3032f9484, sub_name:5f193bc0-59f1-4f26-8a9d-5945320ef049, vol_name:cephfs) < "" Feb 23 05:04:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 183 KiB/s wr, 15 op/s Feb 23 05:04:47 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f47619b3-d060-43cb-beb4-45b54645fdc0", "auth_id": "tempest-cephx-id-550070678", "format": "json"}]: dispatch Feb 23 05:04:47 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume deauthorize, sub_name:f47619b3-d060-43cb-beb4-45b54645fdc0, vol_name:cephfs) < "" Feb 23 05:04:47 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} v 0) Feb 23 05:04:47 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:04:47 localhost 
ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} v 0) Feb 23 05:04:47 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:04:47 localhost nova_compute[280321]: 2026-02-23 10:04:47.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:04:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:04:48.320 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:04:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:04:48.320 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:04:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:04:48.320 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:04:48 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:04:48 localhost ceph-mon[296755]: 
from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:04:48 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:04:48 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"}]': finished Feb 23 05:04:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume deauthorize, sub_name:f47619b3-d060-43cb-beb4-45b54645fdc0, vol_name:cephfs) < "" Feb 23 05:04:48 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f47619b3-d060-43cb-beb4-45b54645fdc0", "auth_id": "tempest-cephx-id-550070678", "format": "json"}]: dispatch Feb 23 05:04:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume evict, sub_name:f47619b3-d060-43cb-beb4-45b54645fdc0, vol_name:cephfs) < "" Feb 23 05:04:48 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-550070678, client_metadata.root=/volumes/_nogroup/f47619b3-d060-43cb-beb4-45b54645fdc0/84bd9b7f-b699-46ec-b8f3-fe3b99d4f740 Feb 23 05:04:48 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:04:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume evict, sub_name:f47619b3-d060-43cb-beb4-45b54645fdc0, vol_name:cephfs) < "" Feb 23 
05:04:48 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:04:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:04:48 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 23 05:04:48 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:04:48 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice bob with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:04:48 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:04:48 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:04:48 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f47619b3-d060-43cb-beb4-45b54645fdc0", "format": "json"}]: dispatch Feb 23 05:04:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f47619b3-d060-43cb-beb4-45b54645fdc0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f47619b3-d060-43cb-beb4-45b54645fdc0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:48 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:48.910+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f47619b3-d060-43cb-beb4-45b54645fdc0' of type subvolume Feb 23 05:04:48 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f47619b3-d060-43cb-beb4-45b54645fdc0' of type subvolume Feb 23 05:04:48 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f47619b3-d060-43cb-beb4-45b54645fdc0", "force": true, 
"format": "json"}]: dispatch Feb 23 05:04:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f47619b3-d060-43cb-beb4-45b54645fdc0, vol_name:cephfs) < "" Feb 23 05:04:48 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f47619b3-d060-43cb-beb4-45b54645fdc0'' moved to trashcan Feb 23 05:04:48 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:04:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f47619b3-d060-43cb-beb4-45b54645fdc0, vol_name:cephfs) < "" Feb 23 05:04:48 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4b358cf8-a902-49d4-80e3-13df36b9098c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:04:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4b358cf8-a902-49d4-80e3-13df36b9098c, vol_name:cephfs) < "" Feb 23 05:04:49 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4b358cf8-a902-49d4-80e3-13df36b9098c/.meta.tmp' Feb 23 05:04:49 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4b358cf8-a902-49d4-80e3-13df36b9098c/.meta.tmp' to config b'/volumes/_nogroup/4b358cf8-a902-49d4-80e3-13df36b9098c/.meta' Feb 23 05:04:49 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, 
prefix:fs subvolume create, size:1073741824, sub_name:4b358cf8-a902-49d4-80e3-13df36b9098c, vol_name:cephfs) < "" Feb 23 05:04:49 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4b358cf8-a902-49d4-80e3-13df36b9098c", "format": "json"}]: dispatch Feb 23 05:04:49 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4b358cf8-a902-49d4-80e3-13df36b9098c, vol_name:cephfs) < "" Feb 23 05:04:49 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4b358cf8-a902-49d4-80e3-13df36b9098c, vol_name:cephfs) < "" Feb 23 05:04:49 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5f193bc0-59f1-4f26-8a9d-5945320ef049", "format": "json"}]: dispatch Feb 23 05:04:49 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5f193bc0-59f1-4f26-8a9d-5945320ef049, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:49 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5f193bc0-59f1-4f26-8a9d-5945320ef049, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:49 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:49.290+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5f193bc0-59f1-4f26-8a9d-5945320ef049' of type subvolume Feb 23 05:04:49 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'5f193bc0-59f1-4f26-8a9d-5945320ef049' of type subvolume Feb 23 05:04:49 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5f193bc0-59f1-4f26-8a9d-5945320ef049", "force": true, "format": "json"}]: dispatch Feb 23 05:04:49 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5f193bc0-59f1-4f26-8a9d-5945320ef049, vol_name:cephfs) < "" Feb 23 05:04:49 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5f193bc0-59f1-4f26-8a9d-5945320ef049'' moved to trashcan Feb 23 05:04:49 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:04:49 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5f193bc0-59f1-4f26-8a9d-5945320ef049, vol_name:cephfs) < "" Feb 23 05:04:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v572: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 125 KiB/s wr, 10 op/s Feb 23 05:04:49 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:04:49 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:49 localhost ceph-mon[296755]: 
from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:49 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:04:50 localhost nova_compute[280321]: 2026-02-23 10:04:50.033 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:04:50 localhost nova_compute[280321]: 2026-02-23 10:04:50.035 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:50 localhost nova_compute[280321]: 2026-02-23 10:04:50.035 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5041 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:04:50 localhost nova_compute[280321]: 2026-02-23 10:04:50.035 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:04:50 localhost nova_compute[280321]: 2026-02-23 10:04:50.036 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:04:50 localhost nova_compute[280321]: 
2026-02-23 10:04:50.037 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:04:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "11555dd1-63b1-44b4-8930-21367a0b0414", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:04:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:11555dd1-63b1-44b4-8930-21367a0b0414, vol_name:cephfs) < "" Feb 23 05:04:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/11555dd1-63b1-44b4-8930-21367a0b0414/.meta.tmp' Feb 23 05:04:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e244 e244: 6 total, 6 up, 6 in Feb 23 05:04:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/11555dd1-63b1-44b4-8930-21367a0b0414/.meta.tmp' to config b'/volumes/_nogroup/11555dd1-63b1-44b4-8930-21367a0b0414/.meta' Feb 23 05:04:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:11555dd1-63b1-44b4-8930-21367a0b0414, vol_name:cephfs) < "" Feb 23 05:04:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": 
"11555dd1-63b1-44b4-8930-21367a0b0414", "format": "json"}]: dispatch Feb 23 05:04:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:11555dd1-63b1-44b4-8930-21367a0b0414, vol_name:cephfs) < "" Feb 23 05:04:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:11555dd1-63b1-44b4-8930-21367a0b0414, vol_name:cephfs) < "" Feb 23 05:04:51 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 23 05:04:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:51 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 23 05:04:51 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:04:51 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 23 05:04:51 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:04:51 localhost ovn_metadata_agent[161837]: 2026-02-23 10:04:51.163 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 
command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 05:04:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:51 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 23 05:04:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:51 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:04:51 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:04:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v574: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 150 KiB/s wr, 13 op/s Feb 23 05:04:51 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' 
cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:04:51 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:04:51 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:04:51 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 23 05:04:52 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4b358cf8-a902-49d4-80e3-13df36b9098c", "format": "json"}]: dispatch Feb 23 05:04:52 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4b358cf8-a902-49d4-80e3-13df36b9098c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:52 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4b358cf8-a902-49d4-80e3-13df36b9098c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:52 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4b358cf8-a902-49d4-80e3-13df36b9098c' of type subvolume Feb 23 05:04:52 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:52.345+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4b358cf8-a902-49d4-80e3-13df36b9098c' of type subvolume Feb 23 05:04:52 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": 
"4b358cf8-a902-49d4-80e3-13df36b9098c", "force": true, "format": "json"}]: dispatch Feb 23 05:04:52 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4b358cf8-a902-49d4-80e3-13df36b9098c, vol_name:cephfs) < "" Feb 23 05:04:52 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4b358cf8-a902-49d4-80e3-13df36b9098c'' moved to trashcan Feb 23 05:04:52 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:04:52 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4b358cf8-a902-49d4-80e3-13df36b9098c, vol_name:cephfs) < "" Feb 23 05:04:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v575: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 170 KiB/s wr, 14 op/s Feb 23 05:04:53 localhost sshd[325821]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:04:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 05:04:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 05:04:54 localhost podman[325823]: 2026-02-23 10:04:54.015390446 +0000 UTC m=+0.084764374 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent) Feb 23 05:04:54 localhost 
podman[325823]: 2026-02-23 10:04:54.047945822 +0000 UTC m=+0.117319700 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2) Feb 23 05:04:54 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 05:04:54 localhost podman[325824]: 2026-02-23 10:04:54.064857219 +0000 UTC m=+0.131580445 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 23 05:04:54 localhost podman[325824]: 2026-02-23 10:04:54.078873357 +0000 UTC m=+0.145596593 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, config_id=ceilometer_agent_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes 
Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:04:54 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "11555dd1-63b1-44b4-8930-21367a0b0414", "auth_id": "tempest-cephx-id-550070678", "tenant_id": "15d1711403cd469e88c36db6fc4b0add", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:04:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume authorize, sub_name:11555dd1-63b1-44b4-8930-21367a0b0414, tenant_id:15d1711403cd469e88c36db6fc4b0add, vol_name:cephfs) < "" Feb 23 05:04:54 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 05:04:54 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} v 0) Feb 23 05:04:54 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:04:54 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID tempest-cephx-id-550070678 with tenant 15d1711403cd469e88c36db6fc4b0add Feb 23 05:04:54 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/11555dd1-63b1-44b4-8930-21367a0b0414/72e5f463-1431-499e-98f9-e7d184066592", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_11555dd1-63b1-44b4-8930-21367a0b0414", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:04:54 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/11555dd1-63b1-44b4-8930-21367a0b0414/72e5f463-1431-499e-98f9-e7d184066592", "osd", "allow rw pool=manila_data namespace=fsvolumens_11555dd1-63b1-44b4-8930-21367a0b0414", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:54 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:04:54 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/11555dd1-63b1-44b4-8930-21367a0b0414/72e5f463-1431-499e-98f9-e7d184066592", "osd", "allow rw pool=manila_data namespace=fsvolumens_11555dd1-63b1-44b4-8930-21367a0b0414", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:54 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/11555dd1-63b1-44b4-8930-21367a0b0414/72e5f463-1431-499e-98f9-e7d184066592", "osd", "allow rw pool=manila_data namespace=fsvolumens_11555dd1-63b1-44b4-8930-21367a0b0414", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:54 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/11555dd1-63b1-44b4-8930-21367a0b0414/72e5f463-1431-499e-98f9-e7d184066592", "osd", "allow rw pool=manila_data namespace=fsvolumens_11555dd1-63b1-44b4-8930-21367a0b0414", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:04:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume authorize, sub_name:11555dd1-63b1-44b4-8930-21367a0b0414, tenant_id:15d1711403cd469e88c36db6fc4b0add, vol_name:cephfs) < "" Feb 23 05:04:54 localhost ovn_controller[155966]: 2026-02-23T10:04:54Z|00405|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory Feb 23 05:04:54 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "r", "format": "json"}]: dispatch Feb 23 05:04:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:04:54 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 23 05:04:54 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:04:54 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice bob with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:04:54 
localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:04:54 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:04:55 localhost nova_compute[280321]: 2026-02-23 10:04:55.038 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:04:55 localhost nova_compute[280321]: 2026-02-23 10:04:55.040 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:04:55 localhost nova_compute[280321]: 2026-02-23 10:04:55.040 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:04:55 localhost nova_compute[280321]: 2026-02-23 10:04:55.040 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: 
entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:04:55 localhost nova_compute[280321]: 2026-02-23 10:04:55.067 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:55 localhost nova_compute[280321]: 2026-02-23 10:04:55.068 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:04:55 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:04:55 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:55 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:04:55 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", 
"mon", "allow r"], "format": "json"}]': finished Feb 23 05:04:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v576: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 170 KiB/s wr, 14 op/s Feb 23 05:04:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 
23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:04:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:04:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v577: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 164 KiB/s wr, 14 op/s Feb 23 05:04:57 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "11555dd1-63b1-44b4-8930-21367a0b0414", "auth_id": "tempest-cephx-id-550070678", "format": "json"}]: dispatch Feb 23 05:04:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume deauthorize, sub_name:11555dd1-63b1-44b4-8930-21367a0b0414, vol_name:cephfs) < "" Feb 23 05:04:57 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} v 0) Feb 23 05:04:57 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:04:57 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} v 0) Feb 23 05:04:57 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:04:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume deauthorize, sub_name:11555dd1-63b1-44b4-8930-21367a0b0414, vol_name:cephfs) < "" Feb 23 05:04:57 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "11555dd1-63b1-44b4-8930-21367a0b0414", "auth_id": "tempest-cephx-id-550070678", "format": "json"}]: dispatch Feb 23 05:04:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume evict, sub_name:11555dd1-63b1-44b4-8930-21367a0b0414, vol_name:cephfs) < "" Feb 23 05:04:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with 
auth_name=tempest-cephx-id-550070678, client_metadata.root=/volumes/_nogroup/11555dd1-63b1-44b4-8930-21367a0b0414/72e5f463-1431-499e-98f9-e7d184066592 Feb 23 05:04:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:04:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume evict, sub_name:11555dd1-63b1-44b4-8930-21367a0b0414, vol_name:cephfs) < "" Feb 23 05:04:57 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "11555dd1-63b1-44b4-8930-21367a0b0414", "format": "json"}]: dispatch Feb 23 05:04:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:11555dd1-63b1-44b4-8930-21367a0b0414, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:11555dd1-63b1-44b4-8930-21367a0b0414, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:04:57 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:04:57.730+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '11555dd1-63b1-44b4-8930-21367a0b0414' of type subvolume Feb 23 05:04:57 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '11555dd1-63b1-44b4-8930-21367a0b0414' of type subvolume Feb 23 05:04:57 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "11555dd1-63b1-44b4-8930-21367a0b0414", "force": true, "format": "json"}]: 
dispatch Feb 23 05:04:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:11555dd1-63b1-44b4-8930-21367a0b0414, vol_name:cephfs) < "" Feb 23 05:04:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/11555dd1-63b1-44b4-8930-21367a0b0414'' moved to trashcan Feb 23 05:04:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:04:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:11555dd1-63b1-44b4-8930-21367a0b0414, vol_name:cephfs) < "" Feb 23 05:04:57 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 23 05:04:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:57 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3. 
Feb 23 05:04:57 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 23 05:04:57 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:04:57 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 23 05:04:57 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:04:57 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e245 e245: 6 total, 6 up, 6 in Feb 23 05:04:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:57 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 23 05:04:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:04:58 localhost ceph-mgr[285904]: [volumes INFO 
volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:04:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:04:58 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:04:58.153 263679 INFO neutron.agent.linux.ip_lib [None req-522fd90e-8267-4455-8014-bee4849218fb - - - - - -] Device tap4578602e-78 cannot be used as it has no MAC address#033[00m Feb 23 05:04:58 localhost nova_compute[280321]: 2026-02-23 10:04:58.170 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:58 localhost kernel: device tap4578602e-78 entered promiscuous mode Feb 23 05:04:58 localhost ovn_controller[155966]: 2026-02-23T10:04:58Z|00406|binding|INFO|Claiming lport 4578602e-78ba-4d20-888b-235c333a44a2 for this chassis. Feb 23 05:04:58 localhost nova_compute[280321]: 2026-02-23 10:04:58.177 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:58 localhost ovn_controller[155966]: 2026-02-23T10:04:58Z|00407|binding|INFO|4578602e-78ba-4d20-888b-235c333a44a2: Claiming unknown Feb 23 05:04:58 localhost NetworkManager[5987]: [1771841098.1777] manager: (tap4578602e-78): new Generic device (/org/freedesktop/NetworkManager/Devices/70) Feb 23 05:04:58 localhost systemd-udevd[325874]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 05:04:58 localhost ovn_metadata_agent[161837]: 2026-02-23 10:04:58.191 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1738c12aa55f4b22a7dbf47ada3be0e7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c40d5894-94a9-4539-89ed-c488c9339ed5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4578602e-78ba-4d20-888b-235c333a44a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:04:58 localhost ovn_metadata_agent[161837]: 2026-02-23 10:04:58.193 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 4578602e-78ba-4d20-888b-235c333a44a2 in datapath 4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb bound to our chassis#033[00m Feb 23 05:04:58 localhost ovn_metadata_agent[161837]: 2026-02-23 10:04:58.195 161842 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 23 05:04:58 localhost ovn_metadata_agent[161837]: 2026-02-23 10:04:58.195 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[567b83db-5de8-4bdc-8e37-21e5667509cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:04:58 localhost journal[229268]: ethtool ioctl error on tap4578602e-78: No such device Feb 23 05:04:58 localhost journal[229268]: ethtool ioctl error on tap4578602e-78: No such device Feb 23 05:04:58 localhost ovn_controller[155966]: 2026-02-23T10:04:58Z|00408|binding|INFO|Setting lport 4578602e-78ba-4d20-888b-235c333a44a2 ovn-installed in OVS Feb 23 05:04:58 localhost ovn_controller[155966]: 2026-02-23T10:04:58Z|00409|binding|INFO|Setting lport 4578602e-78ba-4d20-888b-235c333a44a2 up in Southbound Feb 23 05:04:58 localhost nova_compute[280321]: 2026-02-23 10:04:58.210 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:58 localhost journal[229268]: ethtool ioctl error on tap4578602e-78: No such device Feb 23 05:04:58 localhost journal[229268]: ethtool ioctl error on tap4578602e-78: No such device Feb 23 05:04:58 localhost journal[229268]: ethtool ioctl error on tap4578602e-78: No such device Feb 23 05:04:58 localhost journal[229268]: ethtool ioctl error on tap4578602e-78: No such device Feb 23 05:04:58 localhost journal[229268]: ethtool ioctl error on tap4578602e-78: No such device Feb 23 05:04:58 localhost journal[229268]: ethtool ioctl error on tap4578602e-78: No such device Feb 23 05:04:58 localhost nova_compute[280321]: 2026-02-23 10:04:58.242 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:58 localhost nova_compute[280321]: 2026-02-23 10:04:58.261 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:04:58 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:04:58 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:04:58 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:04:58 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"}]': finished Feb 23 05:04:58 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:04:58 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:04:58 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:04:58 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 23 05:04:59 localhost podman[325945]: Feb 23 05:04:59 localhost podman[325945]: 2026-02-23 10:04:59.112978822 +0000 UTC m=+0.093657676 container create 17bd5701632ebaf86782210bf854994a2807f7161476fced2b1ac8c78deb73ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb, org.label-schema.build-date=20260216, 
org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 23 05:04:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 05:04:59 localhost systemd[1]: Started libpod-conmon-17bd5701632ebaf86782210bf854994a2807f7161476fced2b1ac8c78deb73ff.scope. Feb 23 05:04:59 localhost podman[325945]: 2026-02-23 10:04:59.066620523 +0000 UTC m=+0.047299437 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:04:59 localhost systemd[1]: Started libcrun container. Feb 23 05:04:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aafd8a9baf6513d8a3b9ce70ad0236cf79c1c691eb20b03eca1f4988f02dfecd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:04:59 localhost podman[325945]: 2026-02-23 10:04:59.201144358 +0000 UTC m=+0.181823212 container init 17bd5701632ebaf86782210bf854994a2807f7161476fced2b1ac8c78deb73ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS) Feb 23 05:04:59 localhost podman[325945]: 2026-02-23 10:04:59.21069394 +0000 UTC m=+0.191372794 container start 17bd5701632ebaf86782210bf854994a2807f7161476fced2b1ac8c78deb73ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216) Feb 23 05:04:59 localhost dnsmasq[325972]: started, version 2.85 cachesize 150 Feb 23 05:04:59 localhost dnsmasq[325972]: DNS service limited to local subnets Feb 23 05:04:59 localhost dnsmasq[325972]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:04:59 localhost dnsmasq[325972]: warning: no upstream servers configured Feb 23 05:04:59 localhost dnsmasq-dhcp[325972]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 05:04:59 localhost dnsmasq[325972]: read /var/lib/neutron/dhcp/4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb/addn_hosts - 0 addresses Feb 23 05:04:59 localhost dnsmasq-dhcp[325972]: read /var/lib/neutron/dhcp/4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb/host Feb 23 05:04:59 localhost dnsmasq-dhcp[325972]: read /var/lib/neutron/dhcp/4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb/opts Feb 23 05:04:59 localhost podman[325959]: 2026-02-23 10:04:59.291961406 +0000 UTC m=+0.132278347 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 05:04:59 localhost podman[325959]: 2026-02-23 10:04:59.330632959 +0000 UTC m=+0.170949910 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 05:04:59 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:04:59.340 263679 INFO neutron.agent.dhcp.agent [None req-d9de9e1e-b8d0-4d2f-95fb-3807ca18701f - - - - - -] DHCP configuration for ports {'735eefbd-1fd4-42b3-9584-3d31e00bd3db'} is completed#033[00m Feb 23 05:04:59 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. 
Feb 23 05:04:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v579: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 965 B/s rd, 194 KiB/s wr, 17 op/s Feb 23 05:05:00 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:00.028 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:04:59Z, description=, device_id=003a0537-1e67-4904-bdc1-c9d4f9d3916e, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d29d223a-57bd-482c-9623-cf36cb422389, ip_allocation=immediate, mac_address=fa:16:3e:64:fd:97, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:04:56Z, description=, dns_domain=, id=4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-311219721-network, port_security_enabled=True, project_id=1738c12aa55f4b22a7dbf47ada3be0e7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30656, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3682, status=ACTIVE, subnets=['7d3f0293-821e-4732-8842-024b9a32689f'], tags=[], tenant_id=1738c12aa55f4b22a7dbf47ada3be0e7, updated_at=2026-02-23T10:04:57Z, vlan_transparent=None, network_id=4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb, port_security_enabled=False, project_id=1738c12aa55f4b22a7dbf47ada3be0e7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3699, status=DOWN, tags=[], tenant_id=1738c12aa55f4b22a7dbf47ada3be0e7, updated_at=2026-02-23T10:04:59Z on network 4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb#033[00m 
Feb 23 05:05:00 localhost nova_compute[280321]: 2026-02-23 10:05:00.070 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:00 localhost dnsmasq[325972]: read /var/lib/neutron/dhcp/4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb/addn_hosts - 1 addresses Feb 23 05:05:00 localhost dnsmasq-dhcp[325972]: read /var/lib/neutron/dhcp/4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb/host Feb 23 05:05:00 localhost dnsmasq-dhcp[325972]: read /var/lib/neutron/dhcp/4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb/opts Feb 23 05:05:00 localhost podman[326006]: 2026-02-23 10:05:00.228566921 +0000 UTC m=+0.057331924 container kill 17bd5701632ebaf86782210bf854994a2807f7161476fced2b1ac8c78deb73ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 05:05:00 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:05:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc, vol_name:cephfs) < "" Feb 23 05:05:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 
348127232 kv_alloc: 318767104 Feb 23 05:05:00 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:00.500 263679 INFO neutron.agent.dhcp.agent [None req-fa749911-d0c1-4560-9260-26a765274e3b - - - - - -] DHCP configuration for ports {'d29d223a-57bd-482c-9623-cf36cb422389'} is completed#033[00m Feb 23 05:05:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc/.meta.tmp' Feb 23 05:05:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc/.meta.tmp' to config b'/volumes/_nogroup/08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc/.meta' Feb 23 05:05:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc, vol_name:cephfs) < "" Feb 23 05:05:00 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc", "format": "json"}]: dispatch Feb 23 05:05:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc, vol_name:cephfs) < "" Feb 23 05:05:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc, vol_name:cephfs) < "" Feb 23 05:05:00 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:00.731 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, 
binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:04:59Z, description=, device_id=003a0537-1e67-4904-bdc1-c9d4f9d3916e, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d29d223a-57bd-482c-9623-cf36cb422389, ip_allocation=immediate, mac_address=fa:16:3e:64:fd:97, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:04:56Z, description=, dns_domain=, id=4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-311219721-network, port_security_enabled=True, project_id=1738c12aa55f4b22a7dbf47ada3be0e7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30656, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3682, status=ACTIVE, subnets=['7d3f0293-821e-4732-8842-024b9a32689f'], tags=[], tenant_id=1738c12aa55f4b22a7dbf47ada3be0e7, updated_at=2026-02-23T10:04:57Z, vlan_transparent=None, network_id=4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb, port_security_enabled=False, project_id=1738c12aa55f4b22a7dbf47ada3be0e7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3699, status=DOWN, tags=[], tenant_id=1738c12aa55f4b22a7dbf47ada3be0e7, updated_at=2026-02-23T10:04:59Z on network 4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb#033[00m Feb 23 05:05:00 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:05:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:05:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 23 05:05:00 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:05:00 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:05:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:05:00 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:00 localhost dnsmasq[325972]: read /var/lib/neutron/dhcp/4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb/addn_hosts - 1 addresses Feb 23 05:05:00 localhost dnsmasq-dhcp[325972]: read /var/lib/neutron/dhcp/4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb/host Feb 23 05:05:00 localhost podman[326043]: 2026-02-23 
10:05:00.961981422 +0000 UTC m=+0.062692778 container kill 17bd5701632ebaf86782210bf854994a2807f7161476fced2b1ac8c78deb73ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:05:00 localhost dnsmasq-dhcp[325972]: read /var/lib/neutron/dhcp/4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb/opts Feb 23 05:05:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:05:01 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:01.141 263679 INFO neutron.agent.dhcp.agent [None req-a56644d0-b3d7-465b-b7ac-70f0a81684ff - - - - - -] DHCP configuration for ports {'d29d223a-57bd-482c-9623-cf36cb422389'} is completed#033[00m Feb 23 05:05:01 localhost ovn_controller[155966]: 2026-02-23T10:05:01Z|00410|ovn_bfd|INFO|Enabled BFD on interface ovn-5b0126-0 Feb 23 05:05:01 localhost ovn_controller[155966]: 2026-02-23T10:05:01Z|00411|ovn_bfd|INFO|Enabled BFD on interface ovn-585d62-0 Feb 23 05:05:01 localhost ovn_controller[155966]: 2026-02-23T10:05:01Z|00412|ovn_bfd|INFO|Enabled BFD on interface ovn-b9c72d-0 Feb 23 05:05:01 localhost nova_compute[280321]: 2026-02-23 10:05:01.334 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:01 localhost nova_compute[280321]: 2026-02-23 10:05:01.337 280325 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:01 localhost nova_compute[280321]: 2026-02-23 10:05:01.339 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:01 localhost nova_compute[280321]: 2026-02-23 10:05:01.340 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v580: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 243 KiB/s wr, 18 op/s Feb 23 05:05:01 localhost nova_compute[280321]: 2026-02-23 10:05:01.397 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:01 localhost nova_compute[280321]: 2026-02-23 10:05:01.397 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:01 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:05:01 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:01 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", 
"allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:01 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:05:01 localhost openstack_network_exporter[243519]: ERROR 10:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 05:05:01 localhost openstack_network_exporter[243519]: Feb 23 05:05:01 localhost openstack_network_exporter[243519]: ERROR 10:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 05:05:01 localhost openstack_network_exporter[243519]: Feb 23 05:05:02 localhost nova_compute[280321]: 2026-02-23 10:05:02.209 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:02 localhost nova_compute[280321]: 2026-02-23 10:05:02.306 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:03 localhost nova_compute[280321]: 2026-02-23 10:05:03.155 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v581: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 150 KiB/s wr, 13 op/s Feb 23 05:05:03 localhost 
ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc", "auth_id": "tempest-cephx-id-550070678", "tenant_id": "15d1711403cd469e88c36db6fc4b0add", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:05:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume authorize, sub_name:08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc, tenant_id:15d1711403cd469e88c36db6fc4b0add, vol_name:cephfs) < "" Feb 23 05:05:03 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} v 0) Feb 23 05:05:03 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:03 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID tempest-cephx-id-550070678 with tenant 15d1711403cd469e88c36db6fc4b0add Feb 23 05:05:03 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc/3090d14f-d090-4924-bc2f-5b879acc23d8", "osd", "allow rw pool=manila_data namespace=fsvolumens_08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:05:03 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": 
["mds", "allow rw path=/volumes/_nogroup/08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc/3090d14f-d090-4924-bc2f-5b879acc23d8", "osd", "allow rw pool=manila_data namespace=fsvolumens_08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume authorize, sub_name:08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc, tenant_id:15d1711403cd469e88c36db6fc4b0add, vol_name:cephfs) < "" Feb 23 05:05:03 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:03 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc/3090d14f-d090-4924-bc2f-5b879acc23d8", "osd", "allow rw pool=manila_data namespace=fsvolumens_08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:03 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc/3090d14f-d090-4924-bc2f-5b879acc23d8", "osd", "allow rw pool=manila_data namespace=fsvolumens_08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:03 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc/3090d14f-d090-4924-bc2f-5b879acc23d8", "osd", "allow rw pool=manila_data namespace=fsvolumens_08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:05:04 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "format": "json"}]: dispatch Feb 23 05:05:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 23 05:05:04 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:05:04 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 23 05:05:04 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:05:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:04 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": 
"a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "format": "json"}]: dispatch Feb 23 05:05:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:04 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:05:04 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:05:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:04 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:04.910 263679 INFO neutron.agent.linux.ip_lib [None req-4026c2cb-5f96-4918-a99e-c990bc444b83 - - - - - -] Device tapd56ba65d-2b cannot be used as it has no MAC address#033[00m Feb 23 05:05:04 localhost nova_compute[280321]: 2026-02-23 10:05:04.931 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:04 localhost kernel: device tapd56ba65d-2b entered promiscuous mode Feb 23 05:05:04 localhost nova_compute[280321]: 2026-02-23 10:05:04.939 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:04 localhost ovn_controller[155966]: 2026-02-23T10:05:04Z|00413|binding|INFO|Claiming lport d56ba65d-2b1b-43f4-a1a8-5e0908deca96 for this chassis. 
Feb 23 05:05:04 localhost ovn_controller[155966]: 2026-02-23T10:05:04Z|00414|binding|INFO|d56ba65d-2b1b-43f4-a1a8-5e0908deca96: Claiming unknown Feb 23 05:05:04 localhost NetworkManager[5987]: [1771841104.9417] manager: (tapd56ba65d-2b): new Generic device (/org/freedesktop/NetworkManager/Devices/71) Feb 23 05:05:04 localhost systemd-udevd[326079]: Network interface NamePolicy= disabled on kernel command line. Feb 23 05:05:04 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:04.950 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '302db78508a144bba1f0c936c7bc3750', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aebff21e-9abf-4615-8059-969157d37341, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d56ba65d-2b1b-43f4-a1a8-5e0908deca96) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:05:04 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:04.951 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 
d56ba65d-2b1b-43f4-a1a8-5e0908deca96 in datapath e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d bound to our chassis#033[00m Feb 23 05:05:04 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:04.954 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Port 623eda6f-f7e2-42f1-a09a-204477c14a32 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 05:05:04 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:04.954 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 05:05:04 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:04.955 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[8864b9de-e144-4792-9796-4321f5139058]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:05:04 localhost ovn_controller[155966]: 2026-02-23T10:05:04Z|00415|binding|INFO|Setting lport d56ba65d-2b1b-43f4-a1a8-5e0908deca96 ovn-installed in OVS Feb 23 05:05:04 localhost ovn_controller[155966]: 2026-02-23T10:05:04Z|00416|binding|INFO|Setting lport d56ba65d-2b1b-43f4-a1a8-5e0908deca96 up in Southbound Feb 23 05:05:04 localhost nova_compute[280321]: 2026-02-23 10:05:04.986 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:04 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:05:04 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:05:04 
localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:05:04 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 23 05:05:05 localhost nova_compute[280321]: 2026-02-23 10:05:05.018 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:05 localhost nova_compute[280321]: 2026-02-23 10:05:05.039 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:05 localhost nova_compute[280321]: 2026-02-23 10:05:05.072 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_10:05:05 Feb 23 05:05:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 05:05:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap Feb 23 05:05:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['images', 'volumes', 'manila_metadata', 'vms', 'manila_data', 'backups', '.mgr'] Feb 23 05:05:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes Feb 23 05:05:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:05:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:05:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:05:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:05:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 05:05:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:05:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v582: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 150 KiB/s wr, 13 op/s Feb 23 05:05:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 05:05:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:05:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 23 05:05:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:05:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32) Feb 23 05:05:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:05:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014869268216080402 of space, bias 1.0, pg target 0.2968897220477387 quantized to 32 (current 32) Feb 23 05:05:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:05:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 23 05:05:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:05:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 23 05:05:05 localhost 
ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:05:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.4540294062907128e-06 of space, bias 1.0, pg target 0.0002893518518518519 quantized to 32 (current 32) Feb 23 05:05:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:05:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0014768394926018985 of space, bias 4.0, pg target 1.1755642361111112 quantized to 16 (current 16) Feb 23 05:05:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:05:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 05:05:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 05:05:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 05:05:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 05:05:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 05:05:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 05:05:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 05:05:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 05:05:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 05:05:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 05:05:06 localhost podman[326134]: Feb 23 05:05:06 localhost podman[326134]: 2026-02-23 10:05:06.269671704 +0000 UTC 
m=+0.085370823 container create 7a79a167621f8b6e41fd8a3ef153a3ece07e0cca2db57f49290a653900a14e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216) Feb 23 05:05:06 localhost systemd[1]: Started libpod-conmon-7a79a167621f8b6e41fd8a3ef153a3ece07e0cca2db57f49290a653900a14e13.scope. Feb 23 05:05:06 localhost systemd[1]: tmp-crun.dzXzcR.mount: Deactivated successfully. Feb 23 05:05:06 localhost systemd[1]: Started libcrun container. Feb 23 05:05:06 localhost podman[326134]: 2026-02-23 10:05:06.227113181 +0000 UTC m=+0.042812330 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:05:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47aa24f900d09e2b9c628e45492211678a9fbf6a00da8cd851c7651de15c321/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:05:06 localhost podman[326134]: 2026-02-23 10:05:06.33528124 +0000 UTC m=+0.150980359 container init 7a79a167621f8b6e41fd8a3ef153a3ece07e0cca2db57f49290a653900a14e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0) Feb 23 05:05:06 localhost podman[326134]: 
2026-02-23 10:05:06.344088489 +0000 UTC m=+0.159787608 container start 7a79a167621f8b6e41fd8a3ef153a3ece07e0cca2db57f49290a653900a14e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, tcib_managed=true) Feb 23 05:05:06 localhost dnsmasq[326152]: started, version 2.85 cachesize 150 Feb 23 05:05:06 localhost dnsmasq[326152]: DNS service limited to local subnets Feb 23 05:05:06 localhost dnsmasq[326152]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:05:06 localhost dnsmasq[326152]: warning: no upstream servers configured Feb 23 05:05:06 localhost dnsmasq-dhcp[326152]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 05:05:06 localhost dnsmasq[326152]: read /var/lib/neutron/dhcp/e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d/addn_hosts - 0 addresses Feb 23 05:05:06 localhost dnsmasq-dhcp[326152]: read /var/lib/neutron/dhcp/e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d/host Feb 23 05:05:06 localhost dnsmasq-dhcp[326152]: read /var/lib/neutron/dhcp/e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d/opts Feb 23 05:05:06 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:06.549 263679 INFO neutron.agent.dhcp.agent [None req-be385888-5710-408d-abcb-48660d54f112 - - - - - -] DHCP configuration for ports {'6e26c9f4-208e-44e7-bdb7-f844e0287f1c'} is completed#033[00m Feb 23 05:05:06 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:06.928 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], 
binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:05:06Z, description=, device_id=9a677207-db32-4b9e-abaa-4a936b2ee47c, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d6d03809-37cb-4be8-a4b1-56999c64e9b7, ip_allocation=immediate, mac_address=fa:16:3e:4d:d0:2f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:05:02Z, description=, dns_domain=, id=e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-1364738551-network, port_security_enabled=True, project_id=302db78508a144bba1f0c936c7bc3750, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=49390, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3707, status=ACTIVE, subnets=['6a0e74e0-7ba9-406a-9d7d-a285d490cf2a'], tags=[], tenant_id=302db78508a144bba1f0c936c7bc3750, updated_at=2026-02-23T10:05:03Z, vlan_transparent=None, network_id=e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d, port_security_enabled=False, project_id=302db78508a144bba1f0c936c7bc3750, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3728, status=DOWN, tags=[], tenant_id=302db78508a144bba1f0c936c7bc3750, updated_at=2026-02-23T10:05:06Z on network e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d#033[00m Feb 23 05:05:07 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc", "auth_id": "tempest-cephx-id-550070678", "format": "json"}]: dispatch Feb 23 05:05:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume deauthorize, sub_name:08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc, vol_name:cephfs) < "" Feb 23 05:05:07 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} v 0) Feb 23 05:05:07 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:07 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} v 0) Feb 23 05:05:07 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:05:07 localhost dnsmasq[326152]: read /var/lib/neutron/dhcp/e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d/addn_hosts - 1 addresses Feb 23 05:05:07 localhost dnsmasq-dhcp[326152]: read /var/lib/neutron/dhcp/e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d/host Feb 23 05:05:07 localhost dnsmasq-dhcp[326152]: read /var/lib/neutron/dhcp/e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d/opts Feb 23 05:05:07 localhost podman[326168]: 2026-02-23 10:05:07.13109209 +0000 UTC m=+0.055725835 container kill 7a79a167621f8b6e41fd8a3ef153a3ece07e0cca2db57f49290a653900a14e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base 
Image, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 23 05:05:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume deauthorize, sub_name:08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc, vol_name:cephfs) < "" Feb 23 05:05:07 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc", "auth_id": "tempest-cephx-id-550070678", "format": "json"}]: dispatch Feb 23 05:05:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume evict, sub_name:08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc, vol_name:cephfs) < "" Feb 23 05:05:07 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:07 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:05:07 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:05:07 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"}]': finished Feb 23 05:05:07 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-550070678, client_metadata.root=/volumes/_nogroup/08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc/3090d14f-d090-4924-bc2f-5b879acc23d8 Feb 23 05:05:07 localhost ceph-mgr[285904]: [volumes 
INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:05:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume evict, sub_name:08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc, vol_name:cephfs) < "" Feb 23 05:05:07 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc", "format": "json"}]: dispatch Feb 23 05:05:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:05:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:05:07 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:05:07.330+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc' of type subvolume Feb 23 05:05:07 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc' of type subvolume Feb 23 05:05:07 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc", "force": true, "format": "json"}]: dispatch Feb 23 05:05:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc, 
vol_name:cephfs) < "" Feb 23 05:05:07 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc'' moved to trashcan Feb 23 05:05:07 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:05:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:08fb1b39-07a9-4d7d-bcb4-cece4d84a3fc, vol_name:cephfs) < "" Feb 23 05:05:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v583: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 128 KiB/s wr, 11 op/s Feb 23 05:05:07 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:07.392 263679 INFO neutron.agent.dhcp.agent [None req-980a064c-56d2-47ca-b631-89edcf05932b - - - - - -] DHCP configuration for ports {'d6d03809-37cb-4be8-a4b1-56999c64e9b7'} is completed#033[00m Feb 23 05:05:07 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "r", "format": "json"}]: dispatch Feb 23 05:05:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:05:07 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 23 05:05:07 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' 
entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:05:07 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:05:07 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:05:07 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:05:07 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:07.621 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:05:06Z, description=, device_id=9a677207-db32-4b9e-abaa-4a936b2ee47c, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], 
fixed_ips=[], id=d6d03809-37cb-4be8-a4b1-56999c64e9b7, ip_allocation=immediate, mac_address=fa:16:3e:4d:d0:2f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:05:02Z, description=, dns_domain=, id=e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-1364738551-network, port_security_enabled=True, project_id=302db78508a144bba1f0c936c7bc3750, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=49390, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3707, status=ACTIVE, subnets=['6a0e74e0-7ba9-406a-9d7d-a285d490cf2a'], tags=[], tenant_id=302db78508a144bba1f0c936c7bc3750, updated_at=2026-02-23T10:05:03Z, vlan_transparent=None, network_id=e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d, port_security_enabled=False, project_id=302db78508a144bba1f0c936c7bc3750, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3728, status=DOWN, tags=[], tenant_id=302db78508a144bba1f0c936c7bc3750, updated_at=2026-02-23T10:05:06Z on network e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d#033[00m Feb 23 05:05:07 localhost dnsmasq[326152]: read /var/lib/neutron/dhcp/e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d/addn_hosts - 1 addresses Feb 23 05:05:07 localhost dnsmasq-dhcp[326152]: read /var/lib/neutron/dhcp/e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d/host Feb 23 05:05:07 localhost dnsmasq-dhcp[326152]: read /var/lib/neutron/dhcp/e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d/opts Feb 23 05:05:07 localhost podman[326207]: 2026-02-23 10:05:07.810914422 +0000 UTC m=+0.043898104 container kill 7a79a167621f8b6e41fd8a3ef153a3ece07e0cca2db57f49290a653900a14e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d, 
org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:05:07 localhost systemd[1]: tmp-crun.NU3Qpb.mount: Deactivated successfully. Feb 23 05:05:08 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:08.036 263679 INFO neutron.agent.dhcp.agent [None req-b483210d-bbab-417b-8743-f49151c1a735 - - - - - -] DHCP configuration for ports {'d6d03809-37cb-4be8-a4b1-56999c64e9b7'} is completed#033[00m Feb 23 05:05:08 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:05:08 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:08 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:08 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r 
path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:05:08 localhost nova_compute[280321]: 2026-02-23 10:05:08.303 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v584: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 269 B/s rd, 113 KiB/s wr, 9 op/s Feb 23 05:05:10 localhost nova_compute[280321]: 2026-02-23 10:05:10.078 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:10 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "812d8099-4259-45c6-8802-8b5ec410d596", "auth_id": "tempest-cephx-id-550070678", "tenant_id": "15d1711403cd469e88c36db6fc4b0add", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:05:10 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume authorize, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, tenant_id:15d1711403cd469e88c36db6fc4b0add, vol_name:cephfs) < "" Feb 23 05:05:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} v 0) Feb 23 05:05:10 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} 
: dispatch Feb 23 05:05:10 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID tempest-cephx-id-550070678 with tenant 15d1711403cd469e88c36db6fc4b0add Feb 23 05:05:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:05:10 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:10 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:10 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume authorize, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, tenant_id:15d1711403cd469e88c36db6fc4b0add, vol_name:cephfs) < "" Feb 23 05:05:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:05:10 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "format": "json"}]: dispatch Feb 23 05:05:10 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 23 05:05:10 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:05:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 23 05:05:10 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:05:10 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:10 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice", "format": "json"}]: dispatch Feb 23 05:05:10 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:10 localhost ceph-mgr[285904]: [volumes INFO 
volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:05:10 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:05:10 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:11 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:11 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:11 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:05:11 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' 
entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 23 05:05:11 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:05:11 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 23 05:05:11 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 23 05:05:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v585: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 180 KiB/s wr, 16 op/s Feb 23 05:05:11 localhost dnsmasq[325972]: read /var/lib/neutron/dhcp/4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb/addn_hosts - 0 addresses Feb 23 05:05:11 localhost dnsmasq-dhcp[325972]: read /var/lib/neutron/dhcp/4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb/host Feb 23 05:05:11 localhost podman[326247]: 2026-02-23 10:05:11.428619977 +0000 UTC m=+0.065015410 container kill 17bd5701632ebaf86782210bf854994a2807f7161476fced2b1ac8c78deb73ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS) Feb 23 05:05:11 localhost dnsmasq-dhcp[325972]: read /var/lib/neutron/dhcp/4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb/opts Feb 23 05:05:11 localhost nova_compute[280321]: 2026-02-23 10:05:11.610 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:11 localhost kernel: device tap4578602e-78 left promiscuous mode Feb 23 05:05:11 localhost ovn_controller[155966]: 2026-02-23T10:05:11Z|00417|binding|INFO|Releasing lport 4578602e-78ba-4d20-888b-235c333a44a2 from this chassis (sb_readonly=0) Feb 23 05:05:11 localhost ovn_controller[155966]: 2026-02-23T10:05:11Z|00418|binding|INFO|Setting lport 4578602e-78ba-4d20-888b-235c333a44a2 down in Southbound Feb 23 05:05:11 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:11.621 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1738c12aa55f4b22a7dbf47ada3be0e7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c40d5894-94a9-4539-89ed-c488c9339ed5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4578602e-78ba-4d20-888b-235c333a44a2) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:05:11 localhost 
ovn_metadata_agent[161837]: 2026-02-23 10:05:11.623 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 4578602e-78ba-4d20-888b-235c333a44a2 in datapath 4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb unbound from our chassis#033[00m Feb 23 05:05:11 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:11.626 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 05:05:11 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:11.627 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[bddc3683-c455-4d91-a0ac-cc3ec2e6efd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:05:11 localhost nova_compute[280321]: 2026-02-23 10:05:11.635 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:12 localhost nova_compute[280321]: 2026-02-23 10:05:12.572 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:12 localhost podman[241086]: time="2026-02-23T10:05:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 05:05:12 localhost podman[241086]: @ - - [23/Feb/2026:10:05:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157717 "" "Go-http-client/1.1" Feb 23 05:05:12 localhost podman[241086]: @ - - [23/Feb/2026:10:05:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18762 "" "Go-http-client/1.1" Feb 23 05:05:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. 
Feb 23 05:05:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 05:05:13 localhost podman[326279]: 2026-02-23 10:05:13.020833543 +0000 UTC m=+0.093610004 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 05:05:13 localhost podman[326279]: 2026-02-23 10:05:13.0367562 +0000 UTC m=+0.109532641 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 05:05:13 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 05:05:13 localhost systemd[1]: tmp-crun.l2uIMw.mount: Deactivated successfully. 
Feb 23 05:05:13 localhost podman[326280]: 2026-02-23 10:05:13.12861388 +0000 UTC m=+0.198317187 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, vcs-type=git) Feb 23 05:05:13 localhost dnsmasq[325972]: exiting on receipt of SIGTERM Feb 23 05:05:13 localhost podman[326308]: 2026-02-23 10:05:13.152751108 +0000 UTC m=+0.164133341 container kill 17bd5701632ebaf86782210bf854994a2807f7161476fced2b1ac8c78deb73ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) 
Feb 23 05:05:13 localhost systemd[1]: libpod-17bd5701632ebaf86782210bf854994a2807f7161476fced2b1ac8c78deb73ff.scope: Deactivated successfully. Feb 23 05:05:13 localhost podman[326280]: 2026-02-23 10:05:13.191559464 +0000 UTC m=+0.261262771 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, version=9.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 23 05:05:13 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 05:05:13 localhost podman[326345]: 2026-02-23 10:05:13.235348744 +0000 UTC m=+0.062550174 container died 17bd5701632ebaf86782210bf854994a2807f7161476fced2b1ac8c78deb73ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 23 05:05:13 localhost podman[326345]: 2026-02-23 10:05:13.274986536 +0000 UTC m=+0.102187886 container remove 17bd5701632ebaf86782210bf854994a2807f7161476fced2b1ac8c78deb73ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e45ae6f-cec3-479e-b4bb-26e83bd9e8fb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0) Feb 23 05:05:13 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:13.298 263679 INFO neutron.agent.dhcp.agent [None req-71be017c-5324-4cba-bd21-cfe2c80461c6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:05:13 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:13.299 263679 INFO neutron.agent.dhcp.agent [None req-71be017c-5324-4cba-bd21-cfe2c80461c6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:05:13 localhost systemd[1]: libpod-conmon-17bd5701632ebaf86782210bf854994a2807f7161476fced2b1ac8c78deb73ff.scope: Deactivated successfully. 
Feb 23 05:05:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v586: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 114 KiB/s wr, 10 op/s Feb 23 05:05:13 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "812d8099-4259-45c6-8802-8b5ec410d596", "auth_id": "tempest-cephx-id-550070678", "format": "json"}]: dispatch Feb 23 05:05:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume deauthorize, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:13 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} v 0) Feb 23 05:05:13 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:13 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} v 0) Feb 23 05:05:13 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:05:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume deauthorize, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:13 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "812d8099-4259-45c6-8802-8b5ec410d596", "auth_id": "tempest-cephx-id-550070678", "format": "json"}]: dispatch Feb 23 05:05:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume evict, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-550070678, client_metadata.root=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e Feb 23 05:05:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:05:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume evict, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:13 localhost ovn_controller[155966]: 2026-02-23T10:05:13Z|00419|ovn_bfd|INFO|Disabled BFD on interface ovn-5b0126-0 Feb 23 05:05:13 localhost ovn_controller[155966]: 2026-02-23T10:05:13Z|00420|ovn_bfd|INFO|Disabled BFD on interface ovn-585d62-0 Feb 23 05:05:13 localhost ovn_controller[155966]: 2026-02-23T10:05:13Z|00421|ovn_bfd|INFO|Disabled BFD on interface ovn-b9c72d-0 Feb 23 05:05:13 localhost nova_compute[280321]: 2026-02-23 10:05:13.897 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:13 localhost nova_compute[280321]: 2026-02-23 10:05:13.914 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:13 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : 
from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:05:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:05:13 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 23 05:05:13 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:05:13 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice_bob with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:05:13 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:13 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:05:13 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:05:13 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"}]': finished Feb 23 05:05:13 localhost 
dnsmasq[326152]: read /var/lib/neutron/dhcp/e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d/addn_hosts - 0 addresses Feb 23 05:05:13 localhost dnsmasq-dhcp[326152]: read /var/lib/neutron/dhcp/e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d/host Feb 23 05:05:13 localhost dnsmasq-dhcp[326152]: read /var/lib/neutron/dhcp/e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d/opts Feb 23 05:05:13 localhost podman[326388]: 2026-02-23 10:05:13.998018609 +0000 UTC m=+0.064103192 container kill 7a79a167621f8b6e41fd8a3ef153a3ece07e0cca2db57f49290a653900a14e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:05:14 localhost systemd[1]: var-lib-containers-storage-overlay-aafd8a9baf6513d8a3b9ce70ad0236cf79c1c691eb20b03eca1f4988f02dfecd-merged.mount: Deactivated successfully. Feb 23 05:05:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17bd5701632ebaf86782210bf854994a2807f7161476fced2b1ac8c78deb73ff-userdata-shm.mount: Deactivated successfully. Feb 23 05:05:14 localhost systemd[1]: run-netns-qdhcp\x2d4e45ae6f\x2dcec3\x2d479e\x2db4bb\x2d26e83bd9e8fb.mount: Deactivated successfully. 
Feb 23 05:05:14 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:05:14 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:14 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:05:14 localhost ovn_controller[155966]: 2026-02-23T10:05:14Z|00422|binding|INFO|Releasing lport d56ba65d-2b1b-43f4-a1a8-5e0908deca96 from this chassis (sb_readonly=0) Feb 23 05:05:14 localhost ovn_controller[155966]: 2026-02-23T10:05:14Z|00423|binding|INFO|Setting lport d56ba65d-2b1b-43f4-a1a8-5e0908deca96 down in Southbound Feb 23 05:05:14 localhost nova_compute[280321]: 2026-02-23 10:05:14.151 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:14 localhost kernel: device tapd56ba65d-2b left promiscuous mode Feb 23 05:05:14 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:14.166 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: 
PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '302db78508a144bba1f0c936c7bc3750', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aebff21e-9abf-4615-8059-969157d37341, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d56ba65d-2b1b-43f4-a1a8-5e0908deca96) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:05:14 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:14.168 161842 INFO neutron.agent.ovn.metadata.agent [-] Port d56ba65d-2b1b-43f4-a1a8-5e0908deca96 in datapath e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d unbound from our chassis#033[00m Feb 23 05:05:14 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:14.170 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 05:05:14 localhost 
ovn_metadata_agent[161837]: 2026-02-23 10:05:14.171 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a13142-76ef-4120-8a49-ef5fd7dec644]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:05:14 localhost nova_compute[280321]: 2026-02-23 10:05:14.186 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:14 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:05:14 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:14 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:14 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:05:15 localhost 
nova_compute[280321]: 2026-02-23 10:05:15.102 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v587: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 113 KiB/s wr, 10 op/s Feb 23 05:05:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:05:15 localhost dnsmasq[326152]: exiting on receipt of SIGTERM Feb 23 05:05:15 localhost podman[326428]: 2026-02-23 10:05:15.942530931 +0000 UTC m=+0.059263653 container kill 7a79a167621f8b6e41fd8a3ef153a3ece07e0cca2db57f49290a653900a14e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:05:15 localhost systemd[1]: libpod-7a79a167621f8b6e41fd8a3ef153a3ece07e0cca2db57f49290a653900a14e13.scope: Deactivated successfully. 
Feb 23 05:05:15 localhost podman[326441]: 2026-02-23 10:05:15.99840067 +0000 UTC m=+0.044575314 container died 7a79a167621f8b6e41fd8a3ef153a3ece07e0cca2db57f49290a653900a14e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 23 05:05:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7a79a167621f8b6e41fd8a3ef153a3ece07e0cca2db57f49290a653900a14e13-userdata-shm.mount: Deactivated successfully. Feb 23 05:05:16 localhost podman[326441]: 2026-02-23 10:05:16.027521971 +0000 UTC m=+0.073696605 container cleanup 7a79a167621f8b6e41fd8a3ef153a3ece07e0cca2db57f49290a653900a14e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0) Feb 23 05:05:16 localhost systemd[1]: libpod-conmon-7a79a167621f8b6e41fd8a3ef153a3ece07e0cca2db57f49290a653900a14e13.scope: Deactivated successfully. 
Feb 23 05:05:16 localhost podman[326443]: 2026-02-23 10:05:16.095042506 +0000 UTC m=+0.133707450 container remove 7a79a167621f8b6e41fd8a3ef153a3ece07e0cca2db57f49290a653900a14e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e32eb7bb-4a5d-4969-a21a-c7ab4155bb1d, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2) Feb 23 05:05:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:16.118 263679 INFO neutron.agent.dhcp.agent [None req-36eb3739-7375-4db0-acb8-efdd090a12de - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:05:16 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:16.171 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:05:16 localhost nova_compute[280321]: 2026-02-23 10:05:16.365 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:16 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "812d8099-4259-45c6-8802-8b5ec410d596", "auth_id": "tempest-cephx-id-550070678", "tenant_id": "15d1711403cd469e88c36db6fc4b0add", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:05:16 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume authorize, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, tenant_id:15d1711403cd469e88c36db6fc4b0add, 
vol_name:cephfs) < "" Feb 23 05:05:16 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} v 0) Feb 23 05:05:16 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:16 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID tempest-cephx-id-550070678 with tenant 15d1711403cd469e88c36db6fc4b0add Feb 23 05:05:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 05:05:16 localhost systemd[1]: var-lib-containers-storage-overlay-d47aa24f900d09e2b9c628e45492211678a9fbf6a00da8cd851c7651de15c321-merged.mount: Deactivated successfully. Feb 23 05:05:16 localhost systemd[1]: run-netns-qdhcp\x2de32eb7bb\x2d4a5d\x2d4969\x2da21a\x2dc7ab4155bb1d.mount: Deactivated successfully. 
Feb 23 05:05:16 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:05:16 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:16 localhost systemd[1]: tmp-crun.K4Sh5U.mount: Deactivated successfully. 
Feb 23 05:05:16 localhost podman[326470]: 2026-02-23 10:05:16.988619835 +0000 UTC m=+0.068663731 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller) Feb 23 05:05:17 localhost podman[326470]: 2026-02-23 10:05:17.016723455 +0000 UTC m=+0.096767381 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:05:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume authorize, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, tenant_id:15d1711403cd469e88c36db6fc4b0add, vol_name:cephfs) < "" Feb 23 05:05:17 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 05:05:17 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:17 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:17 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:17 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:05:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v588: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 163 KiB/s wr, 14 op/s Feb 23 05:05:17 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": 
"a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 23 05:05:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:17 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 23 05:05:17 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:05:17 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 23 05:05:17 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:05:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:17 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 23 05:05:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:17 localhost ceph-mgr[285904]: [volumes INFO 
volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:05:17 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:05:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:18 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:05:18 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:05:18 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:05:18 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 23 05:05:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v589: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 122 KiB/s wr, 11 op/s Feb 23 05:05:20 localhost nova_compute[280321]: 2026-02-23 10:05:20.102 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:20 localhost nova_compute[280321]: 2026-02-23 10:05:20.106 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:20 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "812d8099-4259-45c6-8802-8b5ec410d596", "auth_id": "tempest-cephx-id-550070678", "format": "json"}]: dispatch Feb 23 05:05:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume deauthorize, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} v 0) Feb 23 05:05:20 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} v 0) Feb 23 05:05:20 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:05:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume deauthorize, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:20 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "812d8099-4259-45c6-8802-8b5ec410d596", "auth_id": "tempest-cephx-id-550070678", "format": "json"}]: dispatch Feb 23 05:05:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume evict, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:20 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-550070678, client_metadata.root=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e Feb 23 05:05:20 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:05:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume evict, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:20 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:20 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:05:20 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:05:20 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"}]': finished Feb 23 05:05:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:05:20 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": 
"a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "r", "format": "json"}]: dispatch Feb 23 05:05:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:05:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 23 05:05:20 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:05:20 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice_bob with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:05:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:05:20 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:20 localhost 
ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:05:21 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:05:21 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:21 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:21 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:05:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v590: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 194 KiB/s wr, 16 op/s Feb 
23 05:05:23 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "812d8099-4259-45c6-8802-8b5ec410d596", "auth_id": "tempest-cephx-id-550070678", "tenant_id": "15d1711403cd469e88c36db6fc4b0add", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:05:23 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume authorize, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, tenant_id:15d1711403cd469e88c36db6fc4b0add, vol_name:cephfs) < "" Feb 23 05:05:23 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} v 0) Feb 23 05:05:23 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:23 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID tempest-cephx-id-550070678 with tenant 15d1711403cd469e88c36db6fc4b0add Feb 23 05:05:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v591: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 122 KiB/s wr, 10 op/s Feb 23 05:05:23 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:05:23 
localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:23 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume authorize, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, tenant_id:15d1711403cd469e88c36db6fc4b0add, vol_name:cephfs) < "" Feb 23 05:05:23 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:23 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:23 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:23 localhost ceph-mon[296755]: 
from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:05:23 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 23 05:05:23 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:24 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 23 05:05:24 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:05:24 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 23 05:05:24 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:05:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:24 
localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 23 05:05:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:24 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:05:24 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:05:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 05:05:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 05:05:24 localhost podman[326498]: 2026-02-23 10:05:24.997646375 +0000 UTC m=+0.071759907 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 23 05:05:25 localhost 
podman[326498]: 2026-02-23 10:05:25.003501313 +0000 UTC m=+0.077614825 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true) Feb 23 05:05:25 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' 
entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 23 05:05:25 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:05:25 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 23 05:05:25 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 23 05:05:25 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 05:05:25 localhost podman[326499]: 2026-02-23 10:05:25.058465715 +0000 UTC m=+0.129407900 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:05:25 localhost podman[326499]: 2026-02-23 10:05:25.098974953 +0000 UTC m=+0.169917108 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, container_name=ceilometer_agent_compute) Feb 23 05:05:25 localhost nova_compute[280321]: 2026-02-23 10:05:25.103 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:25 localhost nova_compute[280321]: 2026-02-23 10:05:25.107 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:25 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. 
Feb 23 05:05:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v592: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 122 KiB/s wr, 10 op/s Feb 23 05:05:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:05:26 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "812d8099-4259-45c6-8802-8b5ec410d596", "auth_id": "tempest-cephx-id-550070678", "format": "json"}]: dispatch Feb 23 05:05:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume deauthorize, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} v 0) Feb 23 05:05:26 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:26 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} v 0) Feb 23 05:05:26 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:05:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume 
deauthorize, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:26 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "812d8099-4259-45c6-8802-8b5ec410d596", "auth_id": "tempest-cephx-id-550070678", "format": "json"}]: dispatch Feb 23 05:05:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume evict, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:26 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-550070678, client_metadata.root=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e Feb 23 05:05:26 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:05:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume evict, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:27 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:27 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:05:27 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:05:27 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' 
cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"}]': finished Feb 23 05:05:27 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:05:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:05:27 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 23 05:05:27 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:05:27 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice bob with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:05:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v593: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 160 KiB/s wr, 14 op/s Feb 23 05:05:27 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:05:27 localhost 
ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:05:28 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:05:28 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:28 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:28 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": 
"client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:05:28 localhost nova_compute[280321]: 2026-02-23 10:05:28.903 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:05:28 localhost nova_compute[280321]: 2026-02-23 10:05:28.904 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 05:05:28 localhost nova_compute[280321]: 2026-02-23 10:05:28.904 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 05:05:28 localhost nova_compute[280321]: 2026-02-23 10:05:28.924 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 05:05:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v594: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 111 KiB/s wr, 10 op/s Feb 23 05:05:29 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:29.395 263679 INFO neutron.agent.linux.ip_lib [None req-b4807e38-dd71-460d-9412-e1392cebbcbe - - - - - -] Device tap67b9e7ae-b9 cannot be used as it has no MAC address#033[00m Feb 23 05:05:29 localhost nova_compute[280321]: 2026-02-23 10:05:29.416 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:29 localhost kernel: device tap67b9e7ae-b9 entered promiscuous mode Feb 23 05:05:29 localhost NetworkManager[5987]: [1771841129.4243] manager: (tap67b9e7ae-b9): new Generic device (/org/freedesktop/NetworkManager/Devices/72) Feb 23 05:05:29 localhost ovn_controller[155966]: 2026-02-23T10:05:29Z|00424|binding|INFO|Claiming lport 67b9e7ae-b98f-4e79-9933-6256f2e4ff3c for this chassis. Feb 23 05:05:29 localhost nova_compute[280321]: 2026-02-23 10:05:29.424 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:29 localhost ovn_controller[155966]: 2026-02-23T10:05:29Z|00425|binding|INFO|67b9e7ae-b98f-4e79-9933-6256f2e4ff3c: Claiming unknown Feb 23 05:05:29 localhost systemd-udevd[326545]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 05:05:29 localhost nova_compute[280321]: 2026-02-23 10:05:29.429 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:29 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:29.441 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-b025481d-e987-40db-8a53-4c234c9213cb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b025481d-e987-40db-8a53-4c234c9213cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f79d32a621b2444d9fac9131dda85cfa', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd69dfcd-7e68-49c9-ac78-a5dbefd27d2a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=67b9e7ae-b98f-4e79-9933-6256f2e4ff3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:05:29 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:29.442 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 67b9e7ae-b98f-4e79-9933-6256f2e4ff3c in datapath b025481d-e987-40db-8a53-4c234c9213cb bound to our chassis#033[00m Feb 23 05:05:29 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:29.446 161842 DEBUG 
neutron.agent.ovn.metadata.agent [-] Port cc004eeb-9037-4a23-a54f-2c2bf4f26af9 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 05:05:29 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:29.446 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b025481d-e987-40db-8a53-4c234c9213cb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 05:05:29 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:29.447 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[2cfeff73-ff6e-40dd-a31c-3084b6329bd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:05:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 05:05:29 localhost journal[229268]: ethtool ioctl error on tap67b9e7ae-b9: No such device Feb 23 05:05:29 localhost ovn_controller[155966]: 2026-02-23T10:05:29Z|00426|binding|INFO|Setting lport 67b9e7ae-b98f-4e79-9933-6256f2e4ff3c ovn-installed in OVS Feb 23 05:05:29 localhost ovn_controller[155966]: 2026-02-23T10:05:29Z|00427|binding|INFO|Setting lport 67b9e7ae-b98f-4e79-9933-6256f2e4ff3c up in Southbound Feb 23 05:05:29 localhost journal[229268]: ethtool ioctl error on tap67b9e7ae-b9: No such device Feb 23 05:05:29 localhost nova_compute[280321]: 2026-02-23 10:05:29.461 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:29 localhost journal[229268]: ethtool ioctl error on tap67b9e7ae-b9: No such device Feb 23 05:05:29 localhost journal[229268]: ethtool ioctl error on tap67b9e7ae-b9: No such device Feb 23 05:05:29 localhost journal[229268]: ethtool ioctl error on tap67b9e7ae-b9: No such device Feb 23 05:05:29 localhost journal[229268]: ethtool ioctl error on tap67b9e7ae-b9: No such device Feb 23 05:05:29 localhost journal[229268]: ethtool ioctl error on tap67b9e7ae-b9: No such device Feb 23 05:05:29 localhost journal[229268]: ethtool ioctl error on tap67b9e7ae-b9: No such device Feb 23 05:05:29 localhost nova_compute[280321]: 2026-02-23 10:05:29.492 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:29 localhost nova_compute[280321]: 2026-02-23 10:05:29.516 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:29 localhost systemd[1]: tmp-crun.YB0L1k.mount: Deactivated successfully. 
Feb 23 05:05:29 localhost podman[326550]: 2026-02-23 10:05:29.568633763 +0000 UTC m=+0.103090813 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 05:05:29 localhost podman[326550]: 2026-02-23 10:05:29.584777877 +0000 UTC m=+0.119234947 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Feb 23 05:05:29 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully.
Feb 23 05:05:29 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "812d8099-4259-45c6-8802-8b5ec410d596", "auth_id": "tempest-cephx-id-550070678", "tenant_id": "15d1711403cd469e88c36db6fc4b0add", "access_level": "rw", "format": "json"}]: dispatch
Feb 23 05:05:29 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume authorize, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, tenant_id:15d1711403cd469e88c36db6fc4b0add, vol_name:cephfs) < ""
Feb 23 05:05:29 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} v 0)
Feb 23 05:05:29 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch
Feb 23 05:05:29 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID tempest-cephx-id-550070678 with tenant 15d1711403cd469e88c36db6fc4b0add
Feb 23 05:05:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"} v 0)
Feb 23 05:05:30 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"} : dispatch
Feb 23 05:05:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume authorize, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, tenant_id:15d1711403cd469e88c36db6fc4b0add, vol_name:cephfs) < ""
Feb 23 05:05:30 localhost nova_compute[280321]: 2026-02-23 10:05:30.105 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:05:30 localhost nova_compute[280321]: 2026-02-23 10:05:30.108 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:05:30 localhost sshd[326617]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 05:05:30 localhost podman[326641]:
Feb 23 05:05:30 localhost podman[326641]: 2026-02-23 10:05:30.414903177 +0000 UTC m=+0.095371228 container create 8555c0cc25d942db22795f03b6ecb2247ea1d080ff6264214a142650868ca6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b025481d-e987-40db-8a53-4c234c9213cb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 05:05:30 localhost systemd[1]: Started libpod-conmon-8555c0cc25d942db22795f03b6ecb2247ea1d080ff6264214a142650868ca6e2.scope.
Feb 23 05:05:30 localhost systemd[1]: Started libcrun container.
Feb 23 05:05:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc9e7a69627b774015c2087c0a1a52e22a298852d9dfcf7a541c1cc25fe4513/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 23 05:05:30 localhost podman[326641]: 2026-02-23 10:05:30.372383266 +0000 UTC m=+0.052851317 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 23 05:05:30 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch
Feb 23 05:05:30 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"} : dispatch
Feb 23 05:05:30 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"} : dispatch
Feb 23 05:05:30 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-550070678", "caps": ["mds", "allow rw path=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e", "osd", "allow rw pool=manila_data namespace=fsvolumens_812d8099-4259-45c6-8802-8b5ec410d596", "mon", "allow r"], "format": "json"}]': finished
Feb 23 05:05:30 localhost podman[326641]: 2026-02-23 10:05:30.482872255 +0000 UTC m=+0.163340296 container init 8555c0cc25d942db22795f03b6ecb2247ea1d080ff6264214a142650868ca6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b025481d-e987-40db-8a53-4c234c9213cb, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2)
Feb 23 05:05:30 localhost podman[326641]: 2026-02-23 10:05:30.493136459 +0000 UTC m=+0.173604530 container start 8555c0cc25d942db22795f03b6ecb2247ea1d080ff6264214a142650868ca6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b025481d-e987-40db-8a53-4c234c9213cb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 23 05:05:30 localhost dnsmasq[326659]: started, version 2.85 cachesize 150
Feb 23 05:05:30 localhost dnsmasq[326659]: DNS service limited to local subnets
Feb 23 05:05:30 localhost dnsmasq[326659]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 23 05:05:30 localhost dnsmasq[326659]: warning: no upstream servers configured
Feb 23 05:05:30 localhost dnsmasq-dhcp[326659]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 23 05:05:30 localhost dnsmasq[326659]: read /var/lib/neutron/dhcp/b025481d-e987-40db-8a53-4c234c9213cb/addn_hosts - 0 addresses
Feb 23 05:05:30 localhost dnsmasq-dhcp[326659]: read /var/lib/neutron/dhcp/b025481d-e987-40db-8a53-4c234c9213cb/host
Feb 23 05:05:30 localhost dnsmasq-dhcp[326659]: read /var/lib/neutron/dhcp/b025481d-e987-40db-8a53-4c234c9213cb/opts
Feb 23 05:05:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:05:30 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:30.549 263679 INFO neutron.agent.dhcp.agent [None req-88e97bf8-e119-4103-ab5a-8a13c967d189 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:05:29Z, description=, device_id=bf770441-f389-4f92-aa3a-271368cb88d2, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1831142c-ba32-4237-8ec6-f42419281867, ip_allocation=immediate, mac_address=fa:16:3e:95:80:e4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:05:26Z, description=, dns_domain=, id=b025481d-e987-40db-8a53-4c234c9213cb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PrometheusGabbiTest-1852153397-network, port_security_enabled=True, project_id=f79d32a621b2444d9fac9131dda85cfa, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15689, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3748, status=ACTIVE, subnets=['d5733e35-161c-4dbb-adc6-12dff53eda11'], tags=[], tenant_id=f79d32a621b2444d9fac9131dda85cfa, updated_at=2026-02-23T10:05:27Z, vlan_transparent=None, network_id=b025481d-e987-40db-8a53-4c234c9213cb, port_security_enabled=False, project_id=f79d32a621b2444d9fac9131dda85cfa, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3756, status=DOWN, tags=[], tenant_id=f79d32a621b2444d9fac9131dda85cfa, updated_at=2026-02-23T10:05:30Z on network b025481d-e987-40db-8a53-4c234c9213cb#033[00m
Feb 23 05:05:30 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 23 05:05:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < ""
Feb 23 05:05:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 23 05:05:30 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 23 05:05:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb 23 05:05:30 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 23 05:05:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < ""
Feb 23 05:05:30 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 23 05:05:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < ""
Feb 23 05:05:30 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae
Feb 23 05:05:30 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 23 05:05:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < ""
Feb 23 05:05:30 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:30.698 263679 INFO neutron.agent.dhcp.agent [None req-16944d28-36fc-4665-b963-f4c8df420b21 - - - - - -] DHCP configuration for ports {'11a60c41-bb2a-4f87-8d06-a3ff65fa7c9b'} is completed#033[00m
Feb 23 05:05:30 localhost dnsmasq[326659]: read /var/lib/neutron/dhcp/b025481d-e987-40db-8a53-4c234c9213cb/addn_hosts - 1 addresses
Feb 23 05:05:30 localhost dnsmasq-dhcp[326659]: read /var/lib/neutron/dhcp/b025481d-e987-40db-8a53-4c234c9213cb/host
Feb 23 05:05:30 localhost podman[326676]: 2026-02-23 10:05:30.754970747 +0000 UTC m=+0.058614074 container kill 8555c0cc25d942db22795f03b6ecb2247ea1d080ff6264214a142650868ca6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b025481d-e987-40db-8a53-4c234c9213cb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 23 05:05:30 localhost dnsmasq-dhcp[326659]: read /var/lib/neutron/dhcp/b025481d-e987-40db-8a53-4c234c9213cb/opts
Feb 23 05:05:30 localhost nova_compute[280321]: 2026-02-23 10:05:30.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:05:30 localhost nova_compute[280321]: 2026-02-23 10:05:30.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:05:30 localhost nova_compute[280321]: 2026-02-23 10:05:30.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:05:30 localhost nova_compute[280321]: 2026-02-23 10:05:30.893 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 23 05:05:30 localhost nova_compute[280321]: 2026-02-23 10:05:30.893 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 23 05:05:30 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:30.899 263679 INFO neutron.agent.dhcp.agent [None req-de03f0e5-a117-48f7-ba0f-3b6eb3a4a9e2 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:05:29Z, description=, device_id=bf770441-f389-4f92-aa3a-271368cb88d2, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1831142c-ba32-4237-8ec6-f42419281867, ip_allocation=immediate, mac_address=fa:16:3e:95:80:e4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:05:26Z, description=, dns_domain=, id=b025481d-e987-40db-8a53-4c234c9213cb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PrometheusGabbiTest-1852153397-network, port_security_enabled=True, project_id=f79d32a621b2444d9fac9131dda85cfa, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15689, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3748, status=ACTIVE, subnets=['d5733e35-161c-4dbb-adc6-12dff53eda11'], tags=[], tenant_id=f79d32a621b2444d9fac9131dda85cfa, updated_at=2026-02-23T10:05:27Z, vlan_transparent=None, network_id=b025481d-e987-40db-8a53-4c234c9213cb, port_security_enabled=False, project_id=f79d32a621b2444d9fac9131dda85cfa, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3756, status=DOWN, tags=[], tenant_id=f79d32a621b2444d9fac9131dda85cfa, updated_at=2026-02-23T10:05:30Z on network b025481d-e987-40db-8a53-4c234c9213cb#033[00m
Feb 23 05:05:30 localhost nova_compute[280321]: 2026-02-23 10:05:30.925 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 23 05:05:30 localhost nova_compute[280321]: 2026-02-23 10:05:30.925 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 23 05:05:30 localhost nova_compute[280321]: 2026-02-23 10:05:30.925 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 23 05:05:30 localhost nova_compute[280321]: 2026-02-23 10:05:30.926 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 23 05:05:30 localhost nova_compute[280321]: 2026-02-23 10:05:30.926 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 23 05:05:30 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:30.988 263679 INFO neutron.agent.dhcp.agent [None req-9d96f096-639d-4e2c-b0ac-4e1032ee90b9 - - - - - -] DHCP configuration for ports {'1831142c-ba32-4237-8ec6-f42419281867'} is completed#033[00m
Feb 23 05:05:31 localhost dnsmasq[326659]: read /var/lib/neutron/dhcp/b025481d-e987-40db-8a53-4c234c9213cb/addn_hosts - 1 addresses
Feb 23 05:05:31 localhost podman[326732]: 2026-02-23 10:05:31.137003621 +0000 UTC m=+0.054463866 container kill 8555c0cc25d942db22795f03b6ecb2247ea1d080ff6264214a142650868ca6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b025481d-e987-40db-8a53-4c234c9213cb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 23 05:05:31 localhost dnsmasq-dhcp[326659]: read /var/lib/neutron/dhcp/b025481d-e987-40db-8a53-4c234c9213cb/host
Feb 23 05:05:31 localhost dnsmasq-dhcp[326659]: read /var/lib/neutron/dhcp/b025481d-e987-40db-8a53-4c234c9213cb/opts
Feb 23 05:05:31 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 23 05:05:31 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/974437338' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 23 05:05:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v595: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 182 KiB/s wr, 16 op/s
Feb 23 05:05:31 localhost nova_compute[280321]: 2026-02-23 10:05:31.372 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 23 05:05:31 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 23 05:05:31 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 23 05:05:31 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 23 05:05:31 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 23 05:05:31 localhost nova_compute[280321]: 2026-02-23 10:05:31.582 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 23 05:05:31 localhost nova_compute[280321]: 2026-02-23 10:05:31.584 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=11550MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 23 05:05:31 localhost nova_compute[280321]: 2026-02-23 10:05:31.585 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 23 05:05:31 localhost nova_compute[280321]: 2026-02-23 10:05:31.585 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 23 05:05:31 localhost nova_compute[280321]: 2026-02-23 10:05:31.655 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 23 05:05:31 localhost nova_compute[280321]: 2026-02-23 10:05:31.656 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 23 05:05:31 localhost nova_compute[280321]: 2026-02-23 10:05:31.672 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 23 05:05:31 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:31.902 263679 INFO neutron.agent.dhcp.agent [None req-b292e6a5-aa7c-4bab-8695-9a233e220d3c - - - - - -] DHCP configuration for ports {'1831142c-ba32-4237-8ec6-f42419281867'} is completed#033[00m
Feb 23 05:05:31 localhost openstack_network_exporter[243519]: ERROR 10:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 23 05:05:31 localhost openstack_network_exporter[243519]:
Feb 23 05:05:31 localhost openstack_network_exporter[243519]: ERROR 10:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 23 05:05:31 localhost openstack_network_exporter[243519]:
Feb 23 05:05:32 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 23 05:05:32 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1581869464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 23 05:05:32 localhost nova_compute[280321]: 2026-02-23 10:05:32.178 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 23 05:05:32 localhost nova_compute[280321]: 2026-02-23 10:05:32.185 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 23 05:05:32 localhost nova_compute[280321]: 2026-02-23 10:05:32.209 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 23 05:05:32 localhost nova_compute[280321]: 2026-02-23 10:05:32.212 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 23 05:05:32 localhost nova_compute[280321]: 2026-02-23 10:05:32.213 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 23 05:05:32 localhost ovn_controller[155966]: 2026-02-23T10:05:32Z|00428|ovn_bfd|INFO|Enabled BFD on interface ovn-5b0126-0
Feb 23 05:05:32 localhost ovn_controller[155966]: 2026-02-23T10:05:32Z|00429|ovn_bfd|INFO|Enabled BFD on interface ovn-585d62-0
Feb 23 05:05:32 localhost ovn_controller[155966]: 2026-02-23T10:05:32Z|00430|ovn_bfd|INFO|Enabled BFD on interface ovn-b9c72d-0
Feb 23 05:05:32 localhost nova_compute[280321]: 2026-02-23 10:05:32.755 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:05:32 localhost nova_compute[280321]: 2026-02-23 10:05:32.768 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:05:32 localhost nova_compute[280321]: 2026-02-23 10:05:32.774 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:05:32 localhost nova_compute[280321]: 2026-02-23 10:05:32.844 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:05:32 localhost nova_compute[280321]: 2026-02-23 10:05:32.851 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:05:32 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 23 05:05:32 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 23 05:05:32 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 23 05:05:32 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 23 05:05:32 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 23 05:05:32 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev 82e860fd-6d19-46fe-835f-a1e561f021b8 (Updating node-proxy deployment (+3 -> 3))
Feb 23 05:05:32 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev 82e860fd-6d19-46fe-835f-a1e561f021b8 (Updating node-proxy deployment (+3 -> 3))
Feb 23 05:05:32 localhost ceph-mgr[285904]: [progress INFO root] Completed event 82e860fd-6d19-46fe-835f-a1e561f021b8 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Feb 23 05:05:32 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 23 05:05:32 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 23 05:05:32 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Feb 23 05:05:32 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:05:32.992311) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 23 05:05:32 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Feb 23 05:05:32 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841132992398, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1584, "num_deletes": 251, "total_data_size": 1988916, "memory_usage": 2018816, "flush_reason": "Manual Compaction"}
Feb 23 05:05:32 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841133001763, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 1301381, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33008, "largest_seqno": 34587, "table_properties": {"data_size": 1295011, "index_size": 3328, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16197, "raw_average_key_size": 20, "raw_value_size": 1280910, "raw_average_value_size": 1625, "num_data_blocks": 144, "num_entries": 788, "num_filter_entries": 788, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771841072, "oldest_key_time": 1771841072, "file_creation_time": 1771841132, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 9511 microseconds, and 5153 cpu microseconds.
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:05:33.001827) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 1301381 bytes OK
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:05:33.001857) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:05:33.003783) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:05:33.003808) EVENT_LOG_v1 {"time_micros": 1771841133003801, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:05:33.003831) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 1980925, prev total WAL file size 1980925, number of live WAL files 2.
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:05:33.004568) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031353234' seq:72057594037927935, type:22 .. '6B760031373735' seq:0, type:0; will stop at (end)
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(1270KB)], [54(18MB)]
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841133004622, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 20510689, "oldest_snapshot_seqno": -1}
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 14260 keys, 19462580 bytes, temperature: kUnknown
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841133087863, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 19462580, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19378434, "index_size": 47373, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35717, "raw_key_size": 383151, "raw_average_key_size": 26, "raw_value_size": 19133271, "raw_average_value_size": 1341, "num_data_blocks": 1771, "num_entries": 14260, "num_filter_entries": 14260, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771841133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:05:33.088221) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 19462580 bytes Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:05:33.090231) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 246.1 rd, 233.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 18.3 +0.0 blob) out(18.6 +0.0 blob), read-write-amplify(30.7) write-amplify(15.0) OK, records in: 14790, records dropped: 530 output_compression: NoCompression Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:05:33.090260) EVENT_LOG_v1 {"time_micros": 1771841133090248, "job": 32, "event": "compaction_finished", "compaction_time_micros": 83335, "compaction_time_cpu_micros": 47661, "output_level": 6, "num_output_files": 1, "total_output_size": 19462580, "num_input_records": 14790, "num_output_records": 14260, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841133090651, "job": 32, "event": "table_file_deletion", "file_number": 56} Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841133093491, 
"job": 32, "event": "table_file_deletion", "file_number": 54} Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:05:33.004474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:05:33.093546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:05:33.093553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:05:33.093557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:05:33.093560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:05:33 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:05:33.093563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:05:33 localhost nova_compute[280321]: 2026-02-23 10:05:33.209 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:05:33 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "812d8099-4259-45c6-8802-8b5ec410d596", "auth_id": "tempest-cephx-id-550070678", "format": "json"}]: dispatch Feb 23 05:05:33 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume deauthorize, 
sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v596: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 110 KiB/s wr, 11 op/s Feb 23 05:05:33 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} v 0) Feb 23 05:05:33 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:33 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} v 0) Feb 23 05:05:33 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:05:33 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume deauthorize, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:33 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "812d8099-4259-45c6-8802-8b5ec410d596", "auth_id": "tempest-cephx-id-550070678", "format": "json"}]: dispatch Feb 23 05:05:33 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume evict, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:33 localhost ceph-mgr[285904]: 
[volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-550070678, client_metadata.root=/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596/c2facf6e-037e-4fc9-867d-c6a4c723aa9e Feb 23 05:05:33 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:05:33 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-550070678, format:json, prefix:fs subvolume evict, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:33 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 05:05:33 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:05:33 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-550070678", "format": "json"} : dispatch Feb 23 05:05:33 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:05:33 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"} : dispatch Feb 23 05:05:33 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-550070678"}]': finished Feb 23 05:05:33 localhost nova_compute[280321]: 2026-02-23 10:05:33.700 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:33 localhost nova_compute[280321]: 2026-02-23 10:05:33.730 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:33 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "r", "format": "json"}]: dispatch Feb 23 05:05:33 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:05:33 localhost nova_compute[280321]: 2026-02-23 10:05:33.804 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:33 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 23 05:05:33 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:05:33 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID alice bob with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:05:33 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:05:33 
localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:33 localhost nova_compute[280321]: 2026-02-23 10:05:33.887 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:05:33 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:05:34 localhost nova_compute[280321]: 2026-02-23 10:05:34.562 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:34 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:05:34 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:34 localhost 
ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:34 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow r pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:05:34 localhost nova_compute[280321]: 2026-02-23 10:05:34.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:05:34 localhost nova_compute[280321]: 2026-02-23 10:05:34.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:05:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:05:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:05:35 localhost nova_compute[280321]: 2026-02-23 10:05:35.147 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 05:05:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:05:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:05:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:05:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v597: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 110 KiB/s wr, 10 op/s Feb 23 05:05:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:05:35 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 05:05:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 05:05:35 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:35.957 263679 INFO neutron.agent.linux.ip_lib [None req-83bc8499-7cc3-4efa-a49c-036efff34716 - - - - - -] Device tapce8a5fd8-91 cannot be used as it has no MAC address#033[00m Feb 23 05:05:35 localhost nova_compute[280321]: 2026-02-23 10:05:35.978 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:35 localhost kernel: device tapce8a5fd8-91 entered promiscuous mode Feb 23 05:05:35 localhost NetworkManager[5987]: [1771841135.9876] manager: (tapce8a5fd8-91): new Generic device (/org/freedesktop/NetworkManager/Devices/73) Feb 23 05:05:35 localhost ovn_controller[155966]: 2026-02-23T10:05:35Z|00431|binding|INFO|Claiming lport ce8a5fd8-91a6-446f-b728-0b0ece0ff3fc for this chassis. 
Feb 23 05:05:35 localhost ovn_controller[155966]: 2026-02-23T10:05:35Z|00432|binding|INFO|ce8a5fd8-91a6-446f-b728-0b0ece0ff3fc: Claiming unknown Feb 23 05:05:35 localhost nova_compute[280321]: 2026-02-23 10:05:35.989 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:35 localhost systemd-udevd[326875]: Network interface NamePolicy= disabled on kernel command line. Feb 23 05:05:36 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:35.998 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/16', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-f00ba20c-5421-4377-9642-2cef4a1c6829', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00ba20c-5421-4377-9642-2cef4a1c6829', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f79d32a621b2444d9fac9131dda85cfa', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e4c79ab-d1c4-419c-bad8-24e1797e9a57, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ce8a5fd8-91a6-446f-b728-0b0ece0ff3fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:05:36 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:36.000 161842 INFO 
neutron.agent.ovn.metadata.agent [-] Port ce8a5fd8-91a6-446f-b728-0b0ece0ff3fc in datapath f00ba20c-5421-4377-9642-2cef4a1c6829 bound to our chassis#033[00m Feb 23 05:05:36 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:36.003 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Port 9e040a0b-af38-401f-b3e4-ed087f677f2c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 05:05:36 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:36.003 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f00ba20c-5421-4377-9642-2cef4a1c6829, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 05:05:36 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:36.004 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[94623d59-754b-4eb8-b12e-bb746c14825a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:05:36 localhost journal[229268]: ethtool ioctl error on tapce8a5fd8-91: No such device Feb 23 05:05:36 localhost journal[229268]: ethtool ioctl error on tapce8a5fd8-91: No such device Feb 23 05:05:36 localhost ovn_controller[155966]: 2026-02-23T10:05:36Z|00433|binding|INFO|Setting lport ce8a5fd8-91a6-446f-b728-0b0ece0ff3fc ovn-installed in OVS Feb 23 05:05:36 localhost ovn_controller[155966]: 2026-02-23T10:05:36Z|00434|binding|INFO|Setting lport ce8a5fd8-91a6-446f-b728-0b0ece0ff3fc up in Southbound Feb 23 05:05:36 localhost nova_compute[280321]: 2026-02-23 10:05:36.028 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:36 localhost journal[229268]: ethtool ioctl error on tapce8a5fd8-91: No such device Feb 23 05:05:36 localhost journal[229268]: ethtool ioctl error on 
tapce8a5fd8-91: No such device Feb 23 05:05:36 localhost journal[229268]: ethtool ioctl error on tapce8a5fd8-91: No such device Feb 23 05:05:36 localhost journal[229268]: ethtool ioctl error on tapce8a5fd8-91: No such device Feb 23 05:05:36 localhost journal[229268]: ethtool ioctl error on tapce8a5fd8-91: No such device Feb 23 05:05:36 localhost journal[229268]: ethtool ioctl error on tapce8a5fd8-91: No such device Feb 23 05:05:36 localhost nova_compute[280321]: 2026-02-23 10:05:36.059 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:36 localhost nova_compute[280321]: 2026-02-23 10:05:36.087 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:36 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:05:36 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e246 e246: 6 total, 6 up, 6 in Feb 23 05:05:36 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "812d8099-4259-45c6-8802-8b5ec410d596", "format": "json"}]: dispatch Feb 23 05:05:36 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:812d8099-4259-45c6-8802-8b5ec410d596, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:05:36 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:812d8099-4259-45c6-8802-8b5ec410d596, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:05:36 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:05:36.819+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'812d8099-4259-45c6-8802-8b5ec410d596' of type subvolume Feb 23 05:05:36 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '812d8099-4259-45c6-8802-8b5ec410d596' of type subvolume Feb 23 05:05:36 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "812d8099-4259-45c6-8802-8b5ec410d596", "force": true, "format": "json"}]: dispatch Feb 23 05:05:36 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:36 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/812d8099-4259-45c6-8802-8b5ec410d596'' moved to trashcan Feb 23 05:05:36 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:05:36 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:812d8099-4259-45c6-8802-8b5ec410d596, vol_name:cephfs) < "" Feb 23 05:05:36 localhost podman[326946]: Feb 23 05:05:36 localhost podman[326946]: 2026-02-23 10:05:36.928420717 +0000 UTC m=+0.086917239 container create bc36c6fc3925e437de03124f4765fb2bf4ab63d7ed07854c7976a6ce165106c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f00ba20c-5421-4377-9642-2cef4a1c6829, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) 
Feb 23 05:05:36 localhost systemd[1]: Started libpod-conmon-bc36c6fc3925e437de03124f4765fb2bf4ab63d7ed07854c7976a6ce165106c1.scope. Feb 23 05:05:36 localhost podman[326946]: 2026-02-23 10:05:36.886756292 +0000 UTC m=+0.045252814 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:05:37 localhost systemd[1]: Started libcrun container. Feb 23 05:05:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5884e7e38e78e7b93efebb5fe0f19d428edfef8e25a7d6d68f853cc694f62c4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:05:37 localhost podman[326946]: 2026-02-23 10:05:37.018508562 +0000 UTC m=+0.177005084 container init bc36c6fc3925e437de03124f4765fb2bf4ab63d7ed07854c7976a6ce165106c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f00ba20c-5421-4377-9642-2cef4a1c6829, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, io.buildah.version=1.43.0) Feb 23 05:05:37 localhost podman[326946]: 2026-02-23 10:05:37.030083886 +0000 UTC m=+0.188580398 container start bc36c6fc3925e437de03124f4765fb2bf4ab63d7ed07854c7976a6ce165106c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f00ba20c-5421-4377-9642-2cef4a1c6829, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:05:37 localhost dnsmasq[326964]: started, 
version 2.85 cachesize 150 Feb 23 05:05:37 localhost dnsmasq[326964]: DNS service limited to local subnets Feb 23 05:05:37 localhost dnsmasq[326964]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:05:37 localhost dnsmasq[326964]: warning: no upstream servers configured Feb 23 05:05:37 localhost dnsmasq-dhcp[326964]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 05:05:37 localhost dnsmasq[326964]: read /var/lib/neutron/dhcp/f00ba20c-5421-4377-9642-2cef4a1c6829/addn_hosts - 0 addresses Feb 23 05:05:37 localhost dnsmasq-dhcp[326964]: read /var/lib/neutron/dhcp/f00ba20c-5421-4377-9642-2cef4a1c6829/host Feb 23 05:05:37 localhost dnsmasq-dhcp[326964]: read /var/lib/neutron/dhcp/f00ba20c-5421-4377-9642-2cef4a1c6829/opts Feb 23 05:05:37 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 23 05:05:37 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:37 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 23 05:05:37 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:05:37 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 23 05:05:37 localhost 
ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:05:37 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:37.159 263679 INFO neutron.agent.dhcp.agent [None req-ffe13741-57f5-4ff4-acc0-b0631f3967c4 - - - - - -] DHCP configuration for ports {'f9601105-4bad-4c77-a858-c023f9f42302'} is completed#033[00m Feb 23 05:05:37 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:37 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 23 05:05:37 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:37 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:05:37 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:05:37 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v599: 177 pgs: 177 active+clean; 237 MiB data, 1.3 GiB used, 41 GiB / 42 
GiB avail; 8.4 KiB/s rd, 2.2 MiB/s wr, 24 op/s Feb 23 05:05:37 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e247 e247: 6 total, 6 up, 6 in Feb 23 05:05:37 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 23 05:05:37 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:05:37 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 23 05:05:37 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 23 05:05:37 localhost nova_compute[280321]: 2026-02-23 10:05:37.893 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:05:38 localhost ovn_controller[155966]: 2026-02-23T10:05:38Z|00435|binding|INFO|Removing iface tapce8a5fd8-91 ovn-installed in OVS Feb 23 05:05:38 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:38.726 161842 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 9e040a0b-af38-401f-b3e4-ed087f677f2c with type ""#033[00m Feb 23 05:05:38 localhost ovn_controller[155966]: 2026-02-23T10:05:38Z|00436|binding|INFO|Removing lport ce8a5fd8-91a6-446f-b728-0b0ece0ff3fc ovn-installed in OVS Feb 23 05:05:38 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:38.727 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to 
row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/16', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-f00ba20c-5421-4377-9642-2cef4a1c6829', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00ba20c-5421-4377-9642-2cef4a1c6829', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f79d32a621b2444d9fac9131dda85cfa', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e4c79ab-d1c4-419c-bad8-24e1797e9a57, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ce8a5fd8-91a6-446f-b728-0b0ece0ff3fc) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:05:38 localhost nova_compute[280321]: 2026-02-23 10:05:38.728 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:38 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:38.731 161842 INFO neutron.agent.ovn.metadata.agent [-] Port ce8a5fd8-91a6-446f-b728-0b0ece0ff3fc in datapath f00ba20c-5421-4377-9642-2cef4a1c6829 unbound from our chassis#033[00m Feb 23 05:05:38 localhost nova_compute[280321]: 2026-02-23 10:05:38.736 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:38 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:38.738 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No 
valid VIF ports were found for network f00ba20c-5421-4377-9642-2cef4a1c6829, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 05:05:38 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:38.740 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[8b229003-6dd7-4d1c-a4b6-2c7eeceb7a2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:05:38 localhost dnsmasq[326964]: exiting on receipt of SIGTERM Feb 23 05:05:38 localhost podman[326983]: 2026-02-23 10:05:38.764037048 +0000 UTC m=+0.063822453 container kill bc36c6fc3925e437de03124f4765fb2bf4ab63d7ed07854c7976a6ce165106c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f00ba20c-5421-4377-9642-2cef4a1c6829, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb) Feb 23 05:05:38 localhost systemd[1]: libpod-bc36c6fc3925e437de03124f4765fb2bf4ab63d7ed07854c7976a6ce165106c1.scope: Deactivated successfully. 
Feb 23 05:05:38 localhost podman[326999]: 2026-02-23 10:05:38.83014573 +0000 UTC m=+0.046922486 container died bc36c6fc3925e437de03124f4765fb2bf4ab63d7ed07854c7976a6ce165106c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f00ba20c-5421-4377-9642-2cef4a1c6829, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:05:38 localhost systemd[1]: tmp-crun.i2Bycu.mount: Deactivated successfully. Feb 23 05:05:38 localhost podman[326999]: 2026-02-23 10:05:38.878359375 +0000 UTC m=+0.095136051 container remove bc36c6fc3925e437de03124f4765fb2bf4ab63d7ed07854c7976a6ce165106c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f00ba20c-5421-4377-9642-2cef4a1c6829, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 05:05:38 localhost systemd[1]: libpod-conmon-bc36c6fc3925e437de03124f4765fb2bf4ab63d7ed07854c7976a6ce165106c1.scope: Deactivated successfully. 
Feb 23 05:05:38 localhost nova_compute[280321]: 2026-02-23 10:05:38.918 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:38 localhost kernel: device tapce8a5fd8-91 left promiscuous mode Feb 23 05:05:38 localhost systemd[1]: var-lib-containers-storage-overlay-d5884e7e38e78e7b93efebb5fe0f19d428edfef8e25a7d6d68f853cc694f62c4-merged.mount: Deactivated successfully. Feb 23 05:05:38 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc36c6fc3925e437de03124f4765fb2bf4ab63d7ed07854c7976a6ce165106c1-userdata-shm.mount: Deactivated successfully. Feb 23 05:05:38 localhost nova_compute[280321]: 2026-02-23 10:05:38.935 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:38 localhost systemd[1]: run-netns-qdhcp\x2df00ba20c\x2d5421\x2d4377\x2d9642\x2d2cef4a1c6829.mount: Deactivated successfully. 
Feb 23 05:05:38 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:38.951 263679 INFO neutron.agent.dhcp.agent [None req-443fb6d8-c947-4fa5-a7f4-552cdd201c01 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:05:38 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:38.968 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:05:39 localhost nova_compute[280321]: 2026-02-23 10:05:39.152 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v601: 177 pgs: 177 active+clean; 237 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 2.6 MiB/s wr, 21 op/s Feb 23 05:05:40 localhost nova_compute[280321]: 2026-02-23 10:05:40.186 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:40 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:05:40 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:05:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Feb 23 05:05:40 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 
172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 23 05:05:40 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: Creating meta for ID bob with tenant b8a78bca43aa415e9b740fe00d08afee Feb 23 05:05:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} v 0) Feb 23 05:05:40 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:40 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:05:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:05:40 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 23 05:05:40 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' 
entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:40 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"} : dispatch Feb 23 05:05:40 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02", "mon", "allow r"], "format": "json"}]': finished Feb 23 05:05:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v602: 177 pgs: 177 active+clean; 230 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 2.7 MiB/s wr, 63 op/s Feb 23 05:05:42 localhost podman[241086]: time="2026-02-23T10:05:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 05:05:42 localhost podman[241086]: @ - - [23/Feb/2026:10:05:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1" Feb 23 05:05:42 localhost podman[241086]: @ - - [23/Feb/2026:10:05:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18303 "" "Go-http-client/1.1" Feb 23 05:05:42 
localhost ovn_controller[155966]: 2026-02-23T10:05:42Z|00437|ovn_bfd|INFO|Disabled BFD on interface ovn-5b0126-0 Feb 23 05:05:42 localhost ovn_controller[155966]: 2026-02-23T10:05:42Z|00438|ovn_bfd|INFO|Disabled BFD on interface ovn-585d62-0 Feb 23 05:05:42 localhost ovn_controller[155966]: 2026-02-23T10:05:42Z|00439|ovn_bfd|INFO|Disabled BFD on interface ovn-b9c72d-0 Feb 23 05:05:42 localhost nova_compute[280321]: 2026-02-23 10:05:42.914 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:42 localhost nova_compute[280321]: 2026-02-23 10:05:42.916 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:42 localhost nova_compute[280321]: 2026-02-23 10:05:42.926 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:43 localhost dnsmasq[326659]: read /var/lib/neutron/dhcp/b025481d-e987-40db-8a53-4c234c9213cb/addn_hosts - 0 addresses Feb 23 05:05:43 localhost podman[327042]: 2026-02-23 10:05:43.052826107 +0000 UTC m=+0.046242836 container kill 8555c0cc25d942db22795f03b6ecb2247ea1d080ff6264214a142650868ca6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b025481d-e987-40db-8a53-4c234c9213cb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0) Feb 23 05:05:43 localhost dnsmasq-dhcp[326659]: read /var/lib/neutron/dhcp/b025481d-e987-40db-8a53-4c234c9213cb/host Feb 23 05:05:43 localhost dnsmasq-dhcp[326659]: read 
/var/lib/neutron/dhcp/b025481d-e987-40db-8a53-4c234c9213cb/opts Feb 23 05:05:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 05:05:43 localhost podman[327057]: 2026-02-23 10:05:43.166850194 +0000 UTC m=+0.087530529 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 05:05:43 localhost podman[327057]: 2026-02-23 10:05:43.17785571 +0000 UTC m=+0.098536085 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 05:05:43 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 05:05:43 localhost ovn_controller[155966]: 2026-02-23T10:05:43Z|00440|binding|INFO|Releasing lport 67b9e7ae-b98f-4e79-9933-6256f2e4ff3c from this chassis (sb_readonly=0) Feb 23 05:05:43 localhost ovn_controller[155966]: 2026-02-23T10:05:43Z|00441|binding|INFO|Setting lport 67b9e7ae-b98f-4e79-9933-6256f2e4ff3c down in Southbound Feb 23 05:05:43 localhost kernel: device tap67b9e7ae-b9 left promiscuous mode Feb 23 05:05:43 localhost nova_compute[280321]: 2026-02-23 10:05:43.222 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:43 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:43.240 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-b025481d-e987-40db-8a53-4c234c9213cb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b025481d-e987-40db-8a53-4c234c9213cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f79d32a621b2444d9fac9131dda85cfa', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd69dfcd-7e68-49c9-ac78-a5dbefd27d2a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=67b9e7ae-b98f-4e79-9933-6256f2e4ff3c) 
old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:05:43 localhost nova_compute[280321]: 2026-02-23 10:05:43.241 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:43 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:43.243 161842 INFO neutron.agent.ovn.metadata.agent [-] Port 67b9e7ae-b98f-4e79-9933-6256f2e4ff3c in datapath b025481d-e987-40db-8a53-4c234c9213cb unbound from our chassis#033[00m Feb 23 05:05:43 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:43.245 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b025481d-e987-40db-8a53-4c234c9213cb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 05:05:43 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:43.246 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[3476d1ec-902e-4ce6-ae0f-d2ec54aa646b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:05:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v603: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 2.7 MiB/s wr, 67 op/s Feb 23 05:05:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 05:05:44 localhost podman[327087]: 2026-02-23 10:05:44.007832765 +0000 UTC m=+0.079898195 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 23 05:05:44 localhost podman[327087]: 2026-02-23 10:05:44.020376468 +0000 UTC m=+0.092441888 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.7, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, 
architecture=x86_64, vcs-type=git, name=ubi9/ubi-minimal, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 23 05:05:44 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. Feb 23 05:05:44 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0f0c4f00-1527-445a-bf94-f839c0a6f476", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:05:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0f0c4f00-1527-445a-bf94-f839c0a6f476, vol_name:cephfs) < "" Feb 23 05:05:44 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0f0c4f00-1527-445a-bf94-f839c0a6f476/.meta.tmp' Feb 23 05:05:44 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0f0c4f00-1527-445a-bf94-f839c0a6f476/.meta.tmp' to config b'/volumes/_nogroup/0f0c4f00-1527-445a-bf94-f839c0a6f476/.meta' Feb 23 05:05:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0f0c4f00-1527-445a-bf94-f839c0a6f476, vol_name:cephfs) < "" Feb 23 05:05:44 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0f0c4f00-1527-445a-bf94-f839c0a6f476", "format": "json"}]: dispatch Feb 23 05:05:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0f0c4f00-1527-445a-bf94-f839c0a6f476, vol_name:cephfs) < "" Feb 23 05:05:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0f0c4f00-1527-445a-bf94-f839c0a6f476, vol_name:cephfs) < "" Feb 23 05:05:44 localhost dnsmasq[326659]: exiting on receipt of SIGTERM Feb 23 05:05:44 localhost podman[327122]: 2026-02-23 10:05:44.649750477 +0000 UTC m=+0.066874836 container kill 8555c0cc25d942db22795f03b6ecb2247ea1d080ff6264214a142650868ca6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b025481d-e987-40db-8a53-4c234c9213cb, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0) Feb 23 05:05:44 localhost systemd[1]: libpod-8555c0cc25d942db22795f03b6ecb2247ea1d080ff6264214a142650868ca6e2.scope: Deactivated successfully. Feb 23 05:05:44 localhost podman[327135]: 2026-02-23 10:05:44.720245013 +0000 UTC m=+0.055162968 container died 8555c0cc25d942db22795f03b6ecb2247ea1d080ff6264214a142650868ca6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b025481d-e987-40db-8a53-4c234c9213cb, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Feb 23 05:05:44 localhost systemd[1]: tmp-crun.q7JSt1.mount: Deactivated successfully. 
Feb 23 05:05:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8555c0cc25d942db22795f03b6ecb2247ea1d080ff6264214a142650868ca6e2-userdata-shm.mount: Deactivated successfully. Feb 23 05:05:44 localhost podman[327135]: 2026-02-23 10:05:44.752669215 +0000 UTC m=+0.087587160 container cleanup 8555c0cc25d942db22795f03b6ecb2247ea1d080ff6264214a142650868ca6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b025481d-e987-40db-8a53-4c234c9213cb, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 05:05:44 localhost systemd[1]: libpod-conmon-8555c0cc25d942db22795f03b6ecb2247ea1d080ff6264214a142650868ca6e2.scope: Deactivated successfully. 
Feb 23 05:05:44 localhost podman[327137]: 2026-02-23 10:05:44.795186225 +0000 UTC m=+0.121915270 container remove 8555c0cc25d942db22795f03b6ecb2247ea1d080ff6264214a142650868ca6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b025481d-e987-40db-8a53-4c234c9213cb, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216) Feb 23 05:05:44 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:44.827 263679 INFO neutron.agent.dhcp.agent [None req-f44089c3-eb2c-44fd-a665-7d4e51ea83ec - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:05:44 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:05:44.828 263679 INFO neutron.agent.dhcp.agent [None req-f44089c3-eb2c-44fd-a665-7d4e51ea83ec - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:05:44 localhost nova_compute[280321]: 2026-02-23 10:05:44.962 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:45 localhost nova_compute[280321]: 2026-02-23 10:05:45.190 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v604: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 1.5 MiB/s wr, 44 op/s Feb 23 05:05:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:05:45 localhost systemd[1]: 
var-lib-containers-storage-overlay-6dc9e7a69627b774015c2087c0a1a52e22a298852d9dfcf7a541c1cc25fe4513-merged.mount: Deactivated successfully. Feb 23 05:05:45 localhost systemd[1]: run-netns-qdhcp\x2db025481d\x2de987\x2d40db\x2d8a53\x2d4c234c9213cb.mount: Deactivated successfully. Feb 23 05:05:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v605: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 78 KiB/s wr, 38 op/s Feb 23 05:05:47 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0f0c4f00-1527-445a-bf94-f839c0a6f476", "auth_id": "bob", "tenant_id": "b8a78bca43aa415e9b740fe00d08afee", "access_level": "rw", "format": "json"}]: dispatch Feb 23 05:05:47 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:0f0c4f00-1527-445a-bf94-f839c0a6f476, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:05:47 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Feb 23 05:05:47 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 23 05:05:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 05:05:47 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae,allow rw path=/volumes/_nogroup/0f0c4f00-1527-445a-bf94-f839c0a6f476/15dadb0f-d665-4d06-aa15-aab1aaac7a26", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02,allow rw pool=manila_data namespace=fsvolumens_0f0c4f00-1527-445a-bf94-f839c0a6f476"]} v 0) Feb 23 05:05:47 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae,allow rw path=/volumes/_nogroup/0f0c4f00-1527-445a-bf94-f839c0a6f476/15dadb0f-d665-4d06-aa15-aab1aaac7a26", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02,allow rw pool=manila_data namespace=fsvolumens_0f0c4f00-1527-445a-bf94-f839c0a6f476"]} : dispatch Feb 23 05:05:47 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Feb 23 05:05:47 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 23 05:05:47 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e248 e248: 6 total, 6 up, 6 in Feb 23 05:05:48 localhost podman[327163]: 2026-02-23 10:05:48.025853892 +0000 UTC m=+0.095962866 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, 
name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260216) Feb 23 05:05:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:0f0c4f00-1527-445a-bf94-f839c0a6f476, tenant_id:b8a78bca43aa415e9b740fe00d08afee, vol_name:cephfs) < "" Feb 23 05:05:48 localhost podman[327163]: 2026-02-23 10:05:48.069049533 +0000 UTC m=+0.139158527 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, 
managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.43.0) Feb 23 05:05:48 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 05:05:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:48.320 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:05:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:48.321 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:05:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:48.321 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:05:48 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 23 05:05:48 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae,allow rw path=/volumes/_nogroup/0f0c4f00-1527-445a-bf94-f839c0a6f476/15dadb0f-d665-4d06-aa15-aab1aaac7a26", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02,allow rw pool=manila_data namespace=fsvolumens_0f0c4f00-1527-445a-bf94-f839c0a6f476"]} : dispatch Feb 23 05:05:48 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow 
rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae,allow rw path=/volumes/_nogroup/0f0c4f00-1527-445a-bf94-f839c0a6f476/15dadb0f-d665-4d06-aa15-aab1aaac7a26", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02,allow rw pool=manila_data namespace=fsvolumens_0f0c4f00-1527-445a-bf94-f839c0a6f476"]} : dispatch Feb 23 05:05:48 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae,allow rw path=/volumes/_nogroup/0f0c4f00-1527-445a-bf94-f839c0a6f476/15dadb0f-d665-4d06-aa15-aab1aaac7a26", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02,allow rw pool=manila_data namespace=fsvolumens_0f0c4f00-1527-445a-bf94-f839c0a6f476"]}]': finished Feb 23 05:05:48 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 23 05:05:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v607: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 78 KiB/s wr, 38 op/s Feb 23 05:05:50 localhost nova_compute[280321]: 2026-02-23 10:05:50.191 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:05:50 localhost nova_compute[280321]: 2026-02-23 10:05:50.193 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:05:50 localhost nova_compute[280321]: 2026-02-23 10:05:50.194 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:05:50 localhost nova_compute[280321]: 2026-02-23 10:05:50.194 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:05:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:05:50 localhost nova_compute[280321]: 2026-02-23 10:05:50.613 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:50 localhost nova_compute[280321]: 2026-02-23 10:05:50.614 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:05:50 localhost nova_compute[280321]: 2026-02-23 10:05:50.615 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:51 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0f0c4f00-1527-445a-bf94-f839c0a6f476", "auth_id": "bob", "format": "json"}]: dispatch Feb 23 05:05:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:0f0c4f00-1527-445a-bf94-f839c0a6f476, vol_name:cephfs) < "" Feb 23 05:05:51 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Feb 23 05:05:51 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": 
"auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 23 05:05:51 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02"]} v 0) Feb 23 05:05:51 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02"]} : dispatch Feb 23 05:05:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:0f0c4f00-1527-445a-bf94-f839c0a6f476, vol_name:cephfs) < "" Feb 23 05:05:51 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0f0c4f00-1527-445a-bf94-f839c0a6f476", "auth_id": "bob", "format": "json"}]: dispatch Feb 23 05:05:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:0f0c4f00-1527-445a-bf94-f839c0a6f476, vol_name:cephfs) < "" Feb 23 05:05:51 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/0f0c4f00-1527-445a-bf94-f839c0a6f476/15dadb0f-d665-4d06-aa15-aab1aaac7a26 Feb 23 05:05:51 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 
23 05:05:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:0f0c4f00-1527-445a-bf94-f839c0a6f476, vol_name:cephfs) < "" Feb 23 05:05:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v608: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 58 KiB/s wr, 6 op/s Feb 23 05:05:51 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 23 05:05:51 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02"]} : dispatch Feb 23 05:05:51 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02"]} : dispatch Feb 23 05:05:51 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae", "osd", "allow rw pool=manila_data namespace=fsvolumens_a87f3747-06f6-4188-82ee-060b8ce9fc02"]}]': finished Feb 23 05:05:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v609: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 57 KiB/s wr, 4 op/s Feb 23 
05:05:54 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "bob", "format": "json"}]: dispatch Feb 23 05:05:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:54 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Feb 23 05:05:54 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 23 05:05:54 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0) Feb 23 05:05:54 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Feb 23 05:05:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:54 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "auth_id": "bob", "format": "json"}]: dispatch Feb 23 05:05:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, 
sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:54 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02/83a6be9d-9350-4487-b928-33718d580aae Feb 23 05:05:54 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 23 05:05:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:05:55 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 23 05:05:55 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Feb 23 05:05:55 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Feb 23 05:05:55 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished Feb 23 05:05:55 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:55.077 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=24) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:05:55 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:55.078 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 05:05:55 localhost nova_compute[280321]: 2026-02-23 10:05:55.117 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v610: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 57 KiB/s wr, 4 op/s Feb 23 05:05:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:05:55 localhost nova_compute[280321]: 2026-02-23 10:05:55.617 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:05:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 05:05:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 05:05:56 localhost systemd[1]: tmp-crun.JBjtQm.mount: Deactivated successfully. 
Feb 23 05:05:56 localhost podman[327192]: 2026-02-23 10:05:56.016245741 +0000 UTC m=+0.091087387 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS) Feb 23 05:05:56 localhost podman[327192]: 2026-02-23 10:05:56.026663269 +0000 UTC m=+0.101504895 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
org.label-schema.license=GPLv2) Feb 23 05:05:56 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 05:05:56 localhost ovn_metadata_agent[161837]: 2026-02-23 10:05:56.080 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 23 05:05:56 localhost systemd[1]: tmp-crun.SvDSKL.mount: Deactivated successfully. Feb 23 05:05:56 localhost podman[327191]: 2026-02-23 10:05:56.11726834 +0000 UTC m=+0.190274060 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260216, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0) Feb 23 05:05:56 localhost podman[327191]: 2026-02-23 10:05:56.127768902 +0000 UTC m=+0.200774672 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 23 05:05:56 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 05:05:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v611: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 76 KiB/s wr, 5 op/s Feb 23 05:05:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0f0c4f00-1527-445a-bf94-f839c0a6f476", "format": "json"}]: dispatch Feb 23 05:05:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0f0c4f00-1527-445a-bf94-f839c0a6f476, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:05:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0f0c4f00-1527-445a-bf94-f839c0a6f476, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:05:58 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:05:58.228+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0f0c4f00-1527-445a-bf94-f839c0a6f476' of type subvolume Feb 23 05:05:58 localhost ceph-mgr[285904]: 
mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0f0c4f00-1527-445a-bf94-f839c0a6f476' of type subvolume Feb 23 05:05:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0f0c4f00-1527-445a-bf94-f839c0a6f476", "force": true, "format": "json"}]: dispatch Feb 23 05:05:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0f0c4f00-1527-445a-bf94-f839c0a6f476, vol_name:cephfs) < "" Feb 23 05:05:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0f0c4f00-1527-445a-bf94-f839c0a6f476'' moved to trashcan Feb 23 05:05:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:05:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0f0c4f00-1527-445a-bf94-f839c0a6f476, vol_name:cephfs) < "" Feb 23 05:05:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 67 KiB/s wr, 5 op/s Feb 23 05:05:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 05:06:00 localhost podman[327229]: 2026-02-23 10:06:00.002182937 +0000 UTC m=+0.076660785 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 05:06:00 localhost podman[327229]: 2026-02-23 10:06:00.014829104 +0000 UTC m=+0.089306952 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 05:06:00 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 05:06:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:06:00 localhost nova_compute[280321]: 2026-02-23 10:06:00.619 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:00 localhost nova_compute[280321]: 2026-02-23 10:06:00.621 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:00 localhost nova_compute[280321]: 2026-02-23 10:06:00.621 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:06:00 localhost nova_compute[280321]: 2026-02-23 10:06:00.621 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:00 localhost nova_compute[280321]: 2026-02-23 10:06:00.652 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:06:00 localhost nova_compute[280321]: 2026-02-23 10:06:00.654 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v613: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB 
avail; 255 B/s rd, 78 KiB/s wr, 6 op/s Feb 23 05:06:01 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "format": "json"}]: dispatch Feb 23 05:06:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:06:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:06:01 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:06:01.444+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a87f3747-06f6-4188-82ee-060b8ce9fc02' of type subvolume Feb 23 05:06:01 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a87f3747-06f6-4188-82ee-060b8ce9fc02' of type subvolume Feb 23 05:06:01 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a87f3747-06f6-4188-82ee-060b8ce9fc02", "force": true, "format": "json"}]: dispatch Feb 23 05:06:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:06:01 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a87f3747-06f6-4188-82ee-060b8ce9fc02'' moved to trashcan Feb 23 05:06:01 localhost ceph-mgr[285904]: [volumes INFO 
volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:06:01 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a87f3747-06f6-4188-82ee-060b8ce9fc02, vol_name:cephfs) < "" Feb 23 05:06:01 localhost openstack_network_exporter[243519]: ERROR 10:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 05:06:01 localhost openstack_network_exporter[243519]: Feb 23 05:06:01 localhost openstack_network_exporter[243519]: ERROR 10:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 05:06:01 localhost openstack_network_exporter[243519]: Feb 23 05:06:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v614: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 49 KiB/s wr, 4 op/s Feb 23 05:06:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_10:06:05 Feb 23 05:06:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 05:06:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap Feb 23 05:06:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['.mgr', 'volumes', 'manila_metadata', 'manila_data', 'vms', 'images', 'backups'] Feb 23 05:06:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes Feb 23 05:06:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:06:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:06:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:06:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:06:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 05:06:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:06:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v615: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 49 KiB/s wr, 4 op/s Feb 23 05:06:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 05:06:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:06:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 23 05:06:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:06:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32) Feb 23 05:06:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:06:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014869268216080402 of space, bias 1.0, pg target 0.2968897220477387 quantized to 32 (current 32) Feb 23 05:06:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:06:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 23 05:06:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:06:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 23 05:06:05 localhost 
ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:06:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00010850694444444444 quantized to 32 (current 32) Feb 23 05:06:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:06:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.001889874720826354 of space, bias 4.0, pg target 1.504340277777778 quantized to 16 (current 16) Feb 23 05:06:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 05:06:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 05:06:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 05:06:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 05:06:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 05:06:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 05:06:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 05:06:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 05:06:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 05:06:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 05:06:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:06:05 localhost nova_compute[280321]: 2026-02-23 10:06:05.655 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms 
timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:05 localhost nova_compute[280321]: 2026-02-23 10:06:05.657 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:05 localhost nova_compute[280321]: 2026-02-23 10:06:05.657 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:06:05 localhost nova_compute[280321]: 2026-02-23 10:06:05.657 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:05 localhost nova_compute[280321]: 2026-02-23 10:06:05.699 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:06:05 localhost nova_compute[280321]: 2026-02-23 10:06:05.700 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:07 localhost sshd[327254]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:06:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v616: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 73 KiB/s wr, 5 op/s Feb 23 05:06:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v617: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 38 KiB/s wr, 3 op/s Feb 23 05:06:09 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "068d76ef-57bb-47e8-bc0e-5cafb295f112", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": 
"json"}]: dispatch Feb 23 05:06:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:068d76ef-57bb-47e8-bc0e-5cafb295f112, vol_name:cephfs) < "" Feb 23 05:06:09 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/068d76ef-57bb-47e8-bc0e-5cafb295f112/.meta.tmp' Feb 23 05:06:09 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/068d76ef-57bb-47e8-bc0e-5cafb295f112/.meta.tmp' to config b'/volumes/_nogroup/068d76ef-57bb-47e8-bc0e-5cafb295f112/.meta' Feb 23 05:06:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:068d76ef-57bb-47e8-bc0e-5cafb295f112, vol_name:cephfs) < "" Feb 23 05:06:09 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "068d76ef-57bb-47e8-bc0e-5cafb295f112", "format": "json"}]: dispatch Feb 23 05:06:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:068d76ef-57bb-47e8-bc0e-5cafb295f112, vol_name:cephfs) < "" Feb 23 05:06:09 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:068d76ef-57bb-47e8-bc0e-5cafb295f112, vol_name:cephfs) < "" Feb 23 05:06:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:06:10 localhost nova_compute[280321]: 2026-02-23 10:06:10.701 280325 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:10 localhost nova_compute[280321]: 2026-02-23 10:06:10.703 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:10 localhost nova_compute[280321]: 2026-02-23 10:06:10.704 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:06:10 localhost nova_compute[280321]: 2026-02-23 10:06:10.704 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:10 localhost nova_compute[280321]: 2026-02-23 10:06:10.745 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:06:10 localhost nova_compute[280321]: 2026-02-23 10:06:10.745 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v618: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 47 KiB/s wr, 3 op/s Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0. 
Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:06:11.479269) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58 Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841171479327, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 867, "num_deletes": 252, "total_data_size": 884508, "memory_usage": 900328, "flush_reason": "Manual Compaction"} Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841171486671, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 576994, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34592, "largest_seqno": 35454, "table_properties": {"data_size": 573110, "index_size": 1611, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10096, "raw_average_key_size": 20, "raw_value_size": 564774, "raw_average_value_size": 1166, "num_data_blocks": 71, "num_entries": 484, "num_filter_entries": 484, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771841133, "oldest_key_time": 1771841133, "file_creation_time": 1771841171, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}} Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 7448 microseconds, and 2688 cpu microseconds. Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:06:11.486715) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 576994 bytes OK Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:06:11.486738) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:06:11.489903) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:06:11.489922) EVENT_LOG_v1 {"time_micros": 1771841171489916, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:06:11.489937) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 879891, prev total WAL file size 
879891, number of live WAL files 2. Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:06:11.490544) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132383031' seq:72057594037927935, type:22 .. '7061786F73003133303533' seq:0, type:0; will stop at (end) Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(563KB)], [57(18MB)] Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841171490617, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 20039574, "oldest_snapshot_seqno": -1} Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 14217 keys, 18348091 bytes, temperature: kUnknown Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841171573890, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 18348091, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18265451, "index_size": 45995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35589, "raw_key_size": 383010, "raw_average_key_size": 26, "raw_value_size": 18022216, 
"raw_average_value_size": 1267, "num_data_blocks": 1708, "num_entries": 14217, "num_filter_entries": 14217, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771841171, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}} Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:06:11.574389) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 18348091 bytes
Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:06:11.576265) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 239.9 rd, 219.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 18.6 +0.0 blob) out(17.5 +0.0 blob), read-write-amplify(66.5) write-amplify(31.8) OK, records in: 14744, records dropped: 527 output_compression: NoCompression
Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:06:11.576294) EVENT_LOG_v1 {"time_micros": 1771841171576282, "job": 34, "event": "compaction_finished", "compaction_time_micros": 83539, "compaction_time_cpu_micros": 51919, "output_level": 6, "num_output_files": 1, "total_output_size": 18348091, "num_input_records": 14744, "num_output_records": 14217, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841171577169, "job": 34, "event": "table_file_deletion", "file_number": 59}
Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841171580559, "job": 34, "event": "table_file_deletion", "file_number": 57}
Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:06:11.490409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:06:11.580750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:06:11.580757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:06:11.580759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:06:11.580761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:06:11 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:06:11.580764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:06:12 localhost podman[241086]: time="2026-02-23T10:06:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 23 05:06:12 localhost podman[241086]: @ - - [23/Feb/2026:10:06:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1"
Feb 23 05:06:12 localhost podman[241086]: @ - - [23/Feb/2026:10:06:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17826 "" "Go-http-client/1.1"
Feb 23 05:06:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v619: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 33 KiB/s wr, 2 op/s
Feb 23 05:06:13 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9ee6a803-0cca-4521-9b16-39980c0fff51", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:06:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9ee6a803-0cca-4521-9b16-39980c0fff51, vol_name:cephfs) < ""
Feb 23 05:06:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9ee6a803-0cca-4521-9b16-39980c0fff51/.meta.tmp'
Feb 23 05:06:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9ee6a803-0cca-4521-9b16-39980c0fff51/.meta.tmp' to config b'/volumes/_nogroup/9ee6a803-0cca-4521-9b16-39980c0fff51/.meta'
Feb 23 05:06:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9ee6a803-0cca-4521-9b16-39980c0fff51, vol_name:cephfs) < ""
Feb 23 05:06:13 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9ee6a803-0cca-4521-9b16-39980c0fff51", "format": "json"}]: dispatch
Feb 23 05:06:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9ee6a803-0cca-4521-9b16-39980c0fff51, vol_name:cephfs) < ""
Feb 23 05:06:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9ee6a803-0cca-4521-9b16-39980c0fff51, vol_name:cephfs) < ""
Feb 23 05:06:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.
Feb 23 05:06:14 localhost podman[327256]: 2026-02-23 10:06:13.99951862 +0000 UTC m=+0.078383188 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 23 05:06:14 localhost podman[327256]: 2026-02-23 10:06:14.033462048 +0000 UTC m=+0.112326636 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 23 05:06:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.
Feb 23 05:06:14 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully.
Feb 23 05:06:14 localhost podman[327280]: 2026-02-23 10:06:14.149355703 +0000 UTC m=+0.077298815 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, config_id=openstack_network_exporter, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=)
Feb 23 05:06:14 localhost podman[327280]: 2026-02-23 10:06:14.186611613 +0000 UTC m=+0.114554755 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, release=1770267347, distribution-scope=public, io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 23 05:06:14 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully.
Feb 23 05:06:14 localhost ovn_controller[155966]: 2026-02-23T10:06:14Z|00442|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Feb 23 05:06:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v620: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 32 KiB/s wr, 2 op/s
Feb 23 05:06:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:06:15 localhost nova_compute[280321]: 2026-02-23 10:06:15.746 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:06:15 localhost nova_compute[280321]: 2026-02-23 10:06:15.748 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:06:15 localhost nova_compute[280321]: 2026-02-23 10:06:15.748 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 23 05:06:15 localhost nova_compute[280321]: 2026-02-23 10:06:15.748 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:06:15 localhost nova_compute[280321]: 2026-02-23 10:06:15.774 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:06:15 localhost nova_compute[280321]: 2026-02-23 10:06:15.774 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:06:17 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b02fa7b7-50ec-418a-bd71-13045a2641bc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:06:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b02fa7b7-50ec-418a-bd71-13045a2641bc, vol_name:cephfs) < ""
Feb 23 05:06:17 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b02fa7b7-50ec-418a-bd71-13045a2641bc/.meta.tmp'
Feb 23 05:06:17 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b02fa7b7-50ec-418a-bd71-13045a2641bc/.meta.tmp' to config b'/volumes/_nogroup/b02fa7b7-50ec-418a-bd71-13045a2641bc/.meta'
Feb 23 05:06:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b02fa7b7-50ec-418a-bd71-13045a2641bc, vol_name:cephfs) < ""
Feb 23 05:06:17 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b02fa7b7-50ec-418a-bd71-13045a2641bc", "format": "json"}]: dispatch
Feb 23 05:06:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b02fa7b7-50ec-418a-bd71-13045a2641bc, vol_name:cephfs) < ""
Feb 23 05:06:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b02fa7b7-50ec-418a-bd71-13045a2641bc, vol_name:cephfs) < ""
Feb 23 05:06:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v621: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 51 KiB/s wr, 3 op/s
Feb 23 05:06:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.
Feb 23 05:06:19 localhost podman[327300]: 2026-02-23 10:06:18.999109919 +0000 UTC m=+0.075409507 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 05:06:19 localhost podman[327300]: 2026-02-23 10:06:19.034867743 +0000 UTC m=+0.111167391 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 23 05:06:19 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully.
Feb 23 05:06:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v622: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s wr, 1 op/s
Feb 23 05:06:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:06:20 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "35bee8f5-0cb3-465a-86d6-ab23214f69ab", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:06:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:35bee8f5-0cb3-465a-86d6-ab23214f69ab, vol_name:cephfs) < ""
Feb 23 05:06:20 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/35bee8f5-0cb3-465a-86d6-ab23214f69ab/.meta.tmp'
Feb 23 05:06:20 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/35bee8f5-0cb3-465a-86d6-ab23214f69ab/.meta.tmp' to config b'/volumes/_nogroup/35bee8f5-0cb3-465a-86d6-ab23214f69ab/.meta'
Feb 23 05:06:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:35bee8f5-0cb3-465a-86d6-ab23214f69ab, vol_name:cephfs) < ""
Feb 23 05:06:20 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "35bee8f5-0cb3-465a-86d6-ab23214f69ab", "format": "json"}]: dispatch
Feb 23 05:06:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:35bee8f5-0cb3-465a-86d6-ab23214f69ab, vol_name:cephfs) < ""
Feb 23 05:06:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:35bee8f5-0cb3-465a-86d6-ab23214f69ab, vol_name:cephfs) < ""
Feb 23 05:06:20 localhost nova_compute[280321]: 2026-02-23 10:06:20.775 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:06:20 localhost nova_compute[280321]: 2026-02-23 10:06:20.777 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:06:20 localhost nova_compute[280321]: 2026-02-23 10:06:20.777 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 23 05:06:20 localhost nova_compute[280321]: 2026-02-23 10:06:20.777 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:06:20 localhost nova_compute[280321]: 2026-02-23 10:06:20.820 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:06:20 localhost nova_compute[280321]: 2026-02-23 10:06:20.820 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:06:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v623: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 51 KiB/s wr, 3 op/s
Feb 23 05:06:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v624: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s wr, 2 op/s
Feb 23 05:06:24 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "35bee8f5-0cb3-465a-86d6-ab23214f69ab", "format": "json"}]: dispatch
Feb 23 05:06:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:35bee8f5-0cb3-465a-86d6-ab23214f69ab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:06:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:35bee8f5-0cb3-465a-86d6-ab23214f69ab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:06:24 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:06:24.412+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '35bee8f5-0cb3-465a-86d6-ab23214f69ab' of type subvolume
Feb 23 05:06:24 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '35bee8f5-0cb3-465a-86d6-ab23214f69ab' of type subvolume
Feb 23 05:06:24 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "35bee8f5-0cb3-465a-86d6-ab23214f69ab", "force": true, "format": "json"}]: dispatch
Feb 23 05:06:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:35bee8f5-0cb3-465a-86d6-ab23214f69ab, vol_name:cephfs) < ""
Feb 23 05:06:24 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/35bee8f5-0cb3-465a-86d6-ab23214f69ab'' moved to trashcan
Feb 23 05:06:24 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 23 05:06:24 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:35bee8f5-0cb3-465a-86d6-ab23214f69ab, vol_name:cephfs) < ""
Feb 23 05:06:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v625: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s wr, 2 op/s
Feb 23 05:06:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:06:25 localhost nova_compute[280321]: 2026-02-23 10:06:25.821 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:06:25 localhost nova_compute[280321]: 2026-02-23 10:06:25.848 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:06:25 localhost nova_compute[280321]: 2026-02-23 10:06:25.848 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5027 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 23 05:06:25 localhost nova_compute[280321]: 2026-02-23 10:06:25.848 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:06:25 localhost nova_compute[280321]: 2026-02-23 10:06:25.850 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:06:25 localhost nova_compute[280321]: 2026-02-23 10:06:25.850 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:06:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.
Feb 23 05:06:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.
Feb 23 05:06:26 localhost podman[327325]: 2026-02-23 10:06:26.995259645 +0000 UTC m=+0.065300469 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Feb 23 05:06:27 localhost podman[327325]: 2026-02-23 10:06:27.005916071 +0000 UTC m=+0.075956865 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute)
Feb 23 05:06:27 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully.
Feb 23 05:06:27 localhost systemd[1]: tmp-crun.fTRJaE.mount: Deactivated successfully.
Feb 23 05:06:27 localhost podman[327324]: 2026-02-23 10:06:27.070589169 +0000 UTC m=+0.141110857 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260216, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Feb 23 05:06:27 localhost podman[327324]: 2026-02-23 10:06:27.099791932 +0000 UTC m=+0.170313590 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Feb 23 05:06:27 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully.
Feb 23 05:06:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v626: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 66 KiB/s wr, 3 op/s
Feb 23 05:06:27 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b02fa7b7-50ec-418a-bd71-13045a2641bc", "format": "json"}]: dispatch
Feb 23 05:06:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b02fa7b7-50ec-418a-bd71-13045a2641bc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:06:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b02fa7b7-50ec-418a-bd71-13045a2641bc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:06:27 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:06:27.614+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b02fa7b7-50ec-418a-bd71-13045a2641bc' of type subvolume
Feb 23 05:06:27 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b02fa7b7-50ec-418a-bd71-13045a2641bc' of type subvolume
Feb 23 05:06:27
localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b02fa7b7-50ec-418a-bd71-13045a2641bc' of type subvolume Feb 23 05:06:27 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b02fa7b7-50ec-418a-bd71-13045a2641bc", "force": true, "format": "json"}]: dispatch Feb 23 05:06:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b02fa7b7-50ec-418a-bd71-13045a2641bc, vol_name:cephfs) < "" Feb 23 05:06:27 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b02fa7b7-50ec-418a-bd71-13045a2641bc'' moved to trashcan Feb 23 05:06:27 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:06:27 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b02fa7b7-50ec-418a-bd71-13045a2641bc, vol_name:cephfs) < "" Feb 23 05:06:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v627: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 48 KiB/s wr, 2 op/s Feb 23 05:06:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.851 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.853 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.853 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.853 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:30 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9ee6a803-0cca-4521-9b16-39980c0fff51", "format": "json"}]: dispatch Feb 23 05:06:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9ee6a803-0cca-4521-9b16-39980c0fff51, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:06:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9ee6a803-0cca-4521-9b16-39980c0fff51, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:06:30 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:06:30.863+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9ee6a803-0cca-4521-9b16-39980c0fff51' of type subvolume Feb 23 05:06:30 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9ee6a803-0cca-4521-9b16-39980c0fff51' of type subvolume Feb 23 05:06:30 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9ee6a803-0cca-4521-9b16-39980c0fff51", "force": true, "format": 
"json"}]: dispatch Feb 23 05:06:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9ee6a803-0cca-4521-9b16-39980c0fff51, vol_name:cephfs) < "" Feb 23 05:06:30 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9ee6a803-0cca-4521-9b16-39980c0fff51'' moved to trashcan Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.892 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 05:06:30 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:06:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9ee6a803-0cca-4521-9b16-39980c0fff51, vol_name:cephfs) < "" Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.897 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.897 280325 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.907 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.908 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.908 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:06:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.929 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.929 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.930 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.930 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 05:06:30 localhost nova_compute[280321]: 2026-02-23 10:06:30.930 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 05:06:31 localhost podman[327361]: 2026-02-23 10:06:31.01129416 +0000 UTC m=+0.085026481 container health_status 
7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 23 05:06:31 localhost podman[327361]: 2026-02-23 10:06:31.019170681 +0000 UTC m=+0.092902972 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , 
managed_by=edpm_ansible) Feb 23 05:06:31 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 05:06:31 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 05:06:31 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3290805769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 05:06:31 localhost nova_compute[280321]: 2026-02-23 10:06:31.371 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 05:06:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v628: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 69 KiB/s wr, 4 op/s Feb 23 05:06:31 localhost nova_compute[280321]: 2026-02-23 10:06:31.533 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 05:06:31 localhost nova_compute[280321]: 2026-02-23 10:06:31.533 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=11557MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 05:06:31 localhost nova_compute[280321]: 2026-02-23 10:06:31.534 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:06:31 localhost nova_compute[280321]: 2026-02-23 10:06:31.534 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:06:31 localhost nova_compute[280321]: 2026-02-23 10:06:31.617 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 05:06:31 localhost nova_compute[280321]: 2026-02-23 10:06:31.617 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 05:06:31 localhost nova_compute[280321]: 2026-02-23 10:06:31.637 280325 DEBUG 
oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 05:06:32 localhost openstack_network_exporter[243519]: ERROR 10:06:32 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 05:06:32 localhost openstack_network_exporter[243519]: Feb 23 05:06:32 localhost openstack_network_exporter[243519]: ERROR 10:06:32 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 05:06:32 localhost openstack_network_exporter[243519]: Feb 23 05:06:32 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 05:06:32 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3505312169' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 05:06:32 localhost nova_compute[280321]: 2026-02-23 10:06:32.095 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 05:06:32 localhost nova_compute[280321]: 2026-02-23 10:06:32.100 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 05:06:32 localhost nova_compute[280321]: 2026-02-23 10:06:32.136 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 
9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 05:06:32 localhost nova_compute[280321]: 2026-02-23 10:06:32.138 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 05:06:32 localhost nova_compute[280321]: 2026-02-23 10:06:32.138 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:06:33 localhost nova_compute[280321]: 2026-02-23 10:06:33.123 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:06:33 localhost nova_compute[280321]: 2026-02-23 10:06:33.124 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:06:33 localhost nova_compute[280321]: 2026-02-23 10:06:33.124 280325 DEBUG 
nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 05:06:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v629: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 45 KiB/s wr, 2 op/s Feb 23 05:06:33 localhost nova_compute[280321]: 2026-02-23 10:06:33.888 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:06:34 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "068d76ef-57bb-47e8-bc0e-5cafb295f112", "format": "json"}]: dispatch Feb 23 05:06:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:068d76ef-57bb-47e8-bc0e-5cafb295f112, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:06:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:068d76ef-57bb-47e8-bc0e-5cafb295f112, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:06:34 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:06:34.122+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '068d76ef-57bb-47e8-bc0e-5cafb295f112' of type subvolume Feb 23 05:06:34 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '068d76ef-57bb-47e8-bc0e-5cafb295f112' of type subvolume Feb 23 05:06:34 localhost ceph-mon[296755]: 
mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 05:06:34 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 05:06:34 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 05:06:34 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 05:06:34 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "068d76ef-57bb-47e8-bc0e-5cafb295f112", "force": true, "format": "json"}]: dispatch Feb 23 05:06:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:068d76ef-57bb-47e8-bc0e-5cafb295f112, vol_name:cephfs) < "" Feb 23 05:06:34 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 05:06:34 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/068d76ef-57bb-47e8-bc0e-5cafb295f112'' moved to trashcan Feb 23 05:06:34 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:06:34 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:068d76ef-57bb-47e8-bc0e-5cafb295f112, vol_name:cephfs) < "" Feb 23 05:06:34 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev ce40611f-f508-4663-8754-efbef5a4f76f 
(Updating node-proxy deployment (+3 -> 3)) Feb 23 05:06:34 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev ce40611f-f508-4663-8754-efbef5a4f76f (Updating node-proxy deployment (+3 -> 3)) Feb 23 05:06:34 localhost ceph-mgr[285904]: [progress INFO root] Completed event ce40611f-f508-4663-8754-efbef5a4f76f (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 23 05:06:34 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 05:06:34 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 05:06:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:06:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:06:35 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 05:06:35 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:06:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:06:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:06:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 05:06:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:06:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v630: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 45 KiB/s wr, 2 op/s Feb 23 05:06:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:06:35 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 05:06:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 05:06:35 localhost nova_compute[280321]: 2026-02-23 10:06:35.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:06:35 localhost nova_compute[280321]: 2026-02-23 10:06:35.898 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:35 localhost nova_compute[280321]: 2026-02-23 10:06:35.900 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:35 localhost nova_compute[280321]: 2026-02-23 10:06:35.900 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:06:35 localhost nova_compute[280321]: 2026-02-23 10:06:35.901 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:35 localhost 
nova_compute[280321]: 2026-02-23 10:06:35.937 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:06:35 localhost nova_compute[280321]: 2026-02-23 10:06:35.938 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:36 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:06:36 localhost nova_compute[280321]: 2026-02-23 10:06:36.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:06:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v631: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 62 KiB/s wr, 4 op/s Feb 23 05:06:37 localhost nova_compute[280321]: 2026-02-23 10:06:37.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:06:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v632: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 38 KiB/s wr, 3 op/s Feb 23 05:06:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:06:40 localhost nova_compute[280321]: 2026-02-23 10:06:40.938 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:40 localhost 
nova_compute[280321]: 2026-02-23 10:06:40.940 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:40 localhost nova_compute[280321]: 2026-02-23 10:06:40.940 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:06:40 localhost nova_compute[280321]: 2026-02-23 10:06:40.941 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:40 localhost nova_compute[280321]: 2026-02-23 10:06:40.979 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:06:40 localhost nova_compute[280321]: 2026-02-23 10:06:40.980 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v633: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 43 KiB/s wr, 4 op/s Feb 23 05:06:42 localhost podman[241086]: time="2026-02-23T10:06:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 05:06:42 localhost podman[241086]: @ - - [23/Feb/2026:10:06:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 05:06:42 localhost podman[241086]: @ - - [23/Feb/2026:10:06:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17823 "" "Go-http-client/1.1" Feb 23 05:06:43 localhost sshd[327514]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:06:43 localhost ceph-mgr[285904]: 
log_channel(cluster) log [DBG] : pgmap v634: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 22 KiB/s wr, 2 op/s Feb 23 05:06:43 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bb91ee88-1d36-4aac-91eb-1313f2ece1d9", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:06:43 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:bb91ee88-1d36-4aac-91eb-1313f2ece1d9, vol_name:cephfs) < "" Feb 23 05:06:43 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bb91ee88-1d36-4aac-91eb-1313f2ece1d9/.meta.tmp' Feb 23 05:06:43 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bb91ee88-1d36-4aac-91eb-1313f2ece1d9/.meta.tmp' to config b'/volumes/_nogroup/bb91ee88-1d36-4aac-91eb-1313f2ece1d9/.meta' Feb 23 05:06:43 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:bb91ee88-1d36-4aac-91eb-1313f2ece1d9, vol_name:cephfs) < "" Feb 23 05:06:43 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bb91ee88-1d36-4aac-91eb-1313f2ece1d9", "format": "json"}]: dispatch Feb 23 05:06:43 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bb91ee88-1d36-4aac-91eb-1313f2ece1d9, vol_name:cephfs) < "" Feb 23 05:06:44 localhost 
ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bb91ee88-1d36-4aac-91eb-1313f2ece1d9, vol_name:cephfs) < "" Feb 23 05:06:44 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3c02aa0e-f4f3-4457-b672-3178b42295fb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:06:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3c02aa0e-f4f3-4457-b672-3178b42295fb, vol_name:cephfs) < "" Feb 23 05:06:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 05:06:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 05:06:44 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3c02aa0e-f4f3-4457-b672-3178b42295fb/.meta.tmp' Feb 23 05:06:44 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3c02aa0e-f4f3-4457-b672-3178b42295fb/.meta.tmp' to config b'/volumes/_nogroup/3c02aa0e-f4f3-4457-b672-3178b42295fb/.meta' Feb 23 05:06:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3c02aa0e-f4f3-4457-b672-3178b42295fb, vol_name:cephfs) < "" Feb 23 05:06:44 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3c02aa0e-f4f3-4457-b672-3178b42295fb", "format": "json"}]: dispatch Feb 23 05:06:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3c02aa0e-f4f3-4457-b672-3178b42295fb, vol_name:cephfs) < "" Feb 23 05:06:44 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3c02aa0e-f4f3-4457-b672-3178b42295fb, vol_name:cephfs) < "" Feb 23 05:06:45 localhost podman[327516]: 2026-02-23 10:06:45.018811447 +0000 UTC m=+0.090640664 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 05:06:45 localhost podman[327517]: 2026-02-23 10:06:45.066927998 +0000 UTC m=+0.134953868 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, release=1770267347, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base 
Image 9 Minimal) Feb 23 05:06:45 localhost podman[327516]: 2026-02-23 10:06:45.083364201 +0000 UTC m=+0.155193418 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 05:06:45 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 05:06:45 localhost podman[327517]: 2026-02-23 10:06:45.134050061 +0000 UTC m=+0.202075941 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, release=1770267347, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a 
package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Feb 23 05:06:45 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 05:06:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v635: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 22 KiB/s wr, 2 op/s Feb 23 05:06:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:06:45 localhost nova_compute[280321]: 2026-02-23 10:06:45.981 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:45 localhost nova_compute[280321]: 2026-02-23 10:06:45.983 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:45 localhost nova_compute[280321]: 2026-02-23 10:06:45.984 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:06:45 localhost nova_compute[280321]: 2026-02-23 10:06:45.984 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:46 localhost nova_compute[280321]: 2026-02-23 10:06:46.020 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:06:46 localhost nova_compute[280321]: 2026-02-23 10:06:46.021 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:47 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "bb91ee88-1d36-4aac-91eb-1313f2ece1d9", "snap_name": 
"9064a32e-718f-4f83-9c53-c0c3061d4e6f", "format": "json"}]: dispatch Feb 23 05:06:47 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9064a32e-718f-4f83-9c53-c0c3061d4e6f, sub_name:bb91ee88-1d36-4aac-91eb-1313f2ece1d9, vol_name:cephfs) < "" Feb 23 05:06:47 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9064a32e-718f-4f83-9c53-c0c3061d4e6f, sub_name:bb91ee88-1d36-4aac-91eb-1313f2ece1d9, vol_name:cephfs) < "" Feb 23 05:06:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v636: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 38 KiB/s wr, 3 op/s Feb 23 05:06:48 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3c02aa0e-f4f3-4457-b672-3178b42295fb", "snap_name": "522218e6-6786-4e43-9e8b-ad59b96bf4ec", "format": "json"}]: dispatch Feb 23 05:06:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:522218e6-6786-4e43-9e8b-ad59b96bf4ec, sub_name:3c02aa0e-f4f3-4457-b672-3178b42295fb, vol_name:cephfs) < "" Feb 23 05:06:48 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:522218e6-6786-4e43-9e8b-ad59b96bf4ec, sub_name:3c02aa0e-f4f3-4457-b672-3178b42295fb, vol_name:cephfs) < "" Feb 23 05:06:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:06:48.321 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:06:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:06:48.322 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:06:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:06:48.322 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:06:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v637: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s wr, 1 op/s Feb 23 05:06:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 05:06:50 localhost podman[327559]: 2026-02-23 10:06:50.003624092 +0000 UTC m=+0.077013147 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 23 05:06:50 localhost podman[327559]: 2026-02-23 10:06:50.10328832 +0000 UTC m=+0.176677355 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 23 05:06:50 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 05:06:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:06:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bb91ee88-1d36-4aac-91eb-1313f2ece1d9", "snap_name": "9064a32e-718f-4f83-9c53-c0c3061d4e6f_e446a0bc-a113-46f8-af36-02f01906b501", "force": true, "format": "json"}]: dispatch Feb 23 05:06:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9064a32e-718f-4f83-9c53-c0c3061d4e6f_e446a0bc-a113-46f8-af36-02f01906b501, sub_name:bb91ee88-1d36-4aac-91eb-1313f2ece1d9, vol_name:cephfs) < "" Feb 23 05:06:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bb91ee88-1d36-4aac-91eb-1313f2ece1d9/.meta.tmp' Feb 23 05:06:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bb91ee88-1d36-4aac-91eb-1313f2ece1d9/.meta.tmp' to config b'/volumes/_nogroup/bb91ee88-1d36-4aac-91eb-1313f2ece1d9/.meta' Feb 23 05:06:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9064a32e-718f-4f83-9c53-c0c3061d4e6f_e446a0bc-a113-46f8-af36-02f01906b501, sub_name:bb91ee88-1d36-4aac-91eb-1313f2ece1d9, vol_name:cephfs) < "" Feb 23 05:06:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bb91ee88-1d36-4aac-91eb-1313f2ece1d9", "snap_name": "9064a32e-718f-4f83-9c53-c0c3061d4e6f", "force": true, "format": "json"}]: 
dispatch Feb 23 05:06:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9064a32e-718f-4f83-9c53-c0c3061d4e6f, sub_name:bb91ee88-1d36-4aac-91eb-1313f2ece1d9, vol_name:cephfs) < "" Feb 23 05:06:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bb91ee88-1d36-4aac-91eb-1313f2ece1d9/.meta.tmp' Feb 23 05:06:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bb91ee88-1d36-4aac-91eb-1313f2ece1d9/.meta.tmp' to config b'/volumes/_nogroup/bb91ee88-1d36-4aac-91eb-1313f2ece1d9/.meta' Feb 23 05:06:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9064a32e-718f-4f83-9c53-c0c3061d4e6f, sub_name:bb91ee88-1d36-4aac-91eb-1313f2ece1d9, vol_name:cephfs) < "" Feb 23 05:06:50 localhost sshd[327584]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:06:51 localhost nova_compute[280321]: 2026-02-23 10:06:51.021 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:51 localhost nova_compute[280321]: 2026-02-23 10:06:51.023 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:51 localhost nova_compute[280321]: 2026-02-23 10:06:51.024 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:06:51 localhost nova_compute[280321]: 2026-02-23 10:06:51.024 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:51 localhost nova_compute[280321]: 2026-02-23 10:06:51.024 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:06:51 localhost nova_compute[280321]: 2026-02-23 10:06:51.025 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v638: 177 pgs: 177 active+clean; 221 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 43 KiB/s wr, 3 op/s Feb 23 05:06:51 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3c02aa0e-f4f3-4457-b672-3178b42295fb", "snap_name": "522218e6-6786-4e43-9e8b-ad59b96bf4ec_ff4ad49f-c9ab-46b4-916a-7d71df69fd70", "force": true, "format": "json"}]: dispatch Feb 23 05:06:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:522218e6-6786-4e43-9e8b-ad59b96bf4ec_ff4ad49f-c9ab-46b4-916a-7d71df69fd70, sub_name:3c02aa0e-f4f3-4457-b672-3178b42295fb, vol_name:cephfs) < "" Feb 23 05:06:51 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3c02aa0e-f4f3-4457-b672-3178b42295fb/.meta.tmp' Feb 23 05:06:51 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3c02aa0e-f4f3-4457-b672-3178b42295fb/.meta.tmp' to config b'/volumes/_nogroup/3c02aa0e-f4f3-4457-b672-3178b42295fb/.meta' Feb 23 05:06:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs 
subvolume snapshot rm, snap_name:522218e6-6786-4e43-9e8b-ad59b96bf4ec_ff4ad49f-c9ab-46b4-916a-7d71df69fd70, sub_name:3c02aa0e-f4f3-4457-b672-3178b42295fb, vol_name:cephfs) < "" Feb 23 05:06:51 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3c02aa0e-f4f3-4457-b672-3178b42295fb", "snap_name": "522218e6-6786-4e43-9e8b-ad59b96bf4ec", "force": true, "format": "json"}]: dispatch Feb 23 05:06:51 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:522218e6-6786-4e43-9e8b-ad59b96bf4ec, sub_name:3c02aa0e-f4f3-4457-b672-3178b42295fb, vol_name:cephfs) < "" Feb 23 05:06:51 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3c02aa0e-f4f3-4457-b672-3178b42295fb/.meta.tmp' Feb 23 05:06:51 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3c02aa0e-f4f3-4457-b672-3178b42295fb/.meta.tmp' to config b'/volumes/_nogroup/3c02aa0e-f4f3-4457-b672-3178b42295fb/.meta' Feb 23 05:06:52 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:522218e6-6786-4e43-9e8b-ad59b96bf4ec, sub_name:3c02aa0e-f4f3-4457-b672-3178b42295fb, vol_name:cephfs) < "" Feb 23 05:06:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v639: 177 pgs: 177 active+clean; 221 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s wr, 3 op/s Feb 23 05:06:53 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bb91ee88-1d36-4aac-91eb-1313f2ece1d9", "format": "json"}]: dispatch 
Feb 23 05:06:53 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bb91ee88-1d36-4aac-91eb-1313f2ece1d9, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:06:53 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bb91ee88-1d36-4aac-91eb-1313f2ece1d9, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:06:53 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:06:53.774+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bb91ee88-1d36-4aac-91eb-1313f2ece1d9' of type subvolume Feb 23 05:06:53 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bb91ee88-1d36-4aac-91eb-1313f2ece1d9' of type subvolume Feb 23 05:06:53 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bb91ee88-1d36-4aac-91eb-1313f2ece1d9", "force": true, "format": "json"}]: dispatch Feb 23 05:06:53 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bb91ee88-1d36-4aac-91eb-1313f2ece1d9, vol_name:cephfs) < "" Feb 23 05:06:53 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bb91ee88-1d36-4aac-91eb-1313f2ece1d9'' moved to trashcan Feb 23 05:06:53 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:06:53 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bb91ee88-1d36-4aac-91eb-1313f2ece1d9, vol_name:cephfs) < "" Feb 23 05:06:55 localhost 
ceph-mon[296755]: mon.np0005626465@1(peon).osd e249 e249: 6 total, 6 up, 6 in Feb 23 05:06:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; 221 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 46 KiB/s wr, 3 op/s Feb 23 05:06:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:06:56 localhost nova_compute[280321]: 2026-02-23 10:06:56.026 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:56 localhost nova_compute[280321]: 2026-02-23 10:06:56.028 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:06:56 localhost nova_compute[280321]: 2026-02-23 10:06:56.028 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:06:56 localhost nova_compute[280321]: 2026-02-23 10:06:56.028 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:56 localhost nova_compute[280321]: 2026-02-23 10:06:56.060 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:06:56 localhost nova_compute[280321]: 2026-02-23 10:06:56.060 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 
05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 
2026-02-23 10:06:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:06:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 23 05:06:56 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3c02aa0e-f4f3-4457-b672-3178b42295fb", "format": "json"}]: dispatch Feb 23 05:06:56 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3c02aa0e-f4f3-4457-b672-3178b42295fb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:06:56 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3c02aa0e-f4f3-4457-b672-3178b42295fb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:06:56 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:06:56.939+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3c02aa0e-f4f3-4457-b672-3178b42295fb' of type subvolume Feb 23 05:06:56 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3c02aa0e-f4f3-4457-b672-3178b42295fb' of type subvolume Feb 23 05:06:56 localhost 
ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3c02aa0e-f4f3-4457-b672-3178b42295fb", "force": true, "format": "json"}]: dispatch Feb 23 05:06:56 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3c02aa0e-f4f3-4457-b672-3178b42295fb, vol_name:cephfs) < "" Feb 23 05:06:56 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3c02aa0e-f4f3-4457-b672-3178b42295fb'' moved to trashcan Feb 23 05:06:56 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:06:56 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3c02aa0e-f4f3-4457-b672-3178b42295fb, vol_name:cephfs) < "" Feb 23 05:06:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v642: 177 pgs: 177 active+clean; 221 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 53 KiB/s wr, 5 op/s Feb 23 05:06:57 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2b92e732-df0d-4744-9403-d9d9479c95ae", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:06:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2b92e732-df0d-4744-9403-d9d9479c95ae, vol_name:cephfs) < "" Feb 23 05:06:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/2b92e732-df0d-4744-9403-d9d9479c95ae/.meta.tmp' Feb 23 05:06:57 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2b92e732-df0d-4744-9403-d9d9479c95ae/.meta.tmp' to config b'/volumes/_nogroup/2b92e732-df0d-4744-9403-d9d9479c95ae/.meta' Feb 23 05:06:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2b92e732-df0d-4744-9403-d9d9479c95ae, vol_name:cephfs) < "" Feb 23 05:06:57 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2b92e732-df0d-4744-9403-d9d9479c95ae", "format": "json"}]: dispatch Feb 23 05:06:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2b92e732-df0d-4744-9403-d9d9479c95ae, vol_name:cephfs) < "" Feb 23 05:06:57 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2b92e732-df0d-4744-9403-d9d9479c95ae, vol_name:cephfs) < "" Feb 23 05:06:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 05:06:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. Feb 23 05:06:58 localhost systemd[1]: tmp-crun.aFww1N.mount: Deactivated successfully. 
Feb 23 05:06:58 localhost podman[327586]: 2026-02-23 10:06:58.020687496 +0000 UTC m=+0.090265632 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 23 05:06:58 localhost 
podman[327586]: 2026-02-23 10:06:58.026600866 +0000 UTC m=+0.096179002 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260216, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 23 05:06:58 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 05:06:58 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e250 e250: 6 total, 6 up, 6 in Feb 23 05:06:58 localhost podman[327587]: 2026-02-23 10:06:58.107109209 +0000 UTC m=+0.173799507 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216) Feb 23 05:06:58 localhost podman[327587]: 2026-02-23 10:06:58.118867458 +0000 UTC m=+0.185557826 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS) Feb 23 05:06:58 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 05:06:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v644: 177 pgs: 177 active+clean; 221 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 32 KiB/s wr, 4 op/s Feb 23 05:07:00 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aa421a79-c1f6-4044-80a5-096747f89a69", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:07:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa421a79-c1f6-4044-80a5-096747f89a69, vol_name:cephfs) < "" Feb 23 05:07:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa421a79-c1f6-4044-80a5-096747f89a69/.meta.tmp' Feb 23 05:07:00 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa421a79-c1f6-4044-80a5-096747f89a69/.meta.tmp' to config b'/volumes/_nogroup/aa421a79-c1f6-4044-80a5-096747f89a69/.meta' Feb 23 05:07:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa421a79-c1f6-4044-80a5-096747f89a69, 
vol_name:cephfs) < "" Feb 23 05:07:00 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aa421a79-c1f6-4044-80a5-096747f89a69", "format": "json"}]: dispatch Feb 23 05:07:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa421a79-c1f6-4044-80a5-096747f89a69, vol_name:cephfs) < "" Feb 23 05:07:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa421a79-c1f6-4044-80a5-096747f89a69, vol_name:cephfs) < "" Feb 23 05:07:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:07:00 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "2b92e732-df0d-4744-9403-d9d9479c95ae", "snap_name": "c8d432f8-6b3a-45b5-8455-1fb016c15c44", "format": "json"}]: dispatch Feb 23 05:07:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c8d432f8-6b3a-45b5-8455-1fb016c15c44, sub_name:2b92e732-df0d-4744-9403-d9d9479c95ae, vol_name:cephfs) < "" Feb 23 05:07:00 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c8d432f8-6b3a-45b5-8455-1fb016c15c44, sub_name:2b92e732-df0d-4744-9403-d9d9479c95ae, vol_name:cephfs) < "" Feb 23 05:07:01 localhost nova_compute[280321]: 2026-02-23 10:07:01.061 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:07:01 localhost nova_compute[280321]: 2026-02-23 10:07:01.063 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:07:01 localhost nova_compute[280321]: 2026-02-23 10:07:01.063 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:07:01 localhost nova_compute[280321]: 2026-02-23 10:07:01.063 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:07:01 localhost nova_compute[280321]: 2026-02-23 10:07:01.093 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:07:01 localhost nova_compute[280321]: 2026-02-23 10:07:01.094 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:07:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 221 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 95 KiB/s wr, 8 op/s Feb 23 05:07:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
Feb 23 05:07:01 localhost openstack_network_exporter[243519]: ERROR 10:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 05:07:01 localhost openstack_network_exporter[243519]: Feb 23 05:07:01 localhost openstack_network_exporter[243519]: ERROR 10:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 05:07:01 localhost openstack_network_exporter[243519]: Feb 23 05:07:02 localhost podman[327625]: 2026-02-23 10:07:02.009096216 +0000 UTC m=+0.080557824 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 05:07:02 localhost podman[327625]: 2026-02-23 10:07:02.020882967 +0000 UTC m=+0.092344565 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 
'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 23 05:07:02 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 05:07:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v646: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 984 B/s rd, 92 KiB/s wr, 8 op/s Feb 23 05:07:03 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aa421a79-c1f6-4044-80a5-096747f89a69", "format": "json"}]: dispatch Feb 23 05:07:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:aa421a79-c1f6-4044-80a5-096747f89a69, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:07:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:aa421a79-c1f6-4044-80a5-096747f89a69, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:07:03 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:07:03.606+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa421a79-c1f6-4044-80a5-096747f89a69' of type subvolume Feb 23 05:07:03 localhost ceph-mgr[285904]: mgr.server reply reply 
(95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa421a79-c1f6-4044-80a5-096747f89a69' of type subvolume Feb 23 05:07:03 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aa421a79-c1f6-4044-80a5-096747f89a69", "force": true, "format": "json"}]: dispatch Feb 23 05:07:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa421a79-c1f6-4044-80a5-096747f89a69, vol_name:cephfs) < "" Feb 23 05:07:03 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/aa421a79-c1f6-4044-80a5-096747f89a69'' moved to trashcan Feb 23 05:07:03 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:07:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa421a79-c1f6-4044-80a5-096747f89a69, vol_name:cephfs) < "" Feb 23 05:07:04 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "2b92e732-df0d-4744-9403-d9d9479c95ae", "snap_name": "c8d432f8-6b3a-45b5-8455-1fb016c15c44_0e17a8dc-e179-4a46-95d3-704480393090", "force": true, "format": "json"}]: dispatch Feb 23 05:07:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c8d432f8-6b3a-45b5-8455-1fb016c15c44_0e17a8dc-e179-4a46-95d3-704480393090, sub_name:2b92e732-df0d-4744-9403-d9d9479c95ae, vol_name:cephfs) < "" Feb 23 05:07:04 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to 
config b'/volumes/_nogroup/2b92e732-df0d-4744-9403-d9d9479c95ae/.meta.tmp' Feb 23 05:07:04 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2b92e732-df0d-4744-9403-d9d9479c95ae/.meta.tmp' to config b'/volumes/_nogroup/2b92e732-df0d-4744-9403-d9d9479c95ae/.meta' Feb 23 05:07:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c8d432f8-6b3a-45b5-8455-1fb016c15c44_0e17a8dc-e179-4a46-95d3-704480393090, sub_name:2b92e732-df0d-4744-9403-d9d9479c95ae, vol_name:cephfs) < "" Feb 23 05:07:04 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "2b92e732-df0d-4744-9403-d9d9479c95ae", "snap_name": "c8d432f8-6b3a-45b5-8455-1fb016c15c44", "force": true, "format": "json"}]: dispatch Feb 23 05:07:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c8d432f8-6b3a-45b5-8455-1fb016c15c44, sub_name:2b92e732-df0d-4744-9403-d9d9479c95ae, vol_name:cephfs) < "" Feb 23 05:07:04 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2b92e732-df0d-4744-9403-d9d9479c95ae/.meta.tmp' Feb 23 05:07:04 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2b92e732-df0d-4744-9403-d9d9479c95ae/.meta.tmp' to config b'/volumes/_nogroup/2b92e732-df0d-4744-9403-d9d9479c95ae/.meta' Feb 23 05:07:04 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c8d432f8-6b3a-45b5-8455-1fb016c15c44, 
sub_name:2b92e732-df0d-4744-9403-d9d9479c95ae, vol_name:cephfs) < "" Feb 23 05:07:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_10:07:05 Feb 23 05:07:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 23 05:07:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap Feb 23 05:07:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['backups', 'volumes', '.mgr', 'manila_data', 'vms', 'images', 'manila_metadata'] Feb 23 05:07:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes Feb 23 05:07:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:07:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:07:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:07:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:07:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 05:07:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:07:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v647: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 76 KiB/s wr, 6 op/s Feb 23 05:07:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 05:07:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:07:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 23 05:07:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:07:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32) Feb 23 05:07:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:07:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014869268216080402 of space, bias 1.0, pg target 0.2968897220477387 quantized to 32 (current 32) Feb 23 05:07:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:07:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 23 05:07:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:07:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 23 05:07:05 localhost 
ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:07:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00010850694444444444 quantized to 32 (current 32) Feb 23 05:07:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:07:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0020744455785873814 of space, bias 4.0, pg target 1.6512586805555556 quantized to 16 (current 16) Feb 23 05:07:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 05:07:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 05:07:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 05:07:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 05:07:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 05:07:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 05:07:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 05:07:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 05:07:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 05:07:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 05:07:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:07:06 localhost nova_compute[280321]: 2026-02-23 10:07:06.095 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms 
timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:07:06 localhost nova_compute[280321]: 2026-02-23 10:07:06.097 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:07:06 localhost nova_compute[280321]: 2026-02-23 10:07:06.097 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:07:06 localhost nova_compute[280321]: 2026-02-23 10:07:06.097 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:07:06 localhost nova_compute[280321]: 2026-02-23 10:07:06.127 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:07:06 localhost nova_compute[280321]: 2026-02-23 10:07:06.128 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:07:07 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "04d2d886-4cf3-41da-bebe-433b0519fd88", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 23 05:07:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:04d2d886-4cf3-41da-bebe-433b0519fd88, vol_name:cephfs) < "" Feb 23 05:07:07 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88/.meta.tmp' Feb 23 05:07:07 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88/.meta.tmp' to config b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88/.meta' Feb 23 05:07:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:04d2d886-4cf3-41da-bebe-433b0519fd88, vol_name:cephfs) < "" Feb 23 05:07:07 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "04d2d886-4cf3-41da-bebe-433b0519fd88", "format": "json"}]: dispatch Feb 23 05:07:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:04d2d886-4cf3-41da-bebe-433b0519fd88, vol_name:cephfs) < "" Feb 23 05:07:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:04d2d886-4cf3-41da-bebe-433b0519fd88, vol_name:cephfs) < "" Feb 23 05:07:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v648: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 70 KiB/s wr, 5 op/s Feb 23 05:07:07 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2b92e732-df0d-4744-9403-d9d9479c95ae", "format": "json"}]: dispatch Feb 23 05:07:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2b92e732-df0d-4744-9403-d9d9479c95ae, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:07:07 
localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2b92e732-df0d-4744-9403-d9d9479c95ae, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:07:07 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:07:07.639+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2b92e732-df0d-4744-9403-d9d9479c95ae' of type subvolume Feb 23 05:07:07 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2b92e732-df0d-4744-9403-d9d9479c95ae' of type subvolume Feb 23 05:07:07 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2b92e732-df0d-4744-9403-d9d9479c95ae", "force": true, "format": "json"}]: dispatch Feb 23 05:07:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2b92e732-df0d-4744-9403-d9d9479c95ae, vol_name:cephfs) < "" Feb 23 05:07:07 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2b92e732-df0d-4744-9403-d9d9479c95ae'' moved to trashcan Feb 23 05:07:07 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:07:07 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2b92e732-df0d-4744-9403-d9d9479c95ae, vol_name:cephfs) < "" Feb 23 05:07:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v649: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 360 B/s rd, 62 KiB/s wr, 4 op/s Feb 23 05:07:10 localhost ceph-mgr[285904]: 
log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "04d2d886-4cf3-41da-bebe-433b0519fd88", "snap_name": "196dffb9-7630-47d7-9de4-5970b7eeedd7", "format": "json"}]: dispatch Feb 23 05:07:10 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:196dffb9-7630-47d7-9de4-5970b7eeedd7, sub_name:04d2d886-4cf3-41da-bebe-433b0519fd88, vol_name:cephfs) < "" Feb 23 05:07:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e251 e251: 6 total, 6 up, 6 in Feb 23 05:07:10 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:196dffb9-7630-47d7-9de4-5970b7eeedd7, sub_name:04d2d886-4cf3-41da-bebe-433b0519fd88, vol_name:cephfs) < "" Feb 23 05:07:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:07:11 localhost nova_compute[280321]: 2026-02-23 10:07:11.129 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:07:11 localhost nova_compute[280321]: 2026-02-23 10:07:11.131 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:07:11 localhost nova_compute[280321]: 2026-02-23 10:07:11.131 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:07:11 localhost nova_compute[280321]: 2026-02-23 10:07:11.131 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:07:11 localhost nova_compute[280321]: 2026-02-23 10:07:11.168 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:07:11 localhost nova_compute[280321]: 2026-02-23 10:07:11.168 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:07:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v651: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 68 KiB/s wr, 5 op/s Feb 23 05:07:12 localhost ovn_metadata_agent[161837]: 2026-02-23 10:07:12.255 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:07:12 localhost ovn_metadata_agent[161837]: 2026-02-23 10:07:12.256 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 23 05:07:12 localhost nova_compute[280321]: 2026-02-23 10:07:12.267 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:07:12 localhost podman[241086]: time="2026-02-23T10:07:12Z" level=info msg="List containers: received `last` 
parameter - overwriting `limit`" Feb 23 05:07:12 localhost podman[241086]: @ - - [23/Feb/2026:10:07:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 05:07:12 localhost podman[241086]: @ - - [23/Feb/2026:10:07:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17830 "" "Go-http-client/1.1" Feb 23 05:07:12 localhost sshd[327648]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:07:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v652: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 68 KiB/s wr, 5 op/s Feb 23 05:07:13 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "04d2d886-4cf3-41da-bebe-433b0519fd88", "snap_name": "196dffb9-7630-47d7-9de4-5970b7eeedd7", "target_sub_name": "3b946c8c-2581-45e9-b8f6-ec1026ed5801", "format": "json"}]: dispatch Feb 23 05:07:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:196dffb9-7630-47d7-9de4-5970b7eeedd7, sub_name:04d2d886-4cf3-41da-bebe-433b0519fd88, target_sub_name:3b946c8c-2581-45e9-b8f6-ec1026ed5801, vol_name:cephfs) < "" Feb 23 05:07:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/3b946c8c-2581-45e9-b8f6-ec1026ed5801/.meta.tmp' Feb 23 05:07:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3b946c8c-2581-45e9-b8f6-ec1026ed5801/.meta.tmp' to config b'/volumes/_nogroup/3b946c8c-2581-45e9-b8f6-ec1026ed5801/.meta' Feb 23 05:07:13 localhost ceph-mgr[285904]: [volumes INFO 
volumes.fs.operations.clone_index] tracking-id 5e22c898-d938-4116-b6dc-7d630f75bd8d for path b'/volumes/_nogroup/3b946c8c-2581-45e9-b8f6-ec1026ed5801' Feb 23 05:07:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88/.meta.tmp' Feb 23 05:07:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88/.meta.tmp' to config b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88/.meta' Feb 23 05:07:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:07:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:196dffb9-7630-47d7-9de4-5970b7eeedd7, sub_name:04d2d886-4cf3-41da-bebe-433b0519fd88, target_sub_name:3b946c8c-2581-45e9-b8f6-ec1026ed5801, vol_name:cephfs) < "" Feb 23 05:07:13 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3b946c8c-2581-45e9-b8f6-ec1026ed5801", "format": "json"}]: dispatch Feb 23 05:07:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3b946c8c-2581-45e9-b8f6-ec1026ed5801, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:07:13 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:07:13.855+0000 7fc3bf4b7640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:07:13.855+0000 7fc3bf4b7640 
-1 client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:07:13.855+0000 7fc3bf4b7640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:07:13.855+0000 7fc3bf4b7640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:07:13.855+0000 7fc3bf4b7640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3b946c8c-2581-45e9-b8f6-ec1026ed5801, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:07:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/3b946c8c-2581-45e9-b8f6-ec1026ed5801 Feb 23 05:07:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 3b946c8c-2581-45e9-b8f6-ec1026ed5801) Feb 23 05:07:13 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:07:13.885+0000 7fc3c04b9640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists 
Feb 23 05:07:13 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:07:13.885+0000 7fc3c04b9640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:07:13.885+0000 7fc3c04b9640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:07:13.885+0000 7fc3c04b9640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:07:13.885+0000 7fc3c04b9640 -1 client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-mgr[285904]: client.0 error registering admin socket command: (17) File exists Feb 23 05:07:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 3b946c8c-2581-45e9-b8f6-ec1026ed5801) -- by 0 seconds Feb 23 05:07:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/3b946c8c-2581-45e9-b8f6-ec1026ed5801/.meta.tmp' Feb 23 05:07:13 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3b946c8c-2581-45e9-b8f6-ec1026ed5801/.meta.tmp' to config b'/volumes/_nogroup/3b946c8c-2581-45e9-b8f6-ec1026ed5801/.meta' Feb 23 05:07:15 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_cloner] 
copying data from b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88/.snap/196dffb9-7630-47d7-9de4-5970b7eeedd7/be0d99d2-56e7-43e2-a16f-90a46d65566d' to b'/volumes/_nogroup/3b946c8c-2581-45e9-b8f6-ec1026ed5801/f90bd127-b595-4d69-8041-08eb85284ae9' Feb 23 05:07:15 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/3b946c8c-2581-45e9-b8f6-ec1026ed5801/.meta.tmp' Feb 23 05:07:15 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3b946c8c-2581-45e9-b8f6-ec1026ed5801/.meta.tmp' to config b'/volumes/_nogroup/3b946c8c-2581-45e9-b8f6-ec1026ed5801/.meta' Feb 23 05:07:15 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.clone_index] untracking 5e22c898-d938-4116-b6dc-7d630f75bd8d Feb 23 05:07:15 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88/.meta.tmp' Feb 23 05:07:15 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88/.meta.tmp' to config b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88/.meta' Feb 23 05:07:15 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/3b946c8c-2581-45e9-b8f6-ec1026ed5801/.meta.tmp' Feb 23 05:07:15 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3b946c8c-2581-45e9-b8f6-ec1026ed5801/.meta.tmp' to config b'/volumes/_nogroup/3b946c8c-2581-45e9-b8f6-ec1026ed5801/.meta' Feb 23 05:07:15 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 3b946c8c-2581-45e9-b8f6-ec1026ed5801) Feb 23 05:07:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : 
pgmap v653: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 68 KiB/s wr, 5 op/s
Feb 23 05:07:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:07:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.
Feb 23 05:07:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.
Feb 23 05:07:16 localhost podman[327674]: 2026-02-23 10:07:16.011374125 +0000 UTC m=+0.085172997 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 23 05:07:16 localhost podman[327675]: 2026-02-23 10:07:16.054306928 +0000 UTC m=+0.128367867 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.7, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter)
Feb 23 05:07:16 localhost podman[327675]: 2026-02-23 10:07:16.068816501 +0000 UTC m=+0.142877440 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, version=9.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 23 05:07:16 localhost podman[327674]: 2026-02-23 10:07:16.079028674 +0000 UTC m=+0.152827536 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Feb 23 05:07:16 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully.
Feb 23 05:07:16 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully.
Feb 23 05:07:16 localhost nova_compute[280321]: 2026-02-23 10:07:16.202 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:07:17 localhost ovn_metadata_agent[161837]: 2026-02-23 10:07:17.258 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 23 05:07:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v654: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 78 KiB/s wr, 6 op/s
Feb 23 05:07:17 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 23 05:07:17 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/657266939' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 23 05:07:17 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 23 05:07:17 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/657266939' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 23 05:07:17 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3b946c8c-2581-45e9-b8f6-ec1026ed5801", "format": "json"}]: dispatch
Feb 23 05:07:17 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3b946c8c-2581-45e9-b8f6-ec1026ed5801, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:07:18 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e252 e252: 6 total, 6 up, 6 in
Feb 23 05:07:18 localhost sshd[327717]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 05:07:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 795 B/s rd, 87 KiB/s wr, 7 op/s
Feb 23 05:07:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3b946c8c-2581-45e9-b8f6-ec1026ed5801, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:07:20 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3b946c8c-2581-45e9-b8f6-ec1026ed5801", "format": "json"}]: dispatch
Feb 23 05:07:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3b946c8c-2581-45e9-b8f6-ec1026ed5801, vol_name:cephfs) < ""
Feb 23 05:07:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3b946c8c-2581-45e9-b8f6-ec1026ed5801, vol_name:cephfs) < ""
Feb 23 05:07:20 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 23 05:07:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:07:20 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp'
Feb 23 05:07:20 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta'
Feb 23 05:07:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:07:20 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "format": "json"}]: dispatch
Feb 23 05:07:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:07:20 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:07:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:07:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.
Feb 23 05:07:21 localhost systemd[1]: tmp-crun.7wQIXn.mount: Deactivated successfully.
Feb 23 05:07:21 localhost podman[327719]: 2026-02-23 10:07:21.008604889 +0000 UTC m=+0.083870936 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 05:07:21 localhost podman[327719]: 2026-02-23 10:07:21.083943103 +0000 UTC m=+0.159209190 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 23 05:07:21 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully.
Feb 23 05:07:21 localhost nova_compute[280321]: 2026-02-23 10:07:21.204 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:07:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v657: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 62 KiB/s wr, 6 op/s
Feb 23 05:07:21 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "4660794f-8745-4cec-b528-d5d739724996", "format": "json"}]: dispatch
Feb 23 05:07:21 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4660794f-8745-4cec-b528-d5d739724996, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:07:21 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4660794f-8745-4cec-b528-d5d739724996, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:07:23 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3b946c8c-2581-45e9-b8f6-ec1026ed5801", "format": "json"}]: dispatch
Feb 23 05:07:23 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3b946c8c-2581-45e9-b8f6-ec1026ed5801, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:07:23 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3b946c8c-2581-45e9-b8f6-ec1026ed5801, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:07:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v658: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 63 KiB/s wr, 6 op/s
Feb 23 05:07:23 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3b946c8c-2581-45e9-b8f6-ec1026ed5801", "force": true, "format": "json"}]: dispatch
Feb 23 05:07:23 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3b946c8c-2581-45e9-b8f6-ec1026ed5801, vol_name:cephfs) < ""
Feb 23 05:07:23 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3b946c8c-2581-45e9-b8f6-ec1026ed5801'' moved to trashcan
Feb 23 05:07:23 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 23 05:07:23 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3b946c8c-2581-45e9-b8f6-ec1026ed5801, vol_name:cephfs) < ""
Feb 23 05:07:25 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "dc7adc7f-ef4c-4373-9a38-3f5ab1a74e0b", "format": "json"}]: dispatch
Feb 23 05:07:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:dc7adc7f-ef4c-4373-9a38-3f5ab1a74e0b, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:07:25 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:dc7adc7f-ef4c-4373-9a38-3f5ab1a74e0b, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:07:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v659: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 63 KiB/s wr, 6 op/s
Feb 23 05:07:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:07:26 localhost nova_compute[280321]: 2026-02-23 10:07:26.208 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:07:26 localhost nova_compute[280321]: 2026-02-23 10:07:26.210 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:07:26 localhost nova_compute[280321]: 2026-02-23 10:07:26.210 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 23 05:07:26 localhost nova_compute[280321]: 2026-02-23 10:07:26.210 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:07:26 localhost nova_compute[280321]: 2026-02-23 10:07:26.237 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:07:26 localhost nova_compute[280321]: 2026-02-23 10:07:26.237 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:07:26 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "04d2d886-4cf3-41da-bebe-433b0519fd88", "snap_name": "196dffb9-7630-47d7-9de4-5970b7eeedd7_0eaf20cf-e6ae-41ce-b2d6-53edd51a51b8", "force": true, "format": "json"}]: dispatch
Feb 23 05:07:26 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:196dffb9-7630-47d7-9de4-5970b7eeedd7_0eaf20cf-e6ae-41ce-b2d6-53edd51a51b8, sub_name:04d2d886-4cf3-41da-bebe-433b0519fd88, vol_name:cephfs) < ""
Feb 23 05:07:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 59 KiB/s wr, 4 op/s
Feb 23 05:07:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.
Feb 23 05:07:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.
Feb 23 05:07:29 localhost podman[327743]: 2026-02-23 10:07:29.010368195 +0000 UTC m=+0.081410510 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 23 05:07:29 localhost podman[327743]: 2026-02-23 10:07:29.015473062 +0000 UTC m=+0.086515377 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 23 05:07:29 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully.
Feb 23 05:07:29 localhost systemd[1]: tmp-crun.NpUiaR.mount: Deactivated successfully.
Feb 23 05:07:29 localhost podman[327744]: 2026-02-23 10:07:29.066174782 +0000 UTC m=+0.134589747 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Feb 23 05:07:29 localhost podman[327744]: 2026-02-23 10:07:29.076342384 +0000 UTC m=+0.144757339 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 05:07:29 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully.
Feb 23 05:07:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v661: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 541 B/s rd, 52 KiB/s wr, 4 op/s
Feb 23 05:07:30 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88/.meta.tmp'
Feb 23 05:07:30 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88/.meta.tmp' to config b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88/.meta'
Feb 23 05:07:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:196dffb9-7630-47d7-9de4-5970b7eeedd7_0eaf20cf-e6ae-41ce-b2d6-53edd51a51b8, sub_name:04d2d886-4cf3-41da-bebe-433b0519fd88, vol_name:cephfs) < ""
Feb 23 05:07:30 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "04d2d886-4cf3-41da-bebe-433b0519fd88", "snap_name": "196dffb9-7630-47d7-9de4-5970b7eeedd7", "force": true, "format": "json"}]: dispatch
Feb 23 05:07:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:196dffb9-7630-47d7-9de4-5970b7eeedd7, sub_name:04d2d886-4cf3-41da-bebe-433b0519fd88, vol_name:cephfs) < ""
Feb 23 05:07:30 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88/.meta.tmp' Feb 23 05:07:30 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88/.meta.tmp' to config b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88/.meta' Feb 23 05:07:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:196dffb9-7630-47d7-9de4-5970b7eeedd7, sub_name:04d2d886-4cf3-41da-bebe-433b0519fd88, vol_name:cephfs) < "" Feb 23 05:07:30 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "dc7adc7f-ef4c-4373-9a38-3f5ab1a74e0b_cf1c99c7-561b-4f60-8c3a-b48fb1569a95", "force": true, "format": "json"}]: dispatch Feb 23 05:07:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:dc7adc7f-ef4c-4373-9a38-3f5ab1a74e0b_cf1c99c7-561b-4f60-8c3a-b48fb1569a95, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:30 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' Feb 23 05:07:30 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta' Feb 23 05:07:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:dc7adc7f-ef4c-4373-9a38-3f5ab1a74e0b_cf1c99c7-561b-4f60-8c3a-b48fb1569a95, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:30 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "dc7adc7f-ef4c-4373-9a38-3f5ab1a74e0b", "force": true, "format": "json"}]: dispatch Feb 23 05:07:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:dc7adc7f-ef4c-4373-9a38-3f5ab1a74e0b, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:30 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' Feb 23 05:07:30 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta' Feb 23 05:07:30 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:dc7adc7f-ef4c-4373-9a38-3f5ab1a74e0b, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:07:30 localhost nova_compute[280321]: 2026-02-23 10:07:30.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task 
ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:07:31 localhost nova_compute[280321]: 2026-02-23 10:07:31.237 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:07:31 localhost nova_compute[280321]: 2026-02-23 10:07:31.239 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:07:31 localhost nova_compute[280321]: 2026-02-23 10:07:31.239 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:07:31 localhost nova_compute[280321]: 2026-02-23 10:07:31.239 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:07:31 localhost nova_compute[280321]: 2026-02-23 10:07:31.270 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:07:31 localhost nova_compute[280321]: 2026-02-23 10:07:31.270 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:07:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 90 KiB/s wr, 6 op/s Feb 23 05:07:31 localhost openstack_network_exporter[243519]: ERROR 10:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 05:07:31 localhost openstack_network_exporter[243519]: Feb 23 05:07:31 localhost openstack_network_exporter[243519]: ERROR 10:07:31 appctl.go:174: 
call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 05:07:31 localhost openstack_network_exporter[243519]: Feb 23 05:07:32 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "86119af1-6e98-43f4-a09a-0b1717aea16f", "format": "json"}]: dispatch Feb 23 05:07:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:86119af1-6e98-43f4-a09a-0b1717aea16f, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:86119af1-6e98-43f4-a09a-0b1717aea16f, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:32 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "04d2d886-4cf3-41da-bebe-433b0519fd88", "format": "json"}]: dispatch Feb 23 05:07:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:04d2d886-4cf3-41da-bebe-433b0519fd88, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:07:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:04d2d886-4cf3-41da-bebe-433b0519fd88, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 23 05:07:32 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:07:32.808+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'04d2d886-4cf3-41da-bebe-433b0519fd88' of type subvolume Feb 23 05:07:32 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '04d2d886-4cf3-41da-bebe-433b0519fd88' of type subvolume Feb 23 05:07:32 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "04d2d886-4cf3-41da-bebe-433b0519fd88", "force": true, "format": "json"}]: dispatch Feb 23 05:07:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:04d2d886-4cf3-41da-bebe-433b0519fd88, vol_name:cephfs) < "" Feb 23 05:07:32 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/04d2d886-4cf3-41da-bebe-433b0519fd88'' moved to trashcan Feb 23 05:07:32 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 23 05:07:32 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:04d2d886-4cf3-41da-bebe-433b0519fd88, vol_name:cephfs) < "" Feb 23 05:07:32 localhost nova_compute[280321]: 2026-02-23 10:07:32.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:07:32 localhost nova_compute[280321]: 2026-02-23 10:07:32.891 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 05:07:32 localhost nova_compute[280321]: 2026-02-23 10:07:32.892 280325 
DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 05:07:32 localhost nova_compute[280321]: 2026-02-23 10:07:32.917 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 05:07:32 localhost nova_compute[280321]: 2026-02-23 10:07:32.917 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:07:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. 
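The nova_compute entries above follow the oslo.log default layout: timestamp, PID, level, logger name, a bracketed request context, then the message. A sketch of splitting such a line into fields — the regex is an assumption fitted to the lines shown here, not an oslo.log API:

```python
import re

# Hypothetical pattern (my own, not from oslo.log): timestamp, pid, level,
# logger, bracketed request context, then free-form message.
NOVA_LINE_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<pid>\d+) (?P<level>[A-Z]+) (?P<logger>\S+) "
    r"\[(?P<ctx>[^\]]*)\] (?P<msg>.*)"
)

def parse_nova_line(line: str):
    """Return a dict of named fields, or None if the line does not match."""
    m = NOVA_LINE_RE.match(line)
    return m.groupdict() if m else None

sample = ("2026-02-23 10:07:32.891 280325 DEBUG nova.compute.manager "
          "[None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] "
          "Starting heal instance info cache")
fields = parse_nova_line(sample)
```

The `ctx` group holds the request ID (`req-022bb347-...`) that ties together all the periodic-task entries in this capture.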
Feb 23 05:07:32 localhost nova_compute[280321]: 2026-02-23 10:07:32.937 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:07:32 localhost nova_compute[280321]: 2026-02-23 10:07:32.937 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:07:32 localhost nova_compute[280321]: 2026-02-23 10:07:32.938 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:07:32 localhost nova_compute[280321]: 2026-02-23 10:07:32.938 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 05:07:32 localhost nova_compute[280321]: 2026-02-23 10:07:32.939 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 05:07:33 localhost systemd[1]: tmp-crun.LOOwhc.mount: Deactivated successfully. 
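The resource audit above shells out to `ceph df --format=json` via oslo_concurrency.processutils. A sketch of consuming that output, under the assumption that the JSON carries a top-level `stats` object with byte totals (the sample payload below is constructed to match the ~42 GiB / 41 GiB avail figures in the pgmap lines, not captured from this cluster):

```python
import json

# Assumed output shape for `ceph df --format=json`; the numbers are synthetic,
# chosen to line up with the "41 GiB / 42 GiB avail" pgmap entries in this log.
sample_output = json.dumps({
    "stats": {
        "total_bytes": 45097156608,        # exactly 42 GiB
        "total_used_bytes": 1395864371,    # ~1.3 GiB
        "total_avail_bytes": 44023414784,  # exactly 41 GiB
    }
})

def free_gib(df_json: str) -> float:
    """Available capacity in GiB from a `ceph df --format=json` payload."""
    stats = json.loads(df_json)["stats"]
    return stats["total_avail_bytes"] / 2**30

print(round(free_gib(sample_output), 1))  # prints 41.0
```

This is the figure the resource tracker later reports as `free_disk=41.8...GB` (it uses the pool-level numbers rather than the rounded cluster summary).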
Feb 23 05:07:33 localhost podman[327779]: 2026-02-23 10:07:33.02475388 +0000 UTC m=+0.096372808 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 05:07:33 localhost podman[327779]: 2026-02-23 10:07:33.03685589 +0000 UTC m=+0.108474808 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 05:07:33 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 05:07:33 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 05:07:33 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/529488543' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 05:07:33 localhost nova_compute[280321]: 2026-02-23 10:07:33.399 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 05:07:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v663: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 64 KiB/s wr, 4 op/s Feb 23 05:07:33 localhost nova_compute[280321]: 2026-02-23 10:07:33.582 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 05:07:33 localhost nova_compute[280321]: 2026-02-23 10:07:33.584 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=11538MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 05:07:33 localhost nova_compute[280321]: 2026-02-23 10:07:33.584 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:07:33 localhost nova_compute[280321]: 2026-02-23 10:07:33.584 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:07:33 localhost nova_compute[280321]: 2026-02-23 10:07:33.641 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 05:07:33 localhost nova_compute[280321]: 2026-02-23 10:07:33.642 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 05:07:33 localhost nova_compute[280321]: 2026-02-23 10:07:33.662 280325 DEBUG 
oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 05:07:34 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 05:07:34 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3036817684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 05:07:34 localhost nova_compute[280321]: 2026-02-23 10:07:34.118 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 05:07:34 localhost nova_compute[280321]: 2026-02-23 10:07:34.124 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 05:07:34 localhost nova_compute[280321]: 2026-02-23 10:07:34.141 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 05:07:34 localhost nova_compute[280321]: 2026-02-23 10:07:34.143 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 05:07:34 localhost nova_compute[280321]: 2026-02-23 10:07:34.144 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:07:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:07:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:07:35 localhost nova_compute[280321]: 2026-02-23 10:07:35.118 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:07:35 localhost nova_compute[280321]: 2026-02-23 10:07:35.118 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:07:35 localhost nova_compute[280321]: 2026-02-23 10:07:35.119 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:07:35 localhost nova_compute[280321]: 2026-02-23 10:07:35.119 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 05:07:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:07:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:07:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:07:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:07:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 05:07:35 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 05:07:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 05:07:35 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 05:07:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 23 05:07:35 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev c0176c50-00e5-4941-844e-c72548896cf1 (Updating node-proxy deployment (+3 -> 3)) Feb 23 05:07:35 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev c0176c50-00e5-4941-844e-c72548896cf1 (Updating node-proxy deployment (+3 -> 3)) 
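The inventory dict the scheduler report client logs above ('VCPU', 'MEMORY_MB', 'DISK_GB') feeds Placement, where usable capacity is derived as (total - reserved) * allocation_ratio — a sketch of that arithmetic using the exact values from this log (the formula is the commonly documented Placement behavior, stated here as an assumption):

```python
# Inventory values copied from the report.py entry in this log.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 41, "reserved": 1, "allocation_ratio": 1.0},
}

def capacity(inv: dict) -> dict:
    """Assumed Placement capacity formula: (total - reserved) * allocation_ratio."""
    return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
            for rc, v in inv.items()}

caps = capacity(inventory)
# With allocation_ratio 16.0, the 8 physical vCPUs schedule as 128.
```

This explains why an 8-vCPU host with 0 allocated vCPUs ("Total usable vcpus: 8, total allocated vcpus: 0") can still accept far more than 8 guest vCPUs.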
Feb 23 05:07:35 localhost ceph-mgr[285904]: [progress INFO root] Completed event c0176c50-00e5-4941-844e-c72548896cf1 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 23 05:07:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 05:07:35 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 05:07:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 63 KiB/s wr, 3 op/s Feb 23 05:07:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e253 e253: 6 total, 6 up, 6 in Feb 23 05:07:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:07:35 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 05:07:35 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:07:35 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 05:07:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 05:07:36 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "86119af1-6e98-43f4-a09a-0b1717aea16f_c09c305a-0055-4b92-9393-c17a5f36cbfe", "force": true, "format": "json"}]: dispatch Feb 23 05:07:36 
localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:86119af1-6e98-43f4-a09a-0b1717aea16f_c09c305a-0055-4b92-9393-c17a5f36cbfe, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:36 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' Feb 23 05:07:36 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta' Feb 23 05:07:36 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:86119af1-6e98-43f4-a09a-0b1717aea16f_c09c305a-0055-4b92-9393-c17a5f36cbfe, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:36 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "86119af1-6e98-43f4-a09a-0b1717aea16f", "force": true, "format": "json"}]: dispatch Feb 23 05:07:36 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:86119af1-6e98-43f4-a09a-0b1717aea16f, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:36 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' Feb 23 05:07:36 localhost ceph-mgr[285904]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta' Feb 23 05:07:36 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:86119af1-6e98-43f4-a09a-0b1717aea16f, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:36 localhost nova_compute[280321]: 2026-02-23 10:07:36.312 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:07:36 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:07:36 localhost nova_compute[280321]: 2026-02-23 10:07:36.888 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:07:36 localhost nova_compute[280321]: 2026-02-23 10:07:36.908 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:07:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v666: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 86 KiB/s wr, 6 op/s Feb 23 05:07:37 localhost nova_compute[280321]: 2026-02-23 10:07:37.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:07:38 
localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e254 e254: 6 total, 6 up, 6 in Feb 23 05:07:38 localhost nova_compute[280321]: 2026-02-23 10:07:38.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:07:39 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "d873c3c1-7f9f-4457-b00a-3b07bd8d3452", "format": "json"}]: dispatch Feb 23 05:07:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d873c3c1-7f9f-4457-b00a-3b07bd8d3452, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:39 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d873c3c1-7f9f-4457-b00a-3b07bd8d3452, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v668: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 895 B/s rd, 46 KiB/s wr, 5 op/s Feb 23 05:07:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e255 e255: 6 total, 6 up, 6 in Feb 23 05:07:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:07:41 localhost nova_compute[280321]: 2026-02-23 10:07:41.315 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:07:41 localhost nova_compute[280321]: 2026-02-23 10:07:41.318 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:07:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v670: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 118 KiB/s wr, 8 op/s Feb 23 05:07:42 localhost podman[241086]: time="2026-02-23T10:07:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 05:07:42 localhost podman[241086]: @ - - [23/Feb/2026:10:07:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 05:07:42 localhost podman[241086]: @ - - [23/Feb/2026:10:07:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17821 "" "Go-http-client/1.1" Feb 23 05:07:42 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "d873c3c1-7f9f-4457-b00a-3b07bd8d3452_4d13e6f8-69ff-4b98-871c-1442174b91a7", "force": true, "format": "json"}]: dispatch Feb 23 05:07:42 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d873c3c1-7f9f-4457-b00a-3b07bd8d3452_4d13e6f8-69ff-4b98-871c-1442174b91a7, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:42 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' Feb 23 05:07:42 localhost ceph-mgr[285904]: [volumes 
INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta' Feb 23 05:07:42 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d873c3c1-7f9f-4457-b00a-3b07bd8d3452_4d13e6f8-69ff-4b98-871c-1442174b91a7, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:42 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "d873c3c1-7f9f-4457-b00a-3b07bd8d3452", "force": true, "format": "json"}]: dispatch Feb 23 05:07:42 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d873c3c1-7f9f-4457-b00a-3b07bd8d3452, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:42 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' Feb 23 05:07:42 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta' Feb 23 05:07:42 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d873c3c1-7f9f-4457-b00a-3b07bd8d3452, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v671: 177 
pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.0 KiB/s rd, 90 KiB/s wr, 7 op/s Feb 23 05:07:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v672: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 43 KiB/s wr, 2 op/s Feb 23 05:07:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:07:46 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "509d0785-aae6-4ab5-ae40-f641cfde0067", "format": "json"}]: dispatch Feb 23 05:07:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:509d0785-aae6-4ab5-ae40-f641cfde0067, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:46 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:509d0785-aae6-4ab5-ae40-f641cfde0067, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:46 localhost nova_compute[280321]: 2026-02-23 10:07:46.365 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:07:46 localhost nova_compute[280321]: 2026-02-23 10:07:46.366 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:07:46 localhost nova_compute[280321]: 2026-02-23 10:07:46.366 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5047 ms, sending inactivity probe run 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:07:46 localhost nova_compute[280321]: 2026-02-23 10:07:46.367 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:07:46 localhost nova_compute[280321]: 2026-02-23 10:07:46.369 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:07:46 localhost nova_compute[280321]: 2026-02-23 10:07:46.369 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:07:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 05:07:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. Feb 23 05:07:47 localhost systemd[1]: tmp-crun.7cjnxv.mount: Deactivated successfully. Feb 23 05:07:47 localhost podman[327934]: 2026-02-23 10:07:47.012957297 +0000 UTC m=+0.083095331 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, release=1770267347, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, version=9.7, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 23 05:07:47 localhost podman[327934]: 2026-02-23 10:07:47.027816472 +0000 UTC m=+0.097954546 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, release=1770267347, io.buildah.version=1.33.7, version=9.7, architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=) Feb 23 05:07:47 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 05:07:47 localhost podman[327933]: 2026-02-23 10:07:47.113139771 +0000 UTC m=+0.186208075 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 23 05:07:47 localhost podman[327933]: 2026-02-23 10:07:47.126938374 +0000 UTC m=+0.200006638 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': 
['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 05:07:47 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 05:07:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 439 B/s rd, 69 KiB/s wr, 4 op/s Feb 23 05:07:48 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e256 e256: 6 total, 6 up, 6 in Feb 23 05:07:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:07:48.322 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:07:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:07:48.323 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:07:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:07:48.323 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:07:49 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e257 e257: 6 total, 6 up, 6 in Feb 23 05:07:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 37 KiB/s wr, 2 op/s Feb 23 05:07:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "509d0785-aae6-4ab5-ae40-f641cfde0067_7f4a989f-639e-4dce-8f6e-7269130fc579", "force": true, "format": "json"}]: dispatch Feb 23 05:07:50 
localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:509d0785-aae6-4ab5-ae40-f641cfde0067_7f4a989f-639e-4dce-8f6e-7269130fc579, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' Feb 23 05:07:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta' Feb 23 05:07:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:509d0785-aae6-4ab5-ae40-f641cfde0067_7f4a989f-639e-4dce-8f6e-7269130fc579, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:50 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "509d0785-aae6-4ab5-ae40-f641cfde0067", "force": true, "format": "json"}]: dispatch Feb 23 05:07:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:509d0785-aae6-4ab5-ae40-f641cfde0067, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:50 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' Feb 23 05:07:50 localhost ceph-mgr[285904]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta' Feb 23 05:07:50 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 23 05:07:50 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 4777 writes, 36K keys, 4777 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.05 MB/s#012Cumulative WAL: 4777 writes, 4777 syncs, 1.00 writes per sync, written: 0.06 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2474 writes, 13K keys, 2474 commit groups, 1.0 writes per commit group, ingest: 17.64 MB, 0.03 MB/s#012Interval WAL: 2474 writes, 2474 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 191.5 0.22 0.11 17 0.013 0 0 0.0 0.0#012 L6 1/0 17.50 MB 0.0 0.3 0.0 0.3 0.3 0.0 0.0 6.6 225.3 207.3 1.32 0.76 16 0.083 211K 8293 0.0 0.0#012 Sum 1/0 17.50 MB 0.0 0.3 0.0 0.3 0.3 0.1 0.0 7.6 193.7 205.1 1.54 0.87 33 0.047 211K 8293 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 14.1 192.8 194.7 0.76 0.45 16 0.048 112K 4300 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) 
KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.3 0.0 0.3 0.3 0.0 0.0 0.0 225.3 207.3 1.32 0.76 16 0.083 211K 8293 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 193.9 0.21 0.11 16 0.013 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.040, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.31 GB write, 0.26 MB/s write, 0.29 GB read, 0.25 MB/s read, 1.5 seconds#012Interval compaction: 0.14 GB write, 0.25 MB/s write, 0.14 GB read, 0.24 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564eb1551350#2 capacity: 304.00 MB usage: 24.82 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000168 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1376,23.43 MB,7.70778%) FilterBlock(33,614.98 KB,0.197556%) IndexBlock(33,803.89 KB,0.25824%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Feb 23 05:07:50 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:509d0785-aae6-4ab5-ae40-f641cfde0067, 
sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < "" Feb 23 05:07:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:07:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 05:07:51 localhost podman[327976]: 2026-02-23 10:07:51.232218419 +0000 UTC m=+0.086060832 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Feb 23 05:07:51 localhost 
podman[327976]: 2026-02-23 10:07:51.320064397 +0000 UTC m=+0.173906850 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260216, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:07:51 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 05:07:51 localhost nova_compute[280321]: 2026-02-23 10:07:51.372 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:07:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v677: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 71 KiB/s wr, 4 op/s
Feb 23 05:07:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 71 KiB/s wr, 4 op/s
Feb 23 05:07:54 localhost sshd[328001]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 05:07:54 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "fbb57434-352b-4604-82ba-6268d7f75f30", "format": "json"}]: dispatch
Feb 23 05:07:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fbb57434-352b-4604-82ba-6268d7f75f30, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:07:54 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fbb57434-352b-4604-82ba-6268d7f75f30, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:07:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e258 e258: 6 total, 6 up, 6 in
Feb 23 05:07:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 37 KiB/s wr, 2 op/s
Feb 23 05:07:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:07:56 localhost nova_compute[280321]: 2026-02-23 10:07:56.374 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:07:56 localhost nova_compute[280321]: 2026-02-23 10:07:56.377 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:07:56 localhost nova_compute[280321]: 2026-02-23 10:07:56.377 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 23 05:07:56 localhost nova_compute[280321]: 2026-02-23 10:07:56.377 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:07:56 localhost nova_compute[280321]: 2026-02-23 10:07:56.407 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:07:56 localhost nova_compute[280321]: 2026-02-23 10:07:56.408 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:07:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v681: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 246 B/s rd, 51 KiB/s wr, 3 op/s
Feb 23 05:07:58 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e259 e259: 6 total, 6 up, 6 in
Feb 23 05:07:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "fbb57434-352b-4604-82ba-6268d7f75f30_d4ac2c99-63f7-4931-9b29-6a1792d5f0fd", "force": true, "format": "json"}]: dispatch
Feb 23 05:07:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fbb57434-352b-4604-82ba-6268d7f75f30_d4ac2c99-63f7-4931-9b29-6a1792d5f0fd, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:07:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp'
Feb 23 05:07:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta'
Feb 23 05:07:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fbb57434-352b-4604-82ba-6268d7f75f30_d4ac2c99-63f7-4931-9b29-6a1792d5f0fd, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:07:58 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "fbb57434-352b-4604-82ba-6268d7f75f30", "force": true, "format": "json"}]: dispatch
Feb 23 05:07:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fbb57434-352b-4604-82ba-6268d7f75f30, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:07:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp'
Feb 23 05:07:58 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta'
Feb 23 05:07:58 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fbb57434-352b-4604-82ba-6268d7f75f30, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:07:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 19 KiB/s wr, 1 op/s
Feb 23 05:07:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.
Feb 23 05:07:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.
Feb 23 05:08:00 localhost podman[328003]: 2026-02-23 10:08:00.013876028 +0000 UTC m=+0.085313879 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 23 05:08:00 localhost podman[328003]: 2026-02-23 10:08:00.024480803 +0000 UTC m=+0.095918654 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 23 05:08:00 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully.
Feb 23 05:08:00 localhost systemd[1]: tmp-crun.NBqLpE.mount: Deactivated successfully.
Feb 23 05:08:00 localhost podman[328004]: 2026-02-23 10:08:00.076973199 +0000 UTC m=+0.144761269 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 05:08:00 localhost podman[328004]: 2026-02-23 10:08:00.115812967 +0000 UTC m=+0.183601027 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 23 05:08:00 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully.
Feb 23 05:08:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:08:01 localhost nova_compute[280321]: 2026-02-23 10:08:01.409 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:08:01 localhost nova_compute[280321]: 2026-02-23 10:08:01.411 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:08:01 localhost nova_compute[280321]: 2026-02-23 10:08:01.411 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 23 05:08:01 localhost nova_compute[280321]: 2026-02-23 10:08:01.411 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:08:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 63 KiB/s wr, 3 op/s
Feb 23 05:08:01 localhost nova_compute[280321]: 2026-02-23 10:08:01.441 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:08:01 localhost nova_compute[280321]: 2026-02-23 10:08:01.442 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:08:01 localhost openstack_network_exporter[243519]: ERROR 10:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 23 05:08:01 localhost openstack_network_exporter[243519]:
Feb 23 05:08:01 localhost openstack_network_exporter[243519]: ERROR 10:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 23 05:08:01 localhost openstack_network_exporter[243519]:
Feb 23 05:08:03 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "4660794f-8745-4cec-b528-d5d739724996_30b563b9-0106-405d-8770-640aaf2912d1", "force": true, "format": "json"}]: dispatch
Feb 23 05:08:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4660794f-8745-4cec-b528-d5d739724996_30b563b9-0106-405d-8770-640aaf2912d1, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:08:03 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp'
Feb 23 05:08:03 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta'
Feb 23 05:08:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4660794f-8745-4cec-b528-d5d739724996_30b563b9-0106-405d-8770-640aaf2912d1, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:08:03 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "snap_name": "4660794f-8745-4cec-b528-d5d739724996", "force": true, "format": "json"}]: dispatch
Feb 23 05:08:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4660794f-8745-4cec-b528-d5d739724996, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:08:03 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp'
Feb 23 05:08:03 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta.tmp' to config b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e/.meta'
Feb 23 05:08:03 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4660794f-8745-4cec-b528-d5d739724996, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:08:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 494 B/s rd, 62 KiB/s wr, 4 op/s
Feb 23 05:08:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.
Feb 23 05:08:04 localhost podman[328040]: 2026-02-23 10:08:04.022586152 +0000 UTC m=+0.099401731 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Feb 23 05:08:04 localhost podman[328040]: 2026-02-23 10:08:04.037901201 +0000 UTC m=+0.114716760 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 23 05:08:04 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully.
Feb 23 05:08:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_10:08:05
Feb 23 05:08:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 23 05:08:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap
Feb 23 05:08:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['manila_data', 'vms', 'manila_metadata', 'volumes', '.mgr', 'backups', 'images']
Feb 23 05:08:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes
Feb 23 05:08:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections..
Feb 23 05:08:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: []
Feb 23 05:08:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e260 e260: 6 total, 6 up, 6 in
Feb 23 05:08:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections..
Feb 23 05:08:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: []
Feb 23 05:08:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections..
Feb 23 05:08:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: []
Feb 23 05:08:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v687: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 45 KiB/s wr, 2 op/s
Feb 23 05:08:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust
Feb 23 05:08:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:08:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Feb 23 05:08:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:08:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32)
Feb 23 05:08:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:08:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014869268216080402 of space, bias 1.0, pg target 0.2968897220477387 quantized to 32 (current 32)
Feb 23 05:08:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:08:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Feb 23 05:08:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:08:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Feb 23 05:08:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:08:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00010850694444444444 quantized to 32 (current 32)
Feb 23 05:08:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 23 05:08:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0023260835427135677 of space, bias 4.0, pg target 1.8515625 quantized to 16 (current 16)
Feb 23 05:08:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 23 05:08:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 23 05:08:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 23 05:08:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 23 05:08:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 23 05:08:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 23 05:08:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 23 05:08:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 23 05:08:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 23 05:08:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 23 05:08:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:08:06 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e261 e261: 6 total, 6 up, 6 in
Feb 23 05:08:06 localhost nova_compute[280321]: 2026-02-23 10:08:06.443 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:08:06 localhost nova_compute[280321]: 2026-02-23 10:08:06.445 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:08:06 localhost nova_compute[280321]: 2026-02-23 10:08:06.445 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 23 05:08:06 localhost nova_compute[280321]: 2026-02-23 10:08:06.445 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:08:06 localhost nova_compute[280321]: 2026-02-23 10:08:06.469 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:08:06 localhost nova_compute[280321]: 2026-02-23 10:08:06.470 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:08:06 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3db593df-73a6-45c0-b329-a0101e95070e", "format": "json"}]: dispatch
Feb 23 05:08:06 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3db593df-73a6-45c0-b329-a0101e95070e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:08:06 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3db593df-73a6-45c0-b329-a0101e95070e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 23 05:08:06 localhost ceph-f1fea371-cb69-578d-a3d0-b5c472a84b46-mgr-np0005626465-hlpkwo[285900]: 2026-02-23T10:08:06.516+0000 7fc3ba4ad640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3db593df-73a6-45c0-b329-a0101e95070e' of type subvolume
Feb 23 05:08:06 localhost ceph-mgr[285904]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3db593df-73a6-45c0-b329-a0101e95070e' of type subvolume
Feb 23 05:08:06 localhost ceph-mgr[285904]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3db593df-73a6-45c0-b329-a0101e95070e", "force": true, "format": "json"}]: dispatch
Feb 23 05:08:06 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:08:06 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3db593df-73a6-45c0-b329-a0101e95070e'' moved to trashcan
Feb 23 05:08:06 localhost ceph-mgr[285904]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 23 05:08:06 localhost ceph-mgr[285904]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3db593df-73a6-45c0-b329-a0101e95070e, vol_name:cephfs) < ""
Feb 23 05:08:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 75 KiB/s wr, 5 op/s
Feb 23 05:08:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 30 KiB/s wr, 2 op/s
Feb 23 05:08:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:08:10 localhost ovn_metadata_agent[161837]: 2026-02-23 10:08:10.689 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:68:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:19:65:94:49:af'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 23 05:08:10 localhost ovn_metadata_agent[161837]: 2026-02-23 10:08:10.691 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 23 05:08:10 localhost nova_compute[280321]: 2026-02-23 10:08:10.692 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:08:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v691: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 73 KiB/s wr, 4 op/s
Feb 23 05:08:11 localhost nova_compute[280321]: 2026-02-23 10:08:11.472 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:08:12 localhost podman[241086]: time="2026-02-23T10:08:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 23 05:08:12 localhost podman[241086]: @ - - [23/Feb/2026:10:08:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1"
Feb 23 05:08:12 localhost podman[241086]: @ - - [23/Feb/2026:10:08:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17833 "" "Go-http-client/1.1"
Feb 23 05:08:13 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e262 e262: 6 total, 6 up, 6 in
Feb 23 05:08:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 73 KiB/s wr, 4 op/s
Feb 23 05:08:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 442 B/s rd, 63 KiB/s wr, 4 op/s
Feb 23 05:08:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:08:16 localhost nova_compute[280321]: 2026-02-23 10:08:16.473 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:08:16 localhost nova_compute[280321]: 2026-02-23 10:08:16.474 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 23 05:08:16 localhost nova_compute[280321]: 2026-02-23 10:08:16.475 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 23 05:08:16 localhost nova_compute[280321]: 2026-02-23 10:08:16.475 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:08:16 localhost nova_compute[280321]: 2026-02-23 10:08:16.506 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:08:16 localhost nova_compute[280321]: 2026-02-23 10:08:16.506 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 23 05:08:17 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s
Feb 23 05:08:17 localhost ovn_metadata_agent[161837]: 2026-02-23 10:08:17.694 161842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a05de4d1-e729-4c33-bedf-496279b1b686, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 23 05:08:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.
Feb 23 05:08:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.
Feb 23 05:08:18 localhost podman[328063]: 2026-02-23 10:08:18.011001036 +0000 UTC m=+0.085091404 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 23 05:08:18 localhost podman[328063]: 2026-02-23 10:08:18.024799867 +0000 UTC m=+0.098890285 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 23 05:08:18 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. 
Feb 23 05:08:18 localhost podman[328064]: 2026-02-23 10:08:18.108175788 +0000 UTC m=+0.179374807 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, release=1770267347, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., distribution-scope=public, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, io.openshift.expose-services=, config_id=openstack_network_exporter) Feb 23 05:08:18 localhost podman[328064]: 2026-02-23 10:08:18.122762024 +0000 UTC m=+0.193961083 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': 
'/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, architecture=x86_64, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, version=9.7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0. Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:18.130618) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61 Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841298130662, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2084, "num_deletes": 258, "total_data_size": 2997750, "memory_usage": 3036056, "flush_reason": "Manual Compaction"} Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started Feb 23 05:08:18 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841298141054, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 1470944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35460, "largest_seqno": 37538, "table_properties": {"data_size": 1464544, "index_size": 3293, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 17637, "raw_average_key_size": 21, "raw_value_size": 1450143, "raw_average_value_size": 1794, "num_data_blocks": 144, "num_entries": 808, "num_filter_entries": 808, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771841171, "oldest_key_time": 1771841171, "file_creation_time": 1771841298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}} Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 10578 microseconds, and 4997 cpu microseconds. Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:18.141182) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 1470944 bytes OK Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:18.141225) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:18.143265) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:18.143295) EVENT_LOG_v1 {"time_micros": 1771841298143286, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:18.143336) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 2988184, prev total WAL file size 2988933, number of live WAL files 2. Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:18.144635) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034323533' seq:72057594037927935, type:22 .. 
'6D6772737461740034353035' seq:0, type:0; will stop at (end) Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(1436KB)], [60(17MB)] Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841298144839, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 19819035, "oldest_snapshot_seqno": -1} Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 14548 keys, 18170261 bytes, temperature: kUnknown Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841298246249, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 18170261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18088016, "index_size": 44768, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36421, "raw_key_size": 390711, "raw_average_key_size": 26, "raw_value_size": 17841575, "raw_average_value_size": 1226, "num_data_blocks": 1656, "num_entries": 14548, "num_filter_entries": 14548, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; 
strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771841298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}} Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:18.246660) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 18170261 bytes Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:18.248479) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.2 rd, 178.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 17.5 +0.0 blob) out(17.3 +0.0 blob), read-write-amplify(25.8) write-amplify(12.4) OK, records in: 15025, records dropped: 477 output_compression: NoCompression Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:18.248509) EVENT_LOG_v1 {"time_micros": 1771841298248494, "job": 36, "event": "compaction_finished", "compaction_time_micros": 101550, "compaction_time_cpu_micros": 50423, "output_level": 6, "num_output_files": 1, "total_output_size": 18170261, "num_input_records": 15025, "num_output_records": 14548, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005626465/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841298249218, "job": 36, "event": "table_file_deletion", "file_number": 62} Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841298252574, "job": 36, "event": "table_file_deletion", "file_number": 60} Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:18.144499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:18.252735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:18.252745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:18.252749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:18.252754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:08:18 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:18.252757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 23 05:08:18 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:08:18.833 263679 INFO neutron.agent.linux.ip_lib [None req-40ce775b-7085-46b4-972a-72bf34e3fcd3 - - - - - -] Device tapc1b954bd-f8 cannot 
be used as it has no MAC address#033[00m Feb 23 05:08:18 localhost nova_compute[280321]: 2026-02-23 10:08:18.913 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:18 localhost kernel: device tapc1b954bd-f8 entered promiscuous mode Feb 23 05:08:18 localhost NetworkManager[5987]: [1771841298.9200] manager: (tapc1b954bd-f8): new Generic device (/org/freedesktop/NetworkManager/Devices/74) Feb 23 05:08:18 localhost ovn_controller[155966]: 2026-02-23T10:08:18Z|00443|binding|INFO|Claiming lport c1b954bd-f899-404a-85f0-a9e7dfe0767f for this chassis. Feb 23 05:08:18 localhost nova_compute[280321]: 2026-02-23 10:08:18.922 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:18 localhost ovn_controller[155966]: 2026-02-23T10:08:18Z|00444|binding|INFO|c1b954bd-f899-404a-85f0-a9e7dfe0767f: Claiming unknown Feb 23 05:08:18 localhost systemd-udevd[328116]: Network interface NamePolicy= disabled on kernel command line. 
Feb 23 05:08:18 localhost ovn_metadata_agent[161837]: 2026-02-23 10:08:18.934 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-318850d1-4e1e-4f92-a937-415ef6070b59', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-318850d1-4e1e-4f92-a937-415ef6070b59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef8d6ac80bb4c688e6639db79eafe94', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f014d09f-121a-4353-8841-0f1925843bd2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c1b954bd-f899-404a-85f0-a9e7dfe0767f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:08:18 localhost ovn_metadata_agent[161837]: 2026-02-23 10:08:18.937 161842 INFO neutron.agent.ovn.metadata.agent [-] Port c1b954bd-f899-404a-85f0-a9e7dfe0767f in datapath 318850d1-4e1e-4f92-a937-415ef6070b59 bound to our chassis#033[00m Feb 23 05:08:18 localhost ovn_metadata_agent[161837]: 2026-02-23 10:08:18.944 161842 DEBUG neutron.agent.ovn.metadata.agent [-] Port 68003988-5fc2-4c0f-80b3-817d1f679f18 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 23 05:08:18 localhost ovn_metadata_agent[161837]: 2026-02-23 10:08:18.945 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 318850d1-4e1e-4f92-a937-415ef6070b59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 05:08:18 localhost ovn_metadata_agent[161837]: 2026-02-23 10:08:18.946 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[86786586-597a-46d9-a5fa-3aee948104be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:08:18 localhost journal[229268]: ethtool ioctl error on tapc1b954bd-f8: No such device Feb 23 05:08:18 localhost journal[229268]: ethtool ioctl error on tapc1b954bd-f8: No such device Feb 23 05:08:18 localhost ovn_controller[155966]: 2026-02-23T10:08:18Z|00445|binding|INFO|Setting lport c1b954bd-f899-404a-85f0-a9e7dfe0767f ovn-installed in OVS Feb 23 05:08:18 localhost ovn_controller[155966]: 2026-02-23T10:08:18Z|00446|binding|INFO|Setting lport c1b954bd-f899-404a-85f0-a9e7dfe0767f up in Southbound Feb 23 05:08:18 localhost nova_compute[280321]: 2026-02-23 10:08:18.962 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:18 localhost journal[229268]: ethtool ioctl error on tapc1b954bd-f8: No such device Feb 23 05:08:18 localhost journal[229268]: ethtool ioctl error on tapc1b954bd-f8: No such device Feb 23 05:08:18 localhost journal[229268]: ethtool ioctl error on tapc1b954bd-f8: No such device Feb 23 05:08:18 localhost journal[229268]: ethtool ioctl error on tapc1b954bd-f8: No such device Feb 23 05:08:18 localhost journal[229268]: ethtool ioctl error on tapc1b954bd-f8: No such device Feb 23 05:08:18 localhost journal[229268]: ethtool ioctl error on tapc1b954bd-f8: No such device Feb 
23 05:08:18 localhost nova_compute[280321]: 2026-02-23 10:08:18.993 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:19 localhost nova_compute[280321]: 2026-02-23 10:08:19.019 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:19 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v696: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s Feb 23 05:08:19 localhost podman[328187]: Feb 23 05:08:19 localhost podman[328187]: 2026-02-23 10:08:19.857422878 +0000 UTC m=+0.096806193 container create 6134c8ae4330b06bd86a6f368b366b3cd221725301d46ca4f3e905c1bc62174e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-318850d1-4e1e-4f92-a937-415ef6070b59, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 23 05:08:19 localhost systemd[1]: Started libpod-conmon-6134c8ae4330b06bd86a6f368b366b3cd221725301d46ca4f3e905c1bc62174e.scope. Feb 23 05:08:19 localhost podman[328187]: 2026-02-23 10:08:19.812230234 +0000 UTC m=+0.051613579 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 23 05:08:19 localhost systemd[1]: Started libcrun container. 
Feb 23 05:08:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71b8aeafb17ecc72d6f3061085a9965d1c120cdedf968e603d2de1ed3f3577b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 23 05:08:19 localhost podman[328187]: 2026-02-23 10:08:19.929322046 +0000 UTC m=+0.168705351 container init 6134c8ae4330b06bd86a6f368b366b3cd221725301d46ca4f3e905c1bc62174e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-318850d1-4e1e-4f92-a937-415ef6070b59, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:08:19 localhost podman[328187]: 2026-02-23 10:08:19.938699803 +0000 UTC m=+0.178083108 container start 6134c8ae4330b06bd86a6f368b366b3cd221725301d46ca4f3e905c1bc62174e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-318850d1-4e1e-4f92-a937-415ef6070b59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260216) Feb 23 05:08:19 localhost dnsmasq[328205]: started, version 2.85 cachesize 150 Feb 23 05:08:19 localhost dnsmasq[328205]: DNS service limited to local subnets Feb 23 05:08:19 localhost dnsmasq[328205]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 23 05:08:19 localhost dnsmasq[328205]: warning: no upstream servers 
configured Feb 23 05:08:19 localhost dnsmasq-dhcp[328205]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 23 05:08:19 localhost dnsmasq[328205]: read /var/lib/neutron/dhcp/318850d1-4e1e-4f92-a937-415ef6070b59/addn_hosts - 0 addresses Feb 23 05:08:19 localhost dnsmasq-dhcp[328205]: read /var/lib/neutron/dhcp/318850d1-4e1e-4f92-a937-415ef6070b59/host Feb 23 05:08:19 localhost dnsmasq-dhcp[328205]: read /var/lib/neutron/dhcp/318850d1-4e1e-4f92-a937-415ef6070b59/opts Feb 23 05:08:20 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:08:20.084 263679 INFO neutron.agent.dhcp.agent [None req-09c1c4b9-c031-41f7-a59b-f7e1e0b592ba - - - - - -] DHCP configuration for ports {'f0436c28-f168-44c1-b6ed-1f6ebd32854c'} is completed#033[00m Feb 23 05:08:20 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:08:20.585 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:08:20Z, description=, device_id=bd79233f-62a2-4798-82cd-a3699b0b3389, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=bd763695-1f01-4f13-88f3-1338226db10e, ip_allocation=immediate, mac_address=fa:16:3e:5c:f2:9e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:08:16Z, description=, dns_domain=, id=318850d1-4e1e-4f92-a937-415ef6070b59, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingNegativeTest-569676473-network, port_security_enabled=True, project_id=6ef8d6ac80bb4c688e6639db79eafe94, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=8646, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3943, status=ACTIVE, 
subnets=['ff356a57-1b97-4153-a68f-f5d9007d796d'], tags=[], tenant_id=6ef8d6ac80bb4c688e6639db79eafe94, updated_at=2026-02-23T10:08:17Z, vlan_transparent=None, network_id=318850d1-4e1e-4f92-a937-415ef6070b59, port_security_enabled=False, project_id=6ef8d6ac80bb4c688e6639db79eafe94, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3950, status=DOWN, tags=[], tenant_id=6ef8d6ac80bb4c688e6639db79eafe94, updated_at=2026-02-23T10:08:20Z on network 318850d1-4e1e-4f92-a937-415ef6070b59#033[00m Feb 23 05:08:20 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:08:20 localhost dnsmasq[328205]: read /var/lib/neutron/dhcp/318850d1-4e1e-4f92-a937-415ef6070b59/addn_hosts - 1 addresses Feb 23 05:08:20 localhost dnsmasq-dhcp[328205]: read /var/lib/neutron/dhcp/318850d1-4e1e-4f92-a937-415ef6070b59/host Feb 23 05:08:20 localhost podman[328222]: 2026-02-23 10:08:20.775677371 +0000 UTC m=+0.054500937 container kill 6134c8ae4330b06bd86a6f368b366b3cd221725301d46ca4f3e905c1bc62174e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-318850d1-4e1e-4f92-a937-415ef6070b59, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:08:20 localhost dnsmasq-dhcp[328205]: read /var/lib/neutron/dhcp/318850d1-4e1e-4f92-a937-415ef6070b59/opts Feb 23 05:08:21 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:08:21.025 263679 INFO neutron.agent.dhcp.agent [None req-f71def6c-94cd-4481-b563-5b96afb91d84 - - - - - -] DHCP configuration for ports 
{'bd763695-1f01-4f13-88f3-1338226db10e'} is completed#033[00m Feb 23 05:08:21 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 8.8 KiB/s wr, 0 op/s Feb 23 05:08:21 localhost nova_compute[280321]: 2026-02-23 10:08:21.508 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:21 localhost nova_compute[280321]: 2026-02-23 10:08:21.512 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:08:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. Feb 23 05:08:22 localhost podman[328242]: 2026-02-23 10:08:22.007290689 +0000 UTC m=+0.081699310 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:08:22 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:08:22.027 263679 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-23T10:08:20Z, description=, device_id=bd79233f-62a2-4798-82cd-a3699b0b3389, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=bd763695-1f01-4f13-88f3-1338226db10e, ip_allocation=immediate, mac_address=fa:16:3e:5c:f2:9e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-23T10:08:16Z, description=, dns_domain=, id=318850d1-4e1e-4f92-a937-415ef6070b59, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingNegativeTest-569676473-network, port_security_enabled=True, project_id=6ef8d6ac80bb4c688e6639db79eafe94, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=8646, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3943, status=ACTIVE, subnets=['ff356a57-1b97-4153-a68f-f5d9007d796d'], tags=[], tenant_id=6ef8d6ac80bb4c688e6639db79eafe94, updated_at=2026-02-23T10:08:17Z, vlan_transparent=None, network_id=318850d1-4e1e-4f92-a937-415ef6070b59, port_security_enabled=False, project_id=6ef8d6ac80bb4c688e6639db79eafe94, qos_network_policy_id=None, 
qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3950, status=DOWN, tags=[], tenant_id=6ef8d6ac80bb4c688e6639db79eafe94, updated_at=2026-02-23T10:08:20Z on network 318850d1-4e1e-4f92-a937-415ef6070b59#033[00m Feb 23 05:08:22 localhost podman[328242]: 2026-02-23 10:08:22.07695141 +0000 UTC m=+0.151360041 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Feb 23 05:08:22 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully. 
Feb 23 05:08:22 localhost dnsmasq[328205]: read /var/lib/neutron/dhcp/318850d1-4e1e-4f92-a937-415ef6070b59/addn_hosts - 1 addresses Feb 23 05:08:22 localhost podman[328284]: 2026-02-23 10:08:22.225838643 +0000 UTC m=+0.057069306 container kill 6134c8ae4330b06bd86a6f368b366b3cd221725301d46ca4f3e905c1bc62174e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-318850d1-4e1e-4f92-a937-415ef6070b59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216) Feb 23 05:08:22 localhost dnsmasq-dhcp[328205]: read /var/lib/neutron/dhcp/318850d1-4e1e-4f92-a937-415ef6070b59/host Feb 23 05:08:22 localhost dnsmasq-dhcp[328205]: read /var/lib/neutron/dhcp/318850d1-4e1e-4f92-a937-415ef6070b59/opts Feb 23 05:08:22 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:08:22.758 263679 INFO neutron.agent.dhcp.agent [None req-05b7491a-5351-4501-8fce-cc2e4596738c - - - - - -] DHCP configuration for ports {'bd763695-1f01-4f13-88f3-1338226db10e'} is completed#033[00m Feb 23 05:08:23 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v698: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 8.4 KiB/s wr, 0 op/s Feb 23 05:08:23 localhost ovn_controller[155966]: 2026-02-23T10:08:23Z|00447|ovn_bfd|INFO|Enabled BFD on interface ovn-5b0126-0 Feb 23 05:08:23 localhost ovn_controller[155966]: 2026-02-23T10:08:23Z|00448|ovn_bfd|INFO|Enabled BFD on interface ovn-585d62-0 Feb 23 05:08:23 localhost ovn_controller[155966]: 2026-02-23T10:08:23Z|00449|ovn_bfd|INFO|Enabled BFD on interface ovn-b9c72d-0 Feb 23 05:08:23 localhost nova_compute[280321]: 2026-02-23 10:08:23.498 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 
25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:23 localhost nova_compute[280321]: 2026-02-23 10:08:23.512 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:23 localhost nova_compute[280321]: 2026-02-23 10:08:23.517 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:23 localhost nova_compute[280321]: 2026-02-23 10:08:23.540 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:23 localhost nova_compute[280321]: 2026-02-23 10:08:23.547 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:23 localhost nova_compute[280321]: 2026-02-23 10:08:23.612 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:24 localhost nova_compute[280321]: 2026-02-23 10:08:24.483 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:25 localhost sshd[328307]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:08:25 localhost nova_compute[280321]: 2026-02-23 10:08:25.234 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:25 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v699: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 7.2 KiB/s wr, 0 op/s Feb 23 05:08:25 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 
318767104 Feb 23 05:08:25 localhost ovn_controller[155966]: 2026-02-23T10:08:25Z|00450|ovn_bfd|INFO|Disabled BFD on interface ovn-5b0126-0 Feb 23 05:08:25 localhost ovn_controller[155966]: 2026-02-23T10:08:25Z|00451|ovn_bfd|INFO|Disabled BFD on interface ovn-585d62-0 Feb 23 05:08:25 localhost ovn_controller[155966]: 2026-02-23T10:08:25Z|00452|ovn_bfd|INFO|Disabled BFD on interface ovn-b9c72d-0 Feb 23 05:08:25 localhost nova_compute[280321]: 2026-02-23 10:08:25.636 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:25 localhost nova_compute[280321]: 2026-02-23 10:08:25.651 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:25 localhost dnsmasq[328205]: read /var/lib/neutron/dhcp/318850d1-4e1e-4f92-a937-415ef6070b59/addn_hosts - 0 addresses Feb 23 05:08:25 localhost dnsmasq-dhcp[328205]: read /var/lib/neutron/dhcp/318850d1-4e1e-4f92-a937-415ef6070b59/host Feb 23 05:08:25 localhost podman[328325]: 2026-02-23 10:08:25.868186522 +0000 UTC m=+0.065110712 container kill 6134c8ae4330b06bd86a6f368b366b3cd221725301d46ca4f3e905c1bc62174e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-318850d1-4e1e-4f92-a937-415ef6070b59, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Feb 23 05:08:25 localhost dnsmasq-dhcp[328205]: read /var/lib/neutron/dhcp/318850d1-4e1e-4f92-a937-415ef6070b59/opts Feb 23 05:08:25 localhost systemd[1]: tmp-crun.eXk7sD.mount: Deactivated successfully. 
Feb 23 05:08:26 localhost nova_compute[280321]: 2026-02-23 10:08:26.040 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:26 localhost ovn_controller[155966]: 2026-02-23T10:08:26Z|00453|binding|INFO|Releasing lport c1b954bd-f899-404a-85f0-a9e7dfe0767f from this chassis (sb_readonly=0) Feb 23 05:08:26 localhost ovn_controller[155966]: 2026-02-23T10:08:26Z|00454|binding|INFO|Setting lport c1b954bd-f899-404a-85f0-a9e7dfe0767f down in Southbound Feb 23 05:08:26 localhost kernel: device tapc1b954bd-f8 left promiscuous mode Feb 23 05:08:26 localhost ovn_metadata_agent[161837]: 2026-02-23 10:08:26.048 161842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005626465.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp1de4d0e1-c40a-5f01-845a-39a56208b92c-318850d1-4e1e-4f92-a937-415ef6070b59', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-318850d1-4e1e-4f92-a937-415ef6070b59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef8d6ac80bb4c688e6639db79eafe94', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005626465.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f014d09f-121a-4353-8841-0f1925843bd2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c1b954bd-f899-404a-85f0-a9e7dfe0767f) 
old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 23 05:08:26 localhost ovn_metadata_agent[161837]: 2026-02-23 10:08:26.050 161842 INFO neutron.agent.ovn.metadata.agent [-] Port c1b954bd-f899-404a-85f0-a9e7dfe0767f in datapath 318850d1-4e1e-4f92-a937-415ef6070b59 unbound from our chassis#033[00m Feb 23 05:08:26 localhost ovn_metadata_agent[161837]: 2026-02-23 10:08:26.053 161842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 318850d1-4e1e-4f92-a937-415ef6070b59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 23 05:08:26 localhost ovn_metadata_agent[161837]: 2026-02-23 10:08:26.053 306186 DEBUG oslo.privsep.daemon [-] privsep: reply[03769a1d-b5c5-49e9-8f42-8a45f0b08b9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 23 05:08:26 localhost nova_compute[280321]: 2026-02-23 10:08:26.066 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:26 localhost nova_compute[280321]: 2026-02-23 10:08:26.513 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:27 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 7.2 KiB/s wr, 0 op/s Feb 23 05:08:28 localhost podman[328365]: 2026-02-23 10:08:28.193745977 +0000 UTC m=+0.066174774 container kill 6134c8ae4330b06bd86a6f368b366b3cd221725301d46ca4f3e905c1bc62174e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-318850d1-4e1e-4f92-a937-415ef6070b59, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0) Feb 23 05:08:28 localhost systemd[1]: tmp-crun.JVl6UF.mount: Deactivated successfully. Feb 23 05:08:28 localhost dnsmasq[328205]: exiting on receipt of SIGTERM Feb 23 05:08:28 localhost systemd[1]: libpod-6134c8ae4330b06bd86a6f368b366b3cd221725301d46ca4f3e905c1bc62174e.scope: Deactivated successfully. Feb 23 05:08:28 localhost podman[328379]: 2026-02-23 10:08:28.265262825 +0000 UTC m=+0.054425846 container died 6134c8ae4330b06bd86a6f368b366b3cd221725301d46ca4f3e905c1bc62174e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-318850d1-4e1e-4f92-a937-415ef6070b59, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:08:28 localhost systemd[1]: tmp-crun.LRUpla.mount: Deactivated successfully. 
Feb 23 05:08:28 localhost podman[328379]: 2026-02-23 10:08:28.302629268 +0000 UTC m=+0.091792249 container cleanup 6134c8ae4330b06bd86a6f368b366b3cd221725301d46ca4f3e905c1bc62174e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-318850d1-4e1e-4f92-a937-415ef6070b59, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 23 05:08:28 localhost systemd[1]: libpod-conmon-6134c8ae4330b06bd86a6f368b366b3cd221725301d46ca4f3e905c1bc62174e.scope: Deactivated successfully. Feb 23 05:08:28 localhost podman[328380]: 2026-02-23 10:08:28.353308037 +0000 UTC m=+0.139844868 container remove 6134c8ae4330b06bd86a6f368b366b3cd221725301d46ca4f3e905c1bc62174e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-318850d1-4e1e-4f92-a937-415ef6070b59, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS) Feb 23 05:08:28 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:08:28.970 263679 INFO neutron.agent.dhcp.agent [None req-5fcaa2f9-e6b8-4938-be5b-56afaab2fcd6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:08:29 localhost neutron_dhcp_agent[263675]: 2026-02-23 10:08:29.099 263679 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 23 05:08:29 localhost systemd[1]: 
var-lib-containers-storage-overlay-f71b8aeafb17ecc72d6f3061085a9965d1c120cdedf968e603d2de1ed3f3577b-merged.mount: Deactivated successfully. Feb 23 05:08:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6134c8ae4330b06bd86a6f368b366b3cd221725301d46ca4f3e905c1bc62174e-userdata-shm.mount: Deactivated successfully. Feb 23 05:08:29 localhost systemd[1]: run-netns-qdhcp\x2d318850d1\x2d4e1e\x2d4f92\x2da937\x2d415ef6070b59.mount: Deactivated successfully. Feb 23 05:08:29 localhost nova_compute[280321]: 2026-02-23 10:08:29.267 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:29 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v701: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.1 KiB/s wr, 0 op/s Feb 23 05:08:30 localhost sshd[328406]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:08:30 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:08:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db. Feb 23 05:08:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e. 
Feb 23 05:08:30 localhost podman[328408]: 2026-02-23 10:08:30.811010944 +0000 UTC m=+0.086096674 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 23 05:08:30 localhost 
podman[328408]: 2026-02-23 10:08:30.820975808 +0000 UTC m=+0.096061538 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 23 05:08:30 localhost systemd[1]: 
2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully. Feb 23 05:08:30 localhost podman[328409]: 2026-02-23 10:08:30.873108103 +0000 UTC m=+0.145512161 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 23 05:08:30 localhost nova_compute[280321]: 2026-02-23 10:08:30.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:08:30 localhost podman[328409]: 2026-02-23 10:08:30.911947231 +0000 UTC m=+0.184351259 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Feb 23 05:08:30 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully. Feb 23 05:08:31 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.1 KiB/s wr, 0 op/s Feb 23 05:08:31 localhost nova_compute[280321]: 2026-02-23 10:08:31.515 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:31 localhost openstack_network_exporter[243519]: ERROR 10:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 23 05:08:31 localhost openstack_network_exporter[243519]: Feb 23 05:08:31 localhost openstack_network_exporter[243519]: ERROR 10:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 23 05:08:31 localhost openstack_network_exporter[243519]: Feb 23 05:08:33 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 23 05:08:33 localhost nova_compute[280321]: 2026-02-23 10:08:33.891 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:08:33 localhost nova_compute[280321]: 2026-02-23 10:08:33.892 
280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:08:33 localhost nova_compute[280321]: 2026-02-23 10:08:33.909 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:08:33 localhost nova_compute[280321]: 2026-02-23 10:08:33.910 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:08:33 localhost nova_compute[280321]: 2026-02-23 10:08:33.910 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:08:33 localhost nova_compute[280321]: 2026-02-23 10:08:33.910 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Auditing locally available compute resources for np0005626465.localdomain (node: np0005626465.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 23 05:08:33 localhost nova_compute[280321]: 2026-02-23 10:08:33.911 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id 
openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 05:08:34 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 05:08:34 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/723157173' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 05:08:34 localhost nova_compute[280321]: 2026-02-23 10:08:34.347 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 05:08:34 localhost nova_compute[280321]: 2026-02-23 10:08:34.473 280325 WARNING nova.virt.libvirt.driver [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 23 05:08:34 localhost nova_compute[280321]: 2026-02-23 10:08:34.474 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Hypervisor/Node resource view: name=np0005626465.localdomain free_ram=11526MB free_disk=41.8366584777832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": 
"7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 23 05:08:34 localhost nova_compute[280321]: 2026-02-23 10:08:34.474 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:08:34 localhost nova_compute[280321]: 2026-02-23 10:08:34.475 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:08:34 localhost nova_compute[280321]: 2026-02-23 10:08:34.533 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 23 05:08:34 localhost nova_compute[280321]: 2026-02-23 10:08:34.533 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Final resource view: name=np0005626465.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 23 05:08:34 localhost nova_compute[280321]: 2026-02-23 10:08:34.554 280325 DEBUG 
oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 23 05:08:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d. Feb 23 05:08:34 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 23 05:08:34 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1665956422' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 23 05:08:35 localhost nova_compute[280321]: 2026-02-23 10:08:35.002 280325 DEBUG oslo_concurrency.processutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 23 05:08:35 localhost podman[328490]: 2026-02-23 10:08:35.006092125 +0000 UTC m=+0.078405439 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 23 05:08:35 localhost nova_compute[280321]: 2026-02-23 10:08:35.010 280325 DEBUG nova.compute.provider_tree [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 9df77b74-d7d6-46a8-93cb-cadec85557a4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 23 05:08:35 localhost podman[328490]: 2026-02-23 10:08:35.016832354 +0000 UTC m=+0.089145688 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 23 05:08:35 localhost nova_compute[280321]: 2026-02-23 10:08:35.025 280325 DEBUG nova.scheduler.client.report [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Inventory has not changed for provider 9df77b74-d7d6-46a8-93cb-cadec85557a4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 
'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 23 05:08:35 localhost nova_compute[280321]: 2026-02-23 10:08:35.028 280325 DEBUG nova.compute.resource_tracker [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Compute_service record updated for np0005626465.localdomain:np0005626465.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 23 05:08:35 localhost nova_compute[280321]: 2026-02-23 10:08:35.028 280325 DEBUG oslo_concurrency.lockutils [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:08:35 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 05:08:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:08:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:08:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:08:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:08:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 23 05:08:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )] Feb 23 05:08:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Feb 23 05:08:35 localhost ceph-mgr[285904]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Feb 23 05:08:35 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v704: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 23 05:08:35 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:08:36 localhost nova_compute[280321]: 2026-02-23 10:08:36.029 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:08:36 localhost nova_compute[280321]: 2026-02-23 10:08:36.029 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:08:36 localhost nova_compute[280321]: 2026-02-23 10:08:36.030 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 23 05:08:36 localhost nova_compute[280321]: 2026-02-23 10:08:36.030 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 23 05:08:36 localhost nova_compute[280321]: 2026-02-23 
10:08:36.044 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 23 05:08:36 localhost nova_compute[280321]: 2026-02-23 10:08:36.045 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:08:36 localhost nova_compute[280321]: 2026-02-23 10:08:36.045 280325 DEBUG nova.compute.manager [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 23 05:08:36 localhost nova_compute[280321]: 2026-02-23 10:08:36.518 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:36 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 23 05:08:36 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 23 05:08:36 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 23 05:08:36 localhost ceph-mon[296755]: log_channel(audit) log [INF] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 05:08:36 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) 
Feb 23 05:08:36 localhost ceph-mgr[285904]: [progress INFO root] update: starting ev cc4e81c9-4bdd-4fe2-aea4-5d109865adab (Updating node-proxy deployment (+3 -> 3)) Feb 23 05:08:36 localhost ceph-mgr[285904]: [progress INFO root] complete: finished ev cc4e81c9-4bdd-4fe2-aea4-5d109865adab (Updating node-proxy deployment (+3 -> 3)) Feb 23 05:08:36 localhost ceph-mgr[285904]: [progress INFO root] Completed event cc4e81c9-4bdd-4fe2-aea4-5d109865adab (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 23 05:08:36 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 23 05:08:36 localhost ceph-mon[296755]: log_channel(audit) log [DBG] : from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 23 05:08:37 localhost ceph-mon[296755]: from='mgr.27078 172.18.0.107:0/3454997775' entity='mgr.np0005626465.hlpkwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 23 05:08:37 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:08:37 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v705: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s wr, 0 op/s Feb 23 05:08:38 localhost nova_compute[280321]: 2026-02-23 10:08:38.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:08:39 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v706: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s wr, 0 op/s Feb 23 05:08:39 localhost nova_compute[280321]: 2026-02-23 10:08:39.891 280325 DEBUG 
oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:08:39 localhost nova_compute[280321]: 2026-02-23 10:08:39.892 280325 DEBUG oslo_service.periodic_task [None req-022bb347-33eb-47b6-9661-496c238197c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 23 05:08:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:08:40 localhost ceph-mgr[285904]: [progress INFO root] Writing back 50 completed events Feb 23 05:08:40 localhost ceph-mon[296755]: mon.np0005626465@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 23 05:08:41 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v707: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s wr, 0 op/s Feb 23 05:08:41 localhost nova_compute[280321]: 2026-02-23 10:08:41.520 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:08:41 localhost nova_compute[280321]: 2026-02-23 10:08:41.521 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:41 localhost nova_compute[280321]: 2026-02-23 10:08:41.521 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:08:41 localhost nova_compute[280321]: 2026-02-23 10:08:41.521 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: 
entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:08:41 localhost nova_compute[280321]: 2026-02-23 10:08:41.522 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:08:41 localhost nova_compute[280321]: 2026-02-23 10:08:41.524 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:41 localhost ceph-mon[296755]: from='mgr.27078 ' entity='mgr.np0005626465.hlpkwo' Feb 23 05:08:42 localhost podman[241086]: time="2026-02-23T10:08:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 05:08:42 localhost podman[241086]: @ - - [23/Feb/2026:10:08:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 05:08:42 localhost podman[241086]: @ - - [23/Feb/2026:10:08:42 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17825 "" "Go-http-client/1.1" Feb 23 05:08:43 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s wr, 0 op/s Feb 23 05:08:45 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v709: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s wr, 0 op/s Feb 23 05:08:45 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:08:46 localhost nova_compute[280321]: 2026-02-23 10:08:46.522 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:46 localhost 
nova_compute[280321]: 2026-02-23 10:08:46.524 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:47 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v710: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s wr, 0 op/s Feb 23 05:08:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:08:48.324 161842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 23 05:08:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:08:48.324 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 23 05:08:48 localhost ovn_metadata_agent[161837]: 2026-02-23 10:08:48.325 161842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 23 05:08:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e. Feb 23 05:08:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb. 
Feb 23 05:08:49 localhost podman[328602]: 2026-02-23 10:08:49.030078657 +0000 UTC m=+0.094788950 container health_status 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 23 05:08:49 localhost podman[328602]: 2026-02-23 10:08:49.043756495 +0000 UTC m=+0.108466788 container exec_died 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , 
managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 23 05:08:49 localhost podman[328603]: 2026-02-23 10:08:49.077676383 +0000 UTC m=+0.136901839 container health_status d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses 
microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, release=1770267347, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git) Feb 23 05:08:49 localhost systemd[1]: 4712d4acc9f38074974a155d2533b6e7e9985baad27f2fc9423ca12f03ebbd3e.service: Deactivated successfully. Feb 23 05:08:49 localhost podman[328603]: 2026-02-23 10:08:49.16784985 +0000 UTC m=+0.227075276 container exec_died d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9/ubi-minimal, version=9.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, release=1770267347, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, vcs-type=git) Feb 23 05:08:49 localhost systemd[1]: d5aa989c1220a73a966a2abfc1450787688dc11d0b093b624a73fee1925204cb.service: Deactivated successfully. 
Feb 23 05:08:49 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v711: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 23 05:08:50 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:08:51 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v712: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 23 05:08:51 localhost nova_compute[280321]: 2026-02-23 10:08:51.524 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:51 localhost nova_compute[280321]: 2026-02-23 10:08:51.526 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:08:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02. 
Feb 23 05:08:53 localhost podman[328644]: 2026-02-23 10:08:53.003010736 +0000 UTC m=+0.080712169 container health_status bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0)
Feb 23 05:08:53 localhost podman[328644]: 2026-02-23 10:08:53.044960419 +0000 UTC m=+0.122661832 container exec_died bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216)
Feb 23 05:08:53 localhost systemd[1]: bb1f7c2fafe829ec416f3b9bc2016224b5fe011671de770243b55c60572f8f02.service: Deactivated successfully.
Feb 23 05:08:53 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v713: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 23 05:08:55 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v714: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 23 05:08:55 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.094 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.095 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.096 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost ceilometer_agent_compute[236424]: 2026-02-23 10:08:56.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 23 05:08:56 localhost nova_compute[280321]: 2026-02-23 10:08:56.525 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:56.551420) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841336551509, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 713, "num_deletes": 251, "total_data_size": 1071023, "memory_usage": 1087432, "flush_reason": "Manual Compaction"}
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841336557933, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 704350, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37543, "largest_seqno": 38251, "table_properties": {"data_size": 701154, "index_size": 1115, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8123, "raw_average_key_size": 20, "raw_value_size": 694445, "raw_average_value_size": 1718, "num_data_blocks": 50, "num_entries": 404, "num_filter_entries": 404, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771841298, "oldest_key_time": 1771841298, "file_creation_time": 1771841336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 6530 microseconds, and 2806 cpu microseconds.
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:56.557976) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 704350 bytes OK
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:56.557999) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:56.559483) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:56.559501) EVENT_LOG_v1 {"time_micros": 1771841336559495, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:56.559519) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1067137, prev total WAL file size 1067137, number of live WAL files 2.
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:56.560163) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133303532' seq:72057594037927935, type:22 .. '7061786F73003133333034' seq:0, type:0; will stop at (end)
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(687KB)], [63(17MB)]
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841336560243, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 18874611, "oldest_snapshot_seqno": -1}
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 14430 keys, 17438002 bytes, temperature: kUnknown
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841336644293, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 17438002, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17358115, "index_size": 42730, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36101, "raw_key_size": 388746, "raw_average_key_size": 26, "raw_value_size": 17115238, "raw_average_value_size": 1186, "num_data_blocks": 1566, "num_entries": 14430, "num_filter_entries": 14430, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771840070, "oldest_key_time": 0, "file_creation_time": 1771841336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "99129c6f-4568-4fd0-9cbc-0028c2eeda30", "db_session_id": "Q93LGWE7XWLY0N7QX9GA", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:56.644764) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 17438002 bytes
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:56.646558) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.9 rd, 206.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 17.3 +0.0 blob) out(16.6 +0.0 blob), read-write-amplify(51.6) write-amplify(24.8) OK, records in: 14952, records dropped: 522 output_compression: NoCompression
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:56.646586) EVENT_LOG_v1 {"time_micros": 1771841336646574, "job": 38, "event": "compaction_finished", "compaction_time_micros": 84295, "compaction_time_cpu_micros": 50886, "output_level": 6, "num_output_files": 1, "total_output_size": 17438002, "num_input_records": 14952, "num_output_records": 14430, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841336646808, "job": 38, "event": "table_file_deletion", "file_number": 65}
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005626465/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771841336649585, "job": 38, "event": "table_file_deletion", "file_number": 63}
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:56.560041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:56.649631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:56.649637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:56.649640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:56.649643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:08:56 localhost ceph-mon[296755]: rocksdb: (Original Log Time 2026/02/23-10:08:56.649646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 23 05:08:57 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v715: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 23 05:08:59 localhost sshd[328669]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 05:08:59 localhost systemd-logind[759]: New session 73 of user zuul.
Feb 23 05:08:59 localhost systemd[1]: Started Session 73 of User zuul.
Feb 23 05:08:59 localhost ovn_controller[155966]: 2026-02-23T10:08:59Z|00455|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Feb 23 05:08:59 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v716: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 23 05:08:59 localhost python3[328691]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-f788-b23e-00000000000c-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 23 05:09:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 23 05:09:00 localhost ceph-osd[31709]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 17K writes, 63K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.00 MB/s#012Cumulative WAL: 17K writes, 5817 syncs, 3.04 writes per sync, written: 0.04 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9236 writes, 30K keys, 9236 commit groups, 1.0 writes per commit group, ingest: 15.75 MB, 0.03 MB/s#012Interval WAL: 9236 writes, 3695 syncs, 2.50 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 23 05:09:00 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 23 05:09:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.
Feb 23 05:09:01 localhost systemd[1]: tmp-crun.DOzKc3.mount: Deactivated successfully.
Feb 23 05:09:01 localhost podman[328694]: 2026-02-23 10:09:01.011558741 +0000 UTC m=+0.080900246 container health_status 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 23 05:09:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.
Feb 23 05:09:01 localhost podman[328694]: 2026-02-23 10:09:01.047007066 +0000 UTC m=+0.116348601 container exec_died 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260216, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Feb 23 05:09:01 localhost systemd[1]: 2fbdda8202a34300334db0a4a08955ffefa75c04b4b8b8db884ae01190bc74db.service: Deactivated successfully.
Feb 23 05:09:01 localhost podman[328713]: 2026-02-23 10:09:01.102332138 +0000 UTC m=+0.063068861 container health_status dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute)
Feb 23 05:09:01 localhost podman[328713]: 2026-02-23 10:09:01.140810754 +0000 UTC m=+0.101547487 container exec_died dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'a07a4436e1d1ca1c6231f11309616a78b1ad9830450b5c2d2fc3fb113cfbf838-f9cb5df06ec43ed01dbb95332757c0162b8db6bedce733674eeb204e590699b1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Feb 23 05:09:01 localhost systemd[1]: dcda27c2ebf0c1bc4e5d83cd2a932792509f47cc63d2bc8de30e14534b384e7e.service: Deactivated successfully.
Feb 23 05:09:01 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v717: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 23 05:09:01 localhost nova_compute[280321]: 2026-02-23 10:09:01.528 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 23 05:09:01 localhost openstack_network_exporter[243519]: ERROR 10:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 23 05:09:01 localhost openstack_network_exporter[243519]:
Feb 23 05:09:01 localhost openstack_network_exporter[243519]: ERROR 10:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 23 05:09:01 localhost openstack_network_exporter[243519]:
Feb 23 05:09:03 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v718: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 23 05:09:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 23 05:09:04 localhost ceph-osd[32652]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 26K writes, 95K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.01 MB/s#012Cumulative WAL: 26K writes, 9479 syncs, 2.77 writes per sync, written: 0.09 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 14K writes, 48K keys, 14K commit groups, 1.0 writes per commit group, ingest: 54.02 MB, 0.09 MB/s#012Interval WAL: 14K writes, 5841 syncs, 2.40 writes per sync, written: 0.05 GB, 0.09 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 23 05:09:05 localhost ceph-mgr[285904]: [balancer INFO root] Optimize plan auto_2026-02-23_10:09:05
Feb 23 05:09:05 localhost ceph-mgr[285904]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 23 05:09:05 localhost ceph-mgr[285904]: [balancer INFO root] do_upmap
Feb 23 05:09:05 localhost ceph-mgr[285904]: [balancer INFO root] pools ['.mgr', 'backups', 'manila_data', 'volumes', 'vms', 'manila_metadata', 'images']
Feb 23 05:09:05 localhost ceph-mgr[285904]: [balancer INFO root] prepared 0/10 changes
Feb 23 05:09:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections..
Feb 23 05:09:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: []
Feb 23 05:09:05 localhost systemd[1]: session-73.scope: Deactivated successfully.
Feb 23 05:09:05 localhost systemd-logind[759]: Session 73 logged out. Waiting for processes to exit.
Feb 23 05:09:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.
Feb 23 05:09:05 localhost systemd-logind[759]: Removed session 73.
Feb 23 05:09:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections..
Feb 23 05:09:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:09:05 localhost podman[328733]: 2026-02-23 10:09:05.246922544 +0000 UTC m=+0.079074169 container health_status 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 23 05:09:05 localhost podman[328733]: 2026-02-23 10:09:05.260553431 +0000 UTC m=+0.092705056 container exec_died 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 23 05:09:05 localhost systemd[1]: 7c5c673de27d608e389c5da953d6bc119352580736a2ac8befea188be149541d.service: Deactivated successfully. Feb 23 05:09:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] scanning for idle connections.. Feb 23 05:09:05 localhost ceph-mgr[285904]: [volumes INFO mgr_util] cleaning up connections: [] Feb 23 05:09:05 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v719: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 23 05:09:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] _maybe_adjust Feb 23 05:09:05 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:09:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:09:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 23 05:09:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:09:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033329080297319932 of space, bias 1.0, pg target 0.6665816059463986 quantized to 32 (current 32) Feb 23 05:09:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:09:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014869268216080402 of space, bias 1.0, pg target 0.2968897220477387 quantized to 32 (current 32) Feb 23 05:09:05 localhost ceph-mgr[285904]: 
[pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:09:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 23 05:09:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:09:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 23 05:09:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:09:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 23 05:09:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 23 05:09:05 localhost ceph-mgr[285904]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0023705223164433276 of space, bias 4.0, pg target 1.8869357638888886 quantized to 16 (current 16) Feb 23 05:09:05 localhost ceph-mgr[285904]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 23 05:09:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 05:09:05 localhost ceph-mgr[285904]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 23 05:09:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 23 05:09:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 05:09:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: images, start_after= Feb 23 05:09:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 23 05:09:05 localhost ceph-mgr[285904]: 
[rbd_support INFO root] load_schedules: images, start_after= Feb 23 05:09:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 05:09:05 localhost ceph-mgr[285904]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 23 05:09:06 localhost nova_compute[280321]: 2026-02-23 10:09:06.529 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:09:06 localhost sshd[328757]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:09:07 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v720: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 23 05:09:09 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v721: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 23 05:09:10 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:09:11 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v722: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 23 05:09:11 localhost nova_compute[280321]: 2026-02-23 10:09:11.532 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:09:12 localhost podman[241086]: time="2026-02-23T10:09:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 23 05:09:12 localhost podman[241086]: @ - - [23/Feb/2026:10:09:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154070 "" "Go-http-client/1.1" Feb 23 05:09:12 localhost podman[241086]: @ - - [23/Feb/2026:10:09:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 
17828 "" "Go-http-client/1.1" Feb 23 05:09:13 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v723: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 23 05:09:15 localhost ceph-mgr[285904]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 23 05:09:15 localhost ceph-mon[296755]: mon.np0005626465@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 23 05:09:16 localhost nova_compute[280321]: 2026-02-23 10:09:16.533 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 23 05:09:16 localhost nova_compute[280321]: 2026-02-23 10:09:16.535 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:09:16 localhost nova_compute[280321]: 2026-02-23 10:09:16.535 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 23 05:09:16 localhost nova_compute[280321]: 2026-02-23 10:09:16.535 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:09:16 localhost nova_compute[280321]: 2026-02-23 10:09:16.536 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 23 05:09:16 localhost nova_compute[280321]: 2026-02-23 10:09:16.539 280325 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 23 05:09:17 localhost sshd[328759]: main: sshd: ssh-rsa algorithm is disabled Feb 23 05:09:17 localhost 
systemd-logind[759]: New session 74 of user zuul. Feb 23 05:09:17 localhost systemd[1]: Started Session 74 of User zuul.